College student put on academic probation for using Grammarly: ‘AI violation’

Marley Stevens, a junior at the University of North Georgia, says she was wrongly accused of cheating.

  • bool@lemm.ee · 10 months ago

    You may not agree with the policy or the tools used, but the rules were clear, and at this point she has no evidence that she did not use some other generative AI tool. It’s just her word against an AI trained to detect generated material.

    What is telling is her reaction to all of this: literally making a national news story out of being flagged as a cheater. I promise, if she weren’t white or attractive, the NY Post wouldn’t do anything. What a massive self-own. Long after she leaves school, this story will be the top hit in a Google search of her name, and she will have outed herself as a cheater.

    • Even_Adder@lemmy.dbzer0.com · 10 months ago

      You shouldn’t put too much stock in these detection tools. Not only do they not work, they flag non-native English speakers for cheating more than native speakers.

      • rottingleaf@lemmy.zip · 10 months ago

        they flag non-native English speakers for cheating more than native speakers.

        Yes, and for me as a non-native speaker it’s absolutely clear why: I’m doing the same thing as a generative model, imitating text in another language. Maybe with more practice in verbal communication, and by being more relaxed, I could reduce this probability, but the thing is, this is not something that should affect school tests at all.

        These are people trying to use a specific kind of tool where it’s fundamentally not applicable.

    • testfactor@lemmy.world · 10 months ago

      What clear rule did she violate, though? Grammarly isn’t a generative AI tool. It’s a glorified spell check. And several of her previous professors had recommended its use.

      What she did “wrong” was write something that TurnItIn decided to flag as AI-generated, and TurnItIn is incredibly far from 100% accurate at that.

      Like, what should she have done differently?

    • HACKthePRISONS@kolektiva.social · 10 months ago

      i don’t believe she cheated, but i also don’t care.

      i do think being a conventionally attractive blonde did help her get coverage.

      i also want turn it in to die in a fire.

      i’m very conflicted about your comment, but i’m not conflicted about this situation at all: stop using turn it in, and put the girl back in school.

      • rottingleaf@lemmy.zip · 10 months ago

        i do think being a conventionally attractive blonde did help her get coverage.

        No doubt, though conventions are a bit different where I live.

        If that puts pressure on something systemic, that would mean one person’s attractiveness ends up benefiting everybody.

    • j4k3@lemmy.world · 10 months ago

      I can make an offline AI say absolutely anything, in any way, shape, or form I would like. It is a tool that improves efficiency in those smart enough to use it. There is nothing about its output that is inherently different from what a human can write.

      This is as stupid as all of the teachers who prevented us from using calculators for math 20 years ago. We should be encouraging everyone to adapt to and adopt new technology that improves efficiency, and take on the real task of testing students with intelligent, adaptive techniques. The antiquated mindset in academia is the problem. Anyone who can’t adapt should be removed. When students enter the workforce, their use of such efficiency-improving tools is critical.

      • mriormro@lemmy.world · 10 months ago (edited)

        Writing a paper isn’t about efficiency; it’s about forcing you to synthesize concepts and ideas so that they become more concrete in your mind. It is, in itself, the learning tool. It isn’t something to be checked off and churned through like a widget you make at a factory.

        Your comment just sounds like you lack, I don’t know, care with regard to learning.

        • j4k3@lemmy.world · 10 months ago

          You need to sit down with an offline LLM and learn what they can actually do. It is not good at doing the work for you. It is excellent at helping you explore yourself in countless ways you can never access on your own. It can answer all of the questions you don’t quite understand as you try and navigate a new subject. It is easily able to amplify and accelerate the learning process. It can be abused like anything, but there is nothing new about that.

          The articles and framing of AI as something bad are all coming from manipulation of the media by Altman and company. It is about trying to control the next tech monopoly that will dominate the next decade. It is already too late for that, though. Open-source offline AI will beat what OpenAI has tried to control. Yann LeCun is the person to watch in this space. He is a Bell Labs alumnus pushing open-source AI as the head of Meta AI. If you know anything about the current digital age, that combination, someone from the old Bell Labs pushing open source to lead an industry without trying to monopolize it, should mean a great deal.

          AI is not really super capable like some kind of AGI. With complex tasks it is about as helpful as Stack Overflow or old forum threads. It is also a mirror of both the dataset’s culture and the person writing the prompts; it is only as good as your vocabulary and your ability to understand its idiosyncrasies while communicating with a level of openness that humans are not accustomed to. This is an evolved tool. It is not AGI. It is not persistent, and it cannot learn on its own. There are very real limitations on how much information can be processed at once, and on niche information.

          This is no time to be a Luddite. It is still an order of magnitude less capable than a human, but it offers access to tailored information on a level that has only been available to the super rich who hire tutors for their children and make major donations to institutions: the real “cheating” of the system, which you will never be able to object to.

          I greatly value learning, so much so that I jumped at the opportunity to have custom-tailored learning the second I had the chance. It ended up being even better than I expected. There are scientific models, and several ways to set up a model with your own documents so that it can answer questions and cite sources.
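          The “answer questions from your own documents” setup described here is usually retrieval plus a local model: find the passage most relevant to a question, then hand it to the model as cited context. A minimal sketch of the retrieval half, using bag-of-words cosine similarity as a stand-in for a real embedding model (the function names are illustrative, not from any particular library):

```python
# Retrieval sketch: score each passage against a question and return the
# best match, which would then be pasted into a local model's prompt as
# cited context. Bag-of-words cosine similarity stands in for embeddings.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_passage(question: str, passages: list) -> str:
    """Return the passage most similar to the question."""
    q = Counter(question.lower().split())
    return max(passages, key=lambda p: cosine(q, Counter(p.lower().split())))
```

          A real setup would swap the word-count vectors for embedding vectors from a local model, but the retrieve-then-prompt shape is the same.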

    • theneverfox@pawb.social · 10 months ago

      … No proof she didn’t? What could possibly prove that?

      Can you give me an example of this proof? And if so, is that something reasonable for a student to have?

      Seriously, think it through.

      • medgremlin@midwest.social · 10 months ago (edited)

        If you write something in Word or an equivalent program, the saved file carries metadata with creation and edit timestamps. If you use something like Google Docs, there’s a very similar mechanism in the version history. I actually had the metadata from a Word document be useful in a legal case.
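        For what it’s worth, those Word timestamps are easy to inspect yourself: a .docx file is a ZIP archive, and the creation/edit times live in docProps/core.xml. A minimal sketch (the filename is hypothetical):

```python
# A .docx file is a ZIP archive; the creation/modification timestamps
# live in the docProps/core.xml entry as dcterms:created / dcterms:modified.
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dcterms": "http://purl.org/dc/terms/",
}

def office_timestamps(path):
    """Return (created, modified) timestamp strings from a .docx file."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    created = root.findtext("dcterms:created", namespaces=NS)
    modified = root.findtext("dcterms:modified", namespaces=NS)
    return created, modified
```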

        • theneverfox@pawb.social · 10 months ago

          Ok, and that’s proof of what exactly? That you made the file when you said you did?

          Not to mention, you can set those to whatever value you want.

          I can see how it could be part of a court case, because it’s one more little corroborating detail. It doesn’t prove anything though
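          A minimal illustration of how weak that evidence is: filesystem timestamps can be rewritten with one standard-library call (and the timestamps inside a .docx are plain XML, just as editable):

```python
# Filesystem timestamps are trivially rewritable: this backdates a file's
# access/modification times to any moment you choose.
import os
from datetime import datetime

def backdate(path, when: datetime):
    """Set a file's access and modification times to `when`."""
    ts = when.timestamp()
    os.utime(path, (ts, ts))  # (atime, mtime)
```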

            • theneverfox@pawb.social · 10 months ago

              A quick search shows you can edit this as well… That is interesting though, I didn’t know it existed

              Give me a couple hours and I could build something that makes pastes appear to be keystrokes. Give me a weekend, and I can build something mathematically indistinguishable from a human typing that will hold up to intense scrutiny

              It still doesn’t prove anything; it’s just one more piece of circumstantial evidence. Still, it’s not unreasonable to paste the full text into it, or mix and match. Maybe you don’t have Word installed on your computer. I don’t, and haven’t since I was in school myself. It’s reasonable to use Word on school computers but do all of the work in an online text editor, then paste it into Word on a school computer.
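              A toy sketch of the pastes-as-keystrokes point: generating “keystroke” events with human-plausible timing takes a few lines. The log-normal delay model below is a rough assumption, not a validated model of typing, which is exactly why such logs can’t serve as proof:

```python
# Replay pasted text as individual "keystrokes" with human-plausible
# timing. Inter-key delays are drawn from a log-normal distribution
# (median around 150 ms), a rough assumption about typing rhythm.
import random

def fake_keystroke_log(text, seed=None):
    """Return (char, delay_ms) pairs mimicking a human typing rhythm."""
    rng = random.Random(seed)
    log = []
    for ch in text:
        delay = rng.lognormvariate(5.0, 0.45)  # exp(5.0) ~ 148 ms median
        if ch == " ":
            delay *= 1.3  # slight pause at word boundaries
        log.append((ch, round(delay)))
    return log
```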

    • rottingleaf@lemmy.zip · 10 months ago

      You may not agree with the policy or the tools used, but the rules were clear,

      OK, then be consistent and agree that using Tarot cards to determine who’s cheating is normal, if the rules say so.

      and at this point she has no evidence that she did not use some other Generative AI tool

      Your upbringing lacks in some key regards.

      It’s just her word against another AI that is trained to detect generated material.

      There are (or should be) thresholds for how accurate a tool must be before it can be trusted. If it is wrong in 1% of cases, its use is unacceptable. At 0.1%, acceptable only if she doesn’t contest it. At something like 0.01%, acceptable with some other good evidence.

      I’ll help you become a bit less of an ape and inform you that an “AI” (or anything based on machine learning) can’t be used as the sole detector of anything at all.
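      The error-rate argument can be made concrete with base-rate arithmetic: when most submissions are honest, even a small false-positive rate means most flags are false. The class size and rates below are illustrative assumptions, not measured figures for any real detector:

```python
# Why a "99% accurate" detector is unacceptable as sole evidence:
# among mostly-honest submissions, false flags swamp true ones.
def flag_precision(n_students, cheat_rate, true_pos_rate, false_pos_rate):
    """Fraction of flagged essays that were actually AI-generated."""
    cheaters = n_students * cheat_rate
    honest = n_students - cheaters
    true_flags = cheaters * true_pos_rate    # cheaters correctly caught
    false_flags = honest * false_pos_rate    # honest students wrongly flagged
    return true_flags / (true_flags + false_flags)

# With 1000 essays, 2% cheaters, 95% detection, and a 1% false-positive
# rate, roughly a third of all flagged students are innocent.
```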