Twitter strictly prohibits external parties from using its data for AI training, yet it freely uses data created by others for the same purpose.

  • brsrklf@jlai.lu
    1 year ago

    Yet another reminder that an LLM is not “intelligence” by any common definition of the term. The thing just scraped another LLM’s responses and parroted them as its own, even though they were completely irrelevant to itself. All with an answer that sounds like it knows what it’s talking about, copying the simulated “personal involvement” of the source.

    In this case, sure, who cares? But the problem is that something sold by its designers as an expert of sorts is in reality prone to making shit up or using bad sources, all wrapped in a very good language simulation that sounds convincing enough.

    • Hyperreality@kbin.social
      1 year ago

      Meat goes in. Sausage comes out.

      The problem is that LLMs are being sold as being able to turn meat into a Black Forest gateau.

      • brsrklf@jlai.lu
        1 year ago

        Absolutely true. But I suspect the problem is that the thing is too expensive to make to be sold as a sausage, so if they can’t make it look like a tasty confection they can’t sell it at all.

    • CaptainSpaceman@lemmy.world
      1 year ago

      Soon enough AI will be answering questions with only its own previous answers, meaning any flaws are inherited by all future answers.

      • samus7070@programming.dev
        1 year ago

        That’s already happening. What’s more, training an LLM on LLM-generated content degrades the model — a phenomenon researchers have dubbed “model collapse”. It’s becoming a mess.
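
A toy sketch of why recursive training degrades a model: repeatedly fit a categorical “model” to a finite sample of its own output, and outputs that ever drop to zero probability can never come back. All numbers here (vocabulary size, corpus size, generation count) are made up purely for illustration.

```python
import random
from collections import Counter

def resample_and_fit(probs, n):
    # Generate n tokens from the current "model", then train the next
    # "model" by simple frequency counting on that synthetic corpus.
    tokens = random.choices(range(len(probs)), weights=probs, k=n)
    counts = Counter(tokens)
    return [counts.get(i, 0) / n for i in range(len(probs))]

random.seed(42)
vocab = 50
probs = [1 / vocab] * vocab  # generation 0: trained on diverse "human" data

support = [sum(p > 0 for p in probs)]
for _ in range(200):
    probs = resample_and_fit(probs, n=30)  # small corpus each generation
    support.append(sum(p > 0 for p in probs))

# A token that ever drops to probability 0 can never be sampled again,
# so diversity only shrinks as generations of synthetic data pile up.
print(f"distinct tokens: {support[0]} -> {support[-1]}")
```

The shrinkage is one-way by construction, which is the flavor of the degradation people describe: tails of the original distribution get lost first.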

        • assassin_aragorn@lemmy.world
          1 year ago

          It’s self-correcting in that way, at least. If AI generation runs rampant, it’ll be kept in check by this phenomenon.

    • Fades@lemmy.world
      1 year ago

      Anyone who needs reminding that LLMs are not intelligent has bigger problems

    • MotoAsh@lemmy.world
      1 year ago

      It’s capitalism, Jim. They can make more profits by stripping humanity from humans.

    • brsrklf@jlai.lu
      1 year ago

      That’s not what they say happened, and I don’t think people at OpenAI would have answered like they did if it were the case. Grok finding answers that users got from OpenAI and recycling them seems plausible enough.

      Years ago there was something a bit similar, when Bing was suspected of copying Google results. And actually, yeah, sort of: the Bing toolbar that some people installed in their browser was sending data to Microsoft, so Microsoft could identify better results and integrate them into Bing.

      Obviously, some of these better results came from toolbar users who were searching on Google.

      Someone at Google actually proved it was happening by setting up a nonsensical query with a planted result in Google, searching for it a bit with the toolbar on, and checking that the same result would then appear in Bing.

  • Lophostemon@aussie.zone
    1 year ago

    Wait until similar code starts being unearthed in Teslas etc.

    Musk isn’t a genius, he’s a thief of IP.

  • Skeptomatic@lemmy.world
    1 year ago

    Mistral Dolphin 2.1 said the same to me once. They use GPT-4 for the reinforcement step so they don’t have to pay humans, and that sentence must slip in there more often than they bother to check for.

  • rsuri@lemmy.world
    1 year ago

    I can buy that this was accidental, because that answer is way less direct/relevant than what ChatGPT would provide. The guy asked for malicious code, and Grok described how to avoid malicious code.

    And then he asks if there’s a policy preventing Grok from doing that, and Grok answers with the policy that prevents ChatGPT from providing malicious code. Seems pretty consistently wrong.

  • donuts@kbin.social
    1 year ago

    AI is looking like the biggest bubble in tech history and stuff like this really ain’t helping.

    • Draedron@lemmy.dbzer0.com
      1 year ago

      AI at least has a good chance of becoming a big thing in some areas. NFTs were the bigger bubble, and just a straight-up scam.

    • bassomitron@lemmy.world
      1 year ago

      I think you’re underestimating how much AI is already used in enterprise. It’s got enormous potential and any tech company ignoring it is just shooting themselves in the foot. ChatGPT isn’t the only type of AI.

      • kpw@kbin.social
        1 year ago

        ChatGPT is the kind of AI that’s hyped now. Other kinds of AI (formerly known as statistics) have been used for decades. Oh, you can learn the parameters of some function from the data? Must be AI.
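
The jab lands because “learning the parameters of some function from the data” really is textbook statistics. A minimal sketch, with made-up toy numbers: ordinary least squares for a line y = a·x + b, the kind of parameter fitting that predates the AI label by a century.

```python
# "Learning the parameters of some function from the data": ordinary
# least squares for a line y = a*x + b -- decades-old statistics.
# The data points below are made up purely for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]  # roughly y = 2x + 1 plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form estimates: slope = cov(x, y) / var(x), intercept follows.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(f"fitted: y = {a:.2f}*x + {b:.2f}")
```

Whether you call this regression, machine learning, or AI is mostly a question of the decade and the marketing budget.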

      • donuts@kbin.social
        1 year ago

        I don’t think I am.

        The internet had a ton of legitimate and potential users too, but that didn’t prevent the dot com bubble from bursting.

        Not only is AI built on a shaky house of cards of stolen IP and unlicensed writing, artwork, music and other data, but there are also way too many players in the space, and an amount of investment that, in my opinion, goes way beyond the reality of what AI can achieve.

        Whether AI is a bubble or not has more to do with the hype economy around it than the technology itself.

      • banneryear1868@lemmy.world
        1 year ago

        E-discovery and market-simulation tools have basically been using these sorts of models for a long time. I think “AI” is a misnomer, more of a branding/marketing term reserved for the latest iteration of these tools: what used to be called “AI” gets a generic term describing its use, and the new thing becomes “AI” until the next significant improvement comes along and it happens again.

        The way people think these new language models are going to become a “real” artificial intelligence basically confirms this; it’s almost a religious belief. The mythology around this iteration of “AI” is creating hype beyond what it’s technically capable of.