• HeyListenWatchOut@lemmy.world · edited · 4 months ago

    I hate to break this to everyone who thinks that “AI” (i.e. LLMs) is some sort of actual approximation of intelligence, but in reality, it’s just a fucking fancy-ass parrot.

    Our current “AI” doesn’t understand anything and has no context; it’s just really good at guessing what we want it to say… essentially in the same way that a parrot says “Polly wanna cracker.”

    A parrot “talking to you” doesn’t know that Polly refers to itself, or that a cracker is a specific type of food you’re describing to it. If you were to ask it, “which hand was holding the cracker…?”, it wouldn’t be able to answer the question… because it doesn’t fucking know what a hand is… or what playing a game even means, or what a “question” is in the first place.

    It just knows that if it makes its mouth go “blah blah blah” in a very specific way, a human is more likely to give it a tasty treat… so it mushes its mouth parts around until its squawk becomes a sound that reliably gets that reward out of the human in front of it… which is pretty much how training an LLM works.

    Oversimplification, but that’s basically it… a trillion-dollar power-grid-straining parrot.
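
    A rough toy sketch of what I mean, since “training” sounds fancier than it is (toy data, made-up variable names, nothing remotely like a real model’s scale): the “parrot” below only ever learns which word tends to follow which, by nudging numbers until the training text gets more likely. At no point does it learn what any of the words mean.

    ```python
    # Toy "parrot": a bigram model trained by gradient descent.
    # The only "reward" is a lower loss; meaning never enters into it.
    import numpy as np

    text = "polly wanna cracker polly wanna cracker"
    words = text.split()
    vocab = sorted(set(words))
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    # logits[i, j] = the model's urge to squawk word j right after word i
    logits = np.zeros((V, V))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # "Training": for every observed pair, push probability toward the
    # word that actually came next. That's it. That's the treat.
    for _ in range(200):
        for prev, nxt in zip(words, words[1:]):
            p = softmax(logits[idx[prev]])
            p[idx[nxt]] -= 1.0              # cross-entropy gradient
            logits[idx[prev]] -= 0.1 * p

    # Now it parrots the pattern back, with zero idea what a cracker is.
    w = "polly"
    out = [w]
    for _ in range(5):
        w = vocab[int(np.argmax(logits[idx[w]]))]
        out.append(w)
    print(" ".join(out))  # polly wanna cracker polly wanna cracker
    ```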

    And just like a parrot - the concept of “I don’t know” isn’t a thing it comprehends… because it’s a dumb fucking parrot.
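
    And to see why “I don’t know” basically never falls out on its own: the very last step of one of these things is a softmax over its whole vocabulary, so something always gets picked, even when every option is basically a coin flip. There’s no built-in “abstain” unless somebody bolts one on afterwards. (Toy numbers below, obviously not a real model.)

    ```python
    # Toy illustration: a clueless model still has to answer something.
    import numpy as np

    vocab = ["yes", "no", "paris", "42"]
    logits = np.array([0.10, 0.05, 0.02, 0.08])    # nearly flat = no clue

    probs = np.exp(logits) / np.exp(logits).sum()
    print(dict(zip(vocab, probs.round(3).tolist())))   # all roughly 0.25
    print("it still answers:", vocab[int(np.argmax(probs))])
    ```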

    The only thing the tech is good at… is mimicking.

    It can “trace the lines” of any existing artist in history, and even blend their works, which is indeed how artists learn initially… but an LLM has nothing that can “inspire” it to create the art… because it’s just tracing the lines the way a child would trace their favorite comic book character. That’s not art. It’s mimicry.

    It can be used to transform your own voice so that you sound almost perfectly like most celebrities… it can make the mouth noises, but it has no idea what it’s actually saying… like the parrot.

    You get it?

    • BluesF@lemmy.world · 4 months ago

      LLMs are just that - Ms, that is to say, models. And trite as it is to say - “all models are wrong, some models are useful”. We certainly shouldn’t expect LLMs to do things that they cannot do (i.e. possess knowledge), but it’s clear that they can do other things surprisingly effectively, particularly providing coding support to developers. Whether they do enough to warrant their energy/other costs remains to be seen.