• bassomitron@lemmy.world · 10 months ago

    This was inevitable; I’m not sure why it’s newsworthy. ChatGPT blew up because it brought LLM tech to the masses in an easily accessible way and was novel at the mainstream level.

    The majority of people don’t have a day-to-day use for chatbots, especially ones as censored and outdated as ChatGPT (its dataset is over two years old). Casual users would want it for simple stuff like quickly summarizing current events, or as a Google-search-like repository of info, but you can’t use it for that when even seemingly innocuous queries/prompts are met with ChatGPT scolding you for being offensive or reminding you that its dataset isn’t current. Sure, it was fun to have it make your grocery lists and workout plans, but that novelty wears off because it’s not all that practical.

    I think LLMs in the form of ChatGPT will truly become ubiquitous when they can train in real time on up-to-date data, and since that’s very unlikely to happen in the near future, I think OpenAI has quite a bit of progress left to make before its next breakout moment. Sora did wow the mainstream (anyone in the AI scene has been aware of AI-generated video for a while now), but OpenAI has already said they’re not making it publicly available for now, which is a good thing for obvious reasons until strict safety measures are in place.

    • paddirn@lemmy.world · 10 months ago

      I’m genuinely surprised anytime I get anything remotely useful from any of the AI chatbots out there. About half the responses are beyond basic-level shit that I could’ve written on my own or just found by Googling, or they’re just plain wrong. It’s almost useless for important, fact-based information if you can’t trust any of its responses, so the only things it’s good for are brainstorming creative ideas or porn, and the majority of them won’t touch anything even mildly titillating. You’re just left with an overly sensitive chatbot where crafting a good prompt takes about as much work as writing the answer out yourself.

      I tried playing a game of 20 Questions with one of them (my word was “donkey”; it was way off and even cheated a bit), and it kind of scolded me at the end because I told it the thing wasn’t bigger than a house, as if I was the one who got that fact wrong.

      • rottingleaf@lemmy.zip · 10 months ago

        Same. I don’t like my own habit of compulsively writing long, nervous texts, but the side effect is that I can write most of what people want from LLMs myself, faster and more easily.

        • paddirn@lemmy.world · 10 months ago

          I tend to overwrite in everything I do; I actually kind of enjoy writing. Coming up with ideas and concepts is probably one of the more enjoyable parts of the creative process for me. It’s about exploring possibilities and discovery, and you never know where you’ll end up. Brainstorming is partly about making connections between seemingly unrelated things. Having a chatbot blurt out a bunch of lazy, half-formed ideas seems counterproductive to me; it taints the pool of ideas before you’ve even started, and you end up sifting through lazy ideas just to find anything of value.

          The image generation stuff is fun, though; it’s interesting what it comes up with sometimes. But the LLM text shit just isn’t there yet.

          • rottingleaf@lemmy.zip · 10 months ago

            What you describe is fun. Being a chatbot yourself, habitually putting together words and pictures you don’t quite understand, is not, and that’s what I’m talking about.

    • blargerer@kbin.social · 10 months ago

      The P in GPT is Pretrained; it’s core to the architecture’s design. You would need some other ANN design if you wanted it to continuously update, and there’s a reason we don’t use those at scale at the moment: they scale much worse than pretrained transformers.
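      Roughly, the distinction looks like this (a toy PyTorch sketch, not any actual GPT code): in normal deployment the pretrained weights are frozen and only read, while “training in real time” would mean running a backprop update on every new example, with all the optimizer state, compute cost, and forgetting problems that brings.

      ```python
      import torch
      import torch.nn as nn

      model = nn.Linear(16, 16)  # stand-in for a pretrained model

      # Pretrained deployment: weights are frozen; inference only reads them.
      model.eval()
      with torch.no_grad():
          _ = model(torch.randn(1, 16))

      # Continuous updating would mean an optimizer step per new example,
      # which needs gradients plus optimizer state and risks forgetting.
      model.train()
      optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
      for x, y in [(torch.randn(1, 16), torch.randn(1, 16))]:
          optimizer.zero_grad()
          loss = nn.functional.mse_loss(model(x), y)
          loss.backward()
          optimizer.step()
      ```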

    • FaceDeer@kbin.social · 10 months ago

      It’s not exactly training, but Google just recently previewed an LLM with a million-token context that can do effectively the same thing. One of the tests they ran was to put a dictionary for a very obscure language (only 200 speakers worldwide) into the context, knowing that nothing about that language was in its original training data, and the LLM was able to translate it fluently.
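      For what it’s worth, the trick is just in-context learning at a huge context size; a minimal sketch of the idea, assuming the google-generativeai Python client (the model name, filename, and prompt here are placeholders, and the exact API may differ):

      ```python
      import google.generativeai as genai

      genai.configure(api_key="YOUR_API_KEY")
      model = genai.GenerativeModel("gemini-1.5-pro")  # long-context model (assumed name)

      # Hypothetical reference material: a grammar book and dictionary for the
      # language, dropped straight into the prompt rather than into training.
      with open("grammar_and_dictionary.txt") as f:
          reference = f.read()

      prompt = (
          "Using only the grammar book and dictionary below, translate the final "
          "sentence into English.\n\n"
          f"{reference}\n\n"
          "Sentence: ...\n"
      )
      print(model.generate_content(prompt).text)
      ```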

      OpenAI has already said they’re not making that publicly available for now

      This just means that OpenAI is voluntarily ceding the field to more ambitious companies.

      • bassomitron@lemmy.world · 10 months ago

        Gemini is definitely poised to bury ChatGPT if its real-world performance lives up to the curated examples they’ve demonstrated thus far. As much as I dislike that it’s Google, I’m still interested to try it out.

        This just means that OpenAI is voluntarily ceding the field to more ambitious companies.

        Possibly. Text-to-video has been experimented with for the last year by lots of hobbyists and other teams, but the results have been mostly underwhelming. Sora’s examples were pretty damn impressive, but I’ll reserve judgment until I see more output from everyday users rather than cherry-picked demos. If it can deliver that level of quality consistently, I don’t see another model catching up for another year or so.

        • FaceDeer@kbin.social · 10 months ago

          Sora’s capabilities aren’t really relevant to the competition if OpenAI isn’t allowing it to be used, though. All it does is let the actual competitors know what’s possible if they try, which can make it easier to get investment.

          • bassomitron@lemmy.world · 10 months ago

            They are allowing people and companies to use it; it’s just limited access. I don’t think it’s a good idea for them to open it to the public without plenty of safeguards. Deepfakes are becoming way, way too easy to manufacture nowadays, and I’m in no hurry to throw even more gasoline on a fire that’s already out of control.