• lunarul@lemmy.world · 10 months ago

      https://www.lifewire.com/strong-ai-vs-weak-ai-7508012

      Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist only in sci-fi movies.

      Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.

      https://en.m.wikipedia.org/wiki/Artificial_general_intelligence

      As of 2023, complete forms of AGI remain speculative.

      Boucher, Philip (March 2019). How artificial intelligence works

      Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.

      https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf

      AGI represents a level of power that remains firmly in the realm of speculative fiction to date.

      • masonlee@lemmy.world · 10 months ago

        Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.

        • lunarul@lemmy.world · 10 months ago

          See the sources above, and there are many more. We don't need one or two breakthroughs; we need a complete paradigm shift. We don't even know where to start with AGI. There's a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive strides in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve; the two are completely separate avenues of research. Weak AI is still just advanced algorithms, and you can't get AGI with code alone. We'll need a completely new type of hardware for it.

          • masonlee@lemmy.world · 10 months ago

            Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But lately, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. For it to count as AGI for you, would you want to see the addition of continuous learning and agentification? (Or are you looking for "consciousness"?)

            That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!

            • lunarul@lemmy.world · 10 months ago

              Deep learning did not shift any paradigm; it's just more advanced programming. And gen AI is not intelligence, it's just really well-trained ML. ChatGPT can generate text that looks true and relevant, and that's its goal. It doesn't have to be true or relevant, it just has to look convincing. And it does. But there's no form of intelligence at play there, just advanced ML models taking an input and guessing the most likely output.
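
              To illustrate what I mean by "guessing the most likely output", here's a toy sketch (Python) of the one step these models repeat endlessly: score every token in a vocabulary, then emit a likely continuation. The three-word vocabulary and the scores are made up for illustration; a real model computes its scores from learned weights over a vocabulary of tens of thousands of tokens.

              ```python
              import math

              vocab = ["true", "convincing", "banana"]  # hypothetical tiny vocabulary
              logits = [2.0, 3.5, -1.0]                 # made-up raw scores for the next word

              # softmax: turn raw scores into a probability distribution
              exps = [math.exp(x) for x in logits]
              probs = [e / sum(exps) for e in exps]

              # greedy decoding: emit the single most likely token
              best = max(range(len(vocab)), key=lambda i: probs[i])
              print(vocab[best], round(probs[best], 2))  # -> convincing 0.81
              ```

              Notice there's no notion of "true" anywhere in that loop, only "most probable".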

              Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines

              What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don't actually understand the output they produce; that's why they so often contradict themselves. The algorithms will keep being fine-tuned to make fewer such mistakes, but that won't change what gen AI is at its core. You can't teach ChatGPT how to play chess, or a new language, or music. The same model can be trained to do one of those tasks instead of chatting, but that's not how intelligence works.

              • masonlee@lemmy.world · 10 months ago

                Hi! Thanks for the conversation. I'm aware of the 2022 survey referenced in the article. Notably, in only one year's time, expected timelines have shortened significantly. Here is that survey author's latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar).

                I consider Deep Learning to be new, and a paradigm shift, because only recently have we had the compute to prove its effectiveness. And the Transformer architecture enabling LLMs is from 2017. I don't know what counts as new for you. (Also, I wouldn't myself call it "programming" in the traditional sense; with neural nets we're more "growing" AI, but you probably know this.)
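
                To show what I mean by "growing", here's a minimal sketch with made-up toy data and a single-weight model (real nets have billions of weights, but the principle is the same): nobody writes the rule, a weight just gets nudged until the error shrinks, and the behavior emerges from the data.

                ```python
                data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy samples following y = 2x
                w = 0.0                                      # the single "weight" being grown

                for epoch in range(100):
                    for x, y in data:
                        pred = w * x
                        grad = 2 * (pred - y) * x            # gradient of squared error wrt w
                        w -= 0.01 * grad                     # gradient-descent step

                print(round(w, 3))  # ~2.0: "double the input" was learned, never programmed
                ```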

                If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you, and think Hinton and others are correct when they argue there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don't doubt that additional systems will be developed to improve or add reasoning and planning to AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don't know when the breakthroughs will come. Maybe it's "Tree of Thoughts", maybe it's something else. Things are moving fast. (And we're already at the point where AI is used to improve next-gen AI.)

                At any rate, I believe my initial point remains regardless of one's timelines: it is the goal of the top AI labs to create AGI. To me, this is fundamentally a dangerous mission because of concerns raised in papers such as "Natural Selection Favors AIs over Humans". (Not to mention the concerns raised in "An Overview of Catastrophic AI Risks", many of which apply to even today's systems.)

                Cheers and wish us luck!

                • Rhaedas@kbin.social · 10 months ago

                  There are two dangers in the current race to AGI and in developing the inevitable ANI products along the way. One is that advancement and profit are the goals, while concern for AI safety and alignment in case of success has taken a back seat (if it's even considered anymore).

                  The second is that we don't even have to succeed at AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it's still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any media well enough to be believable, even against other AI testing tools. We're just getting to that point: the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some text-generation testers into thinking its stories were 100% human-created.

                  In short, we don't need full general AI to end up with catastrophe; we'll easily do it ourselves with the "lesser" ones. Which will really fuel things if AGI comes along and sees what we've done.

        • conciselyverbose@kbin.social · 10 months ago

          This is like saying putting logs on a fire is “one or two breakthroughs away” from nuclear fusion.

          LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It’s a dead end, and a bad one.