• huginn@feddit.it · +182/-9 · 1 year ago

    Friendly reminder that your predictive text, while very compelling, is not alive.

    It’s not a mind.

    • Poggervania@kbin.social · +86/-3 · 1 year ago

      Cyberpunk 2077 sorta explores this a bit.

      There’s a vending machine that has a personality and talks to people walking by it. The quest chain basically has you and the vending machine chatting a bit, with you even giving him some advice on a person he has a crush on. You eventually become friends with this vending machine.

      Just when it seems increasingly apparent that it’s an AI developing sentience, it turns out the vending machine simply has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

      So, to reiterate what you said: predictive text and LLMs are neither alive nor minds.

      • billwashere@lemmy.world · +22 · 1 year ago

        Which is why the Turing Test needs to be updated. These text models are getting really good at fooling people.

        • bionicjoey@lemmy.ca · +18 · 1 year ago

          The Turing test isn’t just that there exists some conversation you can have with a machine where you wouldn’t know it’s a machine. The Turing test is that you could spend an arbitrary amount of time talking to a machine and never be able to tell. ChatGPT doesn’t come anywhere close to this, since there are many subjects where it quickly becomes clear that the model doesn’t understand the meaning of the text it generates.

          • Corgana@startrek.website · +7 · 1 year ago

            Exactly, thank you for pointing this out. It also assumes that the tester has knowledge of the wider context in which the test exists. GPT could probably fool someone from the Middle Ages, but that person wouldn’t know anything about what exactly they were testing for.

      • penguin@sh.itjust.works · +33/-3 · 1 year ago

        Well no one can prove they have a mind to anyone other than themselves.

        And to extend that: there is obviously some way for electrical information processing to give rise to consciousness, because our brains do it, and no one knows how that is possible.

        Meaning something like a true, alien AI would probably conclude that we are not conscious and instead are just very intelligent meat computers.

        So, while there’s no reason to believe that current AI models could result in consciousness, no one can prove the opposite either.

        I think the argument currently boils down to, “we understand how AI models work, but we don’t understand how our minds work. Therefore, ???, and so no consciousness for AI”

        • treefrog@lemm.ee · +0 · 1 year ago

          Consciousness seems to arise from a need for a sense of time and space. Navigation, basically. Finding home, finding food, finding mates. I say this after decades of exploring mind-altering substances and going on a decade of nearly daily meditation practice.

          A friend of mine suggested it’s just this simple and that even worms are conscious. They’re aware of themselves to some degree, and of the when and the where. I’m sure they experience things very differently than we do, having different senses for assessing the when and where and a different neural structure for processing information from their bodies and the environment.

          So, no point here beyond that consciousness is more common than people tend to assume, and actually not that difficult to define.

          Consciousness is the sense of time and space. And most animals seem to have it. Do machines? I don’t know enough about the technology to have an educated opinion.

          • barsoap@lemm.ee · +2 · 1 year ago

            sense of time and space

            Mammalian intelligence is based on repurposing spatial mapping circuitry, but that’s not consciousness itself, that is, the Miller number: the 7±2 things we can keep in consciousness at the same time. That sense of time and space has a very specific quality to its qualia: they’re all, well, spatial. The state of “just the room, no map in it” is also part of the Buddhist Jhanas (“boundless empty space”), but there’s plenty of stuff going on in the mind that isn’t part of that – say, the pure impression of “bright” when your SO dares open the window blinds does not have a navigational “bright from the window, which is in that direction” to it; that’s an additional layer, a where, on top of the primitive what.

            My best inference is that the function of consciousness is to flexibly make connections between different parts of the whole, and that at the level of learning / writing memory rather than automatic response. It is, in fact, possible to avoid running into a lamppost without being conscious of it – been there, done that, the (let’s call it) motor cortex acting first and making me conscious afterwards, as if to ask “have I been a good boy?”. That is, it’s actually a quite passive process, being thrown left and right by systems wanting to make some connection, and shouldn’t be equated with will at all.

            • treefrog@lemm.ee · +1 · 1 year ago

              If you’re familiar with Buddhism then you’re familiar with the six and eight consciousness models?

              Like in your lamppost example, I would argue part of you (body and eye consciousness) was quite conscious of the lamppost even if the conscious mind was paying more attention to something else. Keeping as much of the senses (including the sense of mind) in mind as you’re able to, based on the depth of your practice, and guarding against distractions away from what is happening now, is mindfulness.

              In the eight consciousness model, again in your lamppost example, we could say the seventh consciousness was occupied chasing after the past or future, and mindfulness was barely present. Thankfully your other consciousnesses reacted and kept you safe. Manas becomes aware of this after the fact because its nature is ignorance.

              The eighth consciousness is the base. The root. It’s more fundamental than I-making – which is probably what you were doing when you nearly walked into something: thinking about what you’d be doing later. I should do some laundry when I get home, maybe?

              People mistake the sense of agency (I-making, manas, ego) for the base of consciousness. But consciousness is effortless, and grasping at me and mine takes effort; it’s just more subtle effort than most people are aware of. When this grasping stops, awareness continues. In my personal experience.

              So, I think it’s possible machines are conscious. Whether they have a sense of agency is maybe the question Western science and the media keep asking. Maybe the people asking just don’t have the models or personal experience to delineate between ego and consciousness. Hence the “we don’t even know what consciousness is” bit I keep hearing. Maybe Western science doesn’t, but human beings have been exploring these questions with the tools of Buddhist practice for 2500 years. I trust their definitions, and they passed my own smell test.

              • barsoap@lemm.ee · +1 · 1 year ago

                I would argue part of you (body and eye consciousness) was quite conscious of the lamppost even if the conscious mind was paying more attention to something else.

                That’s semantics. My major objection to that kind of definition is that it knows no bound and distinction: where do you stop assuming consciousness? Electrons are reacting to, influencing, and interacting with other electrons – is that also a form of consciousness? One could say so, but then everything is conscious, which is the same thing as saying that nothing is conscious: without anything to delineate, terms are meaningless. I prefer language such as: that which you call “body and eye consciousness” has agentive properties, it can learn, it generally wants to cooperate and be of service to the whole, such things. Lumping it in with consciousness risks confusing interpretations of messages of the thing (which is all we’re ever conscious of) for the thing itself.

                guarding against distractions away from what is happening now, is mindfulness.

                What was happening then is that I was using the way from home to the supermarket to think about code, with ample trust in mind, so that I did not fear the lamppost. What good would keeping my consciousness on the external world have done? The body/eye did not need integrative oversight, while my modelling mind very much could use a helping hand. Imposing it on the former and denying it to the latter would have been inflicting violence on myself.

                Be careful not to moralise around “distraction”. Bluntly said: when your teacher chided you for day-dreaming, you probably weren’t distracted – you were thinking about something more pertinent to your immediate development than calculus. Where discipline in directing consciousness comes into play is in keeping your mind free from neurosis, within parameters in which you use your faculties according to their nature, as well as in self-conditioning: e.g. if you’re addicted to potato chips, make sure that a) you don’t deny yourself potato chips and b) eat. every. potato. chip. with. full. consciousness. That connects the act of eating those chips to all the negative opinions you have about your behaviour, instead of it being connected only to something maladaptive. That this works, and how, is scientifically proven and neurologically explained, btw. In that sense “distraction” is a “false, incomplete sense of comfort”.

                • treefrog@lemm.ee · +1 · 1 year ago

                  Also, my last post was purely in regard to the first part of yours. I appreciate the insight into moralizing distraction and will reread it when I’m not distracted by the meat of our interesting conversation.

                • treefrog@lemm.ee · +1 · 1 year ago

                  Okay.

                  So mind consciousness trusted body/eye consciousness. I know what you mean, I dance and do this to enter flow state.

                  In the early Buddhist model consciousness would be the aggregate of the six sense consciousness. In the eight consciousness model the seventh consciousness might identify more strongly with one of these six, generally mind.

                  The store consciousness is the aggregate of all eight, and that’s what I’m arguing all experience fundamentally arises from: the perception of emptiness, i.e. no-self (consciousness itself is an aggregate and can’t be separated from its objects), and impermanence (change, or time). Sense of time and space. To be conscious is to be aware of something – movement through electrical synapses stimulated by sense impressions, even just the impression of sound from our own thoughts, or the impression of limitless space in the fifth jhana.

                  I understand your objections to assuming matter could be conscious based on this model. I think it would be inaccurate because not all matter has six sense bases and the storehouse is itself an aggregate.

                  But we are matter, and we’re conscious, so the fundamental conditions are there in some simple form – the movement of electrons, as you stated.

                  But fire is fire when it’s fire, and ash when it’s ash. Even if the potential is there we don’t say fire is already ash when it’s not.

                  • barsoap@lemm.ee · +2 · 1 year ago

                    I think it would be inaccurate because not all matter has six sense bases and the storehouse is itself an aggregate.

                    The first five are basically one, in the sense that a blind or deaf person is not fundamentally less of a human than the rest of us. The model also misses some stuff: e.g. mere touch doesn’t include proprioception or the sense of balance, and if you read it as if it did (“body sense”), then why distinguish touch from, e.g., the sense of taste? The seventh I’d say is a subsystem (and so pervasive that the Stoics allow for both preferred and unpreferred indifferents – yes, you can prefer pudding over gruel or the other way round, just don’t think it’s a virtue); the eighth is a stage of development, what you get when everything aligns well. The impression of a well-lubed machine.

                    I understand your objections to assuming matter could be conscious based on this model.

                    I generally have no real idea of where to put the line. This stuff here might help: anything less than a T3 system can’t have experience of mind (it can’t learn to learn, which requires feeding information about changes in the mind (for lack of a better term) back into the mind). OTOH, that doesn’t mean that all T3 systems are actually integrating different sources, or balancing them: if you were only ever conscious of one aspect, there could be no conflict or interaction with another aspect, and thus consciousness would serve no role (and would not evolve in the first place). It’s a matter of a required number of subsystems needing coordination, and that coordination itself having a necessary level of adaptiveness – being T3. Also, I can authoritatively say that the human mind is not made to think about this kind of stuff. It’s all maps and models; direct knowledge fails, and I’m not sure the territory can even understand the question. Look, a squirrel!

      • bionicjoey@lemmy.ca · +4/-3 · 1 year ago

        I can prove to you ChatGPT doesn’t have a mind. Just open up the Sunday Times Cryptic Crossword and ask ChatGPT to solve and explain the clues.

        • OrderedChaos@lemmy.world · +10 · 1 year ago

          I’m confused by this idea. Maybe I’m just seeing it from the wrong point of view. If you asked me to do the same thing I would fail miserably.

          • KairuByte@lemmy.dbzer0.com · +5 · 1 year ago

            Not the original intent, but you’d likely immediately throw your hands up and say you don’t know, whereas an LLM would hallucinate an answer.

          • bionicjoey@lemmy.ca · +2/-1 · 1 year ago

            But some humans can, since solving them requires a simultaneous understanding of words’ meanings as well as how they are spelled.

            • General_Effort@lemmy.world · +3/-1 · 1 year ago

              What should we conclude about most humans who cannot solve these crosswords?

              It should be relatively easy to train an LLM to solve these puzzles. I am not sure what that would show.

      • huginn@feddit.it · +4/-10 · 1 year ago

        Well, there are two options:

        Either I’m a real mind, separate and independent of you, or I’m a figment of your imagination.

        At which point you have to ask yourself: why are you so convinced you’re an unlovable and insufferable twat?

    • _NoName_@lemmy.ml · +10 · 1 year ago

      I don’t think most people will care, so long as their NPC interaction ends up compelling. We’ve been reading stories about people who don’t exist for centuries, and that’s stopped no one from sympathizing with them - and now there’s a chance you could have an open conversation with them.

      Like, I think a lot of us assume that we care about the authors who write the character dialogue, but I think most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC personality isn’t manufactured.

      Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

        • _NoName_@lemmy.ml · +1 · 1 year ago

          Lol, yeah. If generative AI text stays as shitty as it is now, then this whole discussion is moot. Whether that will be the case has yet to be seen. What is an indisputable fact, though, is that right now is the worst that generative AI will ever be again. It’s only able to improve from here.

          • Barbarian@sh.itjust.works · +1 · 1 year ago

            It’s only able to improve from here.

            That isn’t actually true. With the rise in articles, posts and comments written by these algorithms, experts are warning about model collapse. Basically, the lack of decent human-written training data will destroy future generative AI before it can even start.
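
            As a toy sketch of the mechanism (my illustration with made-up numbers, not the experts’ actual experiments): treat a “model” as nothing but a word-frequency table, and retrain each generation only on the previous generation’s output.

                import random
                from collections import Counter

                random.seed(1)

                # A toy "language model": just a word-frequency table.
                vocab = ["the", "cat", "sat", "on", "a", "quite", "peculiar", "mat"]
                weights = [30, 20, 15, 15, 10, 4, 3, 3]  # rare words live in the tail

                for generation in range(30):
                    # Each generation trains only on text sampled from the last one.
                    corpus = random.choices(vocab, weights=weights, k=100)
                    counts = Counter(corpus)
                    weights = [counts[w] for w in vocab]
                    alive = sum(1 for w in weights if w > 0)
                    print(f"gen {generation:2d}: {alive}/8 words still in the model")
                # Once a word's count hits zero it can never be sampled again,
                # so the distribution narrows toward the most common outputs.

            The real failure mode is subtler than this, but the tail-loss dynamic is the one the model-collapse warnings describe: rare human-written patterns drop out first and never come back.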

            • _NoName_@lemmy.ml · +2 · 1 year ago

              That’s an interesting point. We are seeing a similar kind of issue with search engines losing effectiveness due to search engine optimization on websites.

              So it is possible that generative AI will become enshittened.

    • MxM111@kbin.social · +11/-11 · 1 year ago

      While it is not alive, whether it is a mind is not clear-cut. It could be called a kind of mind – a mind different from that of a human.

      • Corgana@startrek.website · +1 · 1 year ago

        Sorry you’re getting downvoted, you’re correct. It’s not implausible to assume that generative AI systems may have some kind of umwelt, but it is highly implausible to expect that it would be anything resembling that of a human (or animal). I think people are getting hung up on it because they’re assuming that a lack of understanding of language implies a lack of any conscious experience. Humans do lots of things without understanding how they might be understood by others.

        To be clear, I don’t think these systems have experience, but it’s impossible to rule out until an actual robust theory of mind comes around.

      • huginn@feddit.it · +5/-4 · 1 year ago

        Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that. Mathematically proven to not show any form of emergent behavior.
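
        For concreteness, keyboard-style predictive text is essentially a next-word frequency table. A toy sketch (not any actual keyboard’s code):

            from collections import Counter, defaultdict

            def train(words):
                # Count which word follows which; this table is the whole "model".
                follows = defaultdict(Counter)
                for prev, nxt in zip(words, words[1:]):
                    follows[prev][nxt] += 1
                return follows

            def suggest(model, word, k=3):
                # Offer the k most frequent continuations, like a suggestion bar.
                return [w for w, _ in model[word].most_common(k)]

            model = train("the cat sat on the mat and the cat slept".split())
            print(suggest(model, "the"))  # ['cat', 'mat']

        An LLM replaces the lookup table with a learned network conditioned on a much longer context, but the interface is the same: given what came before, predict the next token.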

        • Kogasa@programming.dev · +4 · 1 year ago

          No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

          • huginn@feddit.it · +2/-2 · 1 year ago

            Here’s a white paper explicitly proving:

            1. No emergent properties (illusory due to bad measures)
            2. Predictable linear progress with model size

            https://arxiv.org/abs/2304.15004

            The field changes fast; I understand it is hard to keep up.

            • Kogasa@programming.dev · +3/-1 · 1 year ago

              Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.

                • Kogasa@programming.dev · +2 · 1 year ago

                  Their paper uses terminology that makes sense in context. It’s not a definition of “emergent behavior.”

        • MxM111@kbin.social · +4/-1 · 1 year ago

          I do not think that it is a “linear” progression – an ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.
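
          The standard illustration of that nonlinearity (a sketch with made-up layer sizes): without an activation function, stacked layers collapse into one linear map; the activation is what prevents the collapse.

              import numpy as np

              rng = np.random.default_rng(0)
              W1 = rng.normal(size=(4, 3))  # layer 1 weights (toy sizes)
              W2 = rng.normal(size=(2, 4))  # layer 2 weights
              x = rng.normal(size=3)

              # Two linear layers are equivalent to a single linear layer:
              assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

              # A ReLU in between breaks that equivalence -- this is the sense
              # in which an ANN is nonlinear:
              relu = lambda v: np.maximum(v, 0.0)
              print(W2 @ relu(W1 @ x))  # generally differs from (W2 @ W1) @ x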

            • MxM111@kbin.social · +2 · 1 year ago

              Thank you. This paper, though, does not state that there are no emergent abilities. It only states that one can introduce a metric with respect to which the emergent ability behaves smoothly rather than threshold-like. While interesting, it only suggests that things like intelligence are smooth functions of scale – but so what? Other metrics show exponential or threshold dependence, and whether a metric is the right one depends only on how it will be used. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all the examples in physics that I know, emergence appears gradually.
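
              To make that threshold dependence concrete (my toy numbers, not the paper’s): if per-token accuracy p(N) grows smoothly with parameter count N, an exact-match metric over answers of L tokens scores

                  ExactMatch(N) = p(N)^L

              For L = 5, p = 0.5 gives about 0.03 while p = 0.9 gives about 0.59 – the underlying skill rose less than 2x while the measured score rose nearly 20x, which reads as a sudden threshold even though nothing discontinuous happened.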

        • General_Effort@lemmy.world · +3/-3 · 1 year ago

          It is obvious that you do not know what either “mathematical proof” or “emergence” means. Unfortunately, you are misrepresenting the facts.

          I don’t mean to criticize your religious (or philosophical) convictions. There is a reason people mostly try to keep faith and science separate.

          • huginn@feddit.it · +2/-2 · 1 year ago

            Here’s a white paper explicitly proving:

            1. No emergent properties (illusory due to bad measures)
            2. Predictable linear progress with model size

            https://arxiv.org/abs/2304.15004

            The field changes fast; I understand it is hard to keep up.

              • huginn@feddit.it · +2/-1 · 1 year ago

                1. Emergence is the whole being greater than the sum of its parts. That’s the original meaning of emergent properties, which is laid out in the first paragraph of the article. It’s the scholarly usage as well, and it’s what claims of observed emergence use as the basis of their claim.

                2. The article very explicitly demonstrates that only about 10% of the measures for LLMs displayed any emergence, and that the illusory emergence was the result of overly rigid metrics. Swapping to edit distance as an approximately-close metric makes the sharp spikes disappear, for obvious reasons: no longer having a sharp yes/no allows the linear progression to reappear. It was always there, merely masked by flawed statistics. (See the toy sketch below.)

                If you can’t be bothered to read it, here’s a very easy-to-understand video by one of the authors: https://www.youtube.com/watch?v=ypKwNrmuuPM
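
                Here’s that argument as a toy sketch (my numbers, not the paper’s): assume per-token accuracy climbs smoothly with scale, then score the same models with two different metrics.

                    # Assumed: per-token accuracy p rises smoothly (linearly in log-params).
                    L = 5  # tokens in a "correct answer", for illustration

                    for e in range(6, 13):            # 1e6 .. 1e12 parameters
                        p = min(1.0, 0.13 * e - 0.5)  # smooth, made-up capability curve
                        exact = p ** L                # rigid metric: all 5 tokens right
                        print(f"n=1e{e:02d}: per-token={p:.2f}  exact-match={exact:.3f}")
                    # Per-token credit climbs steadily; exact-match sits near zero and
                    # then shoots up -- apparent "emergence" produced by the metric alone.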

      • Bernie_Sandals@lemmy.world · +2/-1 · 1 year ago

        If you cut out a tiny bit of someone’s brain and then hooked it up to a CPU, would it be a mind? No, of course not, lol. Even if we got biocomputers to work, we still wouldn’t have any synthetic hardware even close to being strong or fast enough to actually create or even simulate a brain.