• UnpluggedFridge@lemmy.world
    7 months ago

    My thesis is that we are asserting that AIs lack human-like qualities we cannot even define or measure. Assertions should be grounded in data, not in the uneasy feeling that arises when an LLM falls into the uncanny valley.

    • mindlesscrollyparrot@discuss.tchncs.de
      7 months ago

      But we do know how they operate. I saw a post a while back where somebody asked an LLM how it had calculated the (incorrect) date of Easter it had given. It answered with the formula for the date of Easter. The only problem is that the answer was a lie: it doesn't calculate. You or I can perform long multiplication if asked to, but an LLM can't (ironically, since the hardware it runs on is far better at multiplication than we are).
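      For reference, the actual computus the LLM claimed to be using is a fixed arithmetic procedure. A sketch of the Anonymous Gregorian ("Meeus/Jones/Butcher") algorithm in Python makes the contrast concrete: an LLM can reproduce text describing these steps without executing any of them.

```python
# Anonymous Gregorian ("Meeus/Jones/Butcher") computus:
# pure integer arithmetic, no lookup of memorized answers.
def easter_date(year: int) -> tuple[int, int]:
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)            # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-like correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1               # e.g. (3, 31) means March 31

print(easter_date(2024))  # → (3, 31): Easter 2024 fell on March 31
```

      Actually running this is what "calculating" means; producing a plausible-looking date (or a plausible-looking description of this formula) is something else entirely.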

      • UnpluggedFridge@lemmy.world
        7 months ago

        We do not know how LLMs operate. As with our own minds, we understand some of the primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion is like claiming we understand consciousness because we know the structure of a neuron.