Probably should’ve just asked Wolfram Alpha

  • Deebster@programming.dev · 28 days ago

    Google’s AI seems dumber than the rest; for example, here’s Kagi answering the same question (using Claude):

    [screenshot: Kagi’s answer to the same question]

    edit: typoed question originally

    Perhaps Google’s tried to make it run too cheaply - Kagi’s one doesn’t run unless you ask for it, and as a paid product it’ll have different priorities.

    • jbrains@sh.itjust.works · 28 days ago

      There are two meanings being conflated here.

      “1/3 more” can mean “+ 1/3” or “* (1 + 1/3)”.

      So “1/3 more than 1/3” could be 2/3 or 4/9, but not 1/2.

      Instead, 1/2 is 1/2 more than 1/3 (in the relative sense), not 1/3 more. That’s the meme I’ve seen go around recently.
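
      To make the two readings concrete, here’s a rough check using Python’s exact fractions (my own sketch, not from the screenshot):

      from fractions import Fraction
      third = Fraction(1, 3)
      print(third + Fraction(1, 3))        # additive reading: 1/3 + 1/3 = 2/3
      print(third * (1 + Fraction(1, 3)))  # relative reading: 1/3 * (1 + 1/3) = 4/9
      print(third * (1 + Fraction(1, 2)))  # only a relative increase of 1/2 lands on 1/2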

      • Deebster@programming.dev · 28 days ago

        Yes, and the Google AI response is correct (and quite clear) in what it says. edit: Thanks Batman. I mean that Google’s understanding of the question is logical (although, as you say, the maths is still wrong - now that I’ve re-read you) and that its answer explained the angle it was answering from.

        However, I think the reasonable assumption for the intention behind the question is that it’s relative to a whole: I had a third of a pizza, and now I have an extra sixth of a pizza. It’s subtle, but that’s the kind of thing AI falls down on.
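
        A quick check of that reading with Python’s exact fractions (my sketch):

        from fractions import Fraction
        print(Fraction(1, 3) + Fraction(1, 6))  # a third plus an extra sixth of the whole = 1/2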

        • jbrains@sh.itjust.works · 28 days ago

          I agree with your assessment regarding the intention of the phrase. We’re back at the silly arithmetic meme that hinges on not grouping terms explicitly and watching people yell at each other in the mistaken belief that there’s one authoritative interpretation of an ambiguous string of symbols.

          Still, the actual mistake remains. Why an extra 1/6 of the pizza? 1/3 of 1/3 is 1/9, not 1/6; 1/6 is 1/2 of 1/3.
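
          The same check in Python’s exact fractions (my sketch):

          from fractions import Fraction
          print(Fraction(1, 3) * Fraction(1, 3))  # 1/3 of 1/3 = 1/9
          print(Fraction(1, 2) * Fraction(1, 3))  # 1/2 of 1/3 = 1/6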

          • Deebster@programming.dev · 28 days ago

            I thought we were finally agreeing fully! My understanding of the question is “what is the difference between a third (of a pizza, say) and a half?”

            1/2 - 1/3 = 1/6
            1/2 = 1/3 + 1/6
            a half is one sixth more than a third.

            btw, I fixed my Kagi screenshot since I’d missed a word from the question (reading comprehension’s clearly not my strong point today)

        • BatmanAoD@programming.dev · 28 days ago

          You are saying “yes” to a comment explaining why the Google AI response cannot possibly be correct, so what do you mean “and [it’s] correct”?

          • Deebster@programming.dev · 28 days ago

            Ah, you’re right - I misunderstood jbrains’s point to just be about the “relative to the original” understanding. Guess I’m no smarter than Google’s AI.

    • bulwark@lemmy.world · 28 days ago

      Kagi has Claude built in? I’ve been using it for a year and didn’t know that.

      • xigoi@lemmy.sdf.org · 28 days ago

        It tries to auto-determine when to trigger, but you can explicitly trigger it by putting a question mark after your query.

      • stetech@lemmy.world · 27 days ago

        This is why Kagi is a great company.

        Nobody is getting LLM functionality shoved in their face unless they want it.

    • Swedneck@discuss.tchncs.de · 21 days ago

      this is why i like the DDG approach: don’t have the LLM try to reason, just have it pull information from sources you’ve checked aren’t completely insane, and summarize an answer from there.