Google to pause Gemini AI image generation after refusing to show White people

Google will pause the image generation feature of its artificial intelligence model, Gemini, after the model refused to show images of White people when prompted.

  • j4k3@lemmy.world · 46 up / 9 down · 10 months ago

    So what. It means they overtrained, deployed, and had to choose between reverting to a model with known issues or training a new one. They probably tried a temporary fix with a LoRA and it failed, so they have to wait for the next big version to finish training, and those runs can take weeks even on massive data-center-class hardware.

    People here don’t seem to have any fundamental understanding of AI. It is all static tensor math. There is no persistence or learning inside the model; any illusion of persistence comes from the loader code that turns your text into tokens, and that is just standard code.
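
    To make that concrete, here is a toy sketch (nothing to do with Gemini’s actual code; the vocabulary and weights are made up): the loader maps text to token IDs, the “model” is just fixed matrices being multiplied, and running it twice changes nothing inside it.

    ```python
    import numpy as np

    # Toy "loader": standard code that turns text into token IDs.
    VOCAB = {"draw": 0, "a": 1, "chessboard": 2, "<unk>": 3}

    def tokenize(text):
        return [VOCAB.get(w, VOCAB["<unk>"]) for w in text.lower().split()]

    # Toy "model": static tensors fixed at load time. Nothing is updated at
    # inference time, so there is no learning or persistence between calls.
    rng = np.random.default_rng(0)
    embedding = rng.normal(size=(len(VOCAB), 8))    # token embedding table
    output_proj = rng.normal(size=(8, len(VOCAB)))  # projection back to vocab

    def forward(token_ids):
        hidden = embedding[token_ids].mean(axis=0)  # pure tensor math
        return hidden @ output_proj                 # logits over the vocab

    ids = tokenize("draw a chessboard")
    print(np.allclose(forward(ids), forward(ids)))  # True: same input, same output
    ```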

    There is no fundamental difference between an offline model and a proprietary one like Gemini; one’s loader code does data mining while the other’s does not. Training also has a sweet spot: add too much John Oliver and everything will generate as John Oliver, like absolutely everything.
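
    The John Oliver point is just over-representation in the training data. A silly plain-Python illustration with made-up numbers: a naive generator that samples by training frequency will output the dominant subject almost every time.

    ```python
    import random
    from collections import Counter

    random.seed(42)

    # Made-up training set where one subject massively dominates.
    captions = ["john oliver"] * 950 + ["bunny rabbit"] * 30 + ["chessboard"] * 20

    # Trivially naive "generator": sample subjects by their training frequency.
    freq = Counter(captions)
    subjects, weights = zip(*freq.items())

    print(random.choices(subjects, weights=weights, k=10))
    # -> almost certainly ten 'john oliver's
    ```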

    • Virulent@reddthat.com · 47 up / 2 down · 10 months ago

      No, the problem is that they filter prompts and inject new parameters into prompts specifically to avoid creating white subjects. It’s so bad that, when asked to generate a chessboard, Gemini would only make one with black pieces.
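
      Nobody outside Google knows what that pipeline actually looks like, but the kind of rewriting being described is trivial to do server-side. A purely hypothetical sketch (the rules and wording here are invented, not Gemini’s):

      ```python
      # Hypothetical server-side prompt rewriting; every rule here is invented.
      DIVERSITY_SUFFIX = ", depicting a diverse range of ethnicities and genders"
      PEOPLE_KEYWORDS = {"person", "people", "man", "woman", "king", "soldier"}

      def rewrite_prompt(user_prompt):
          words = set(user_prompt.lower().split())
          if words & PEOPLE_KEYWORDS:
              # Parameters injected that the user never typed.
              return user_prompt + DIVERSITY_SUFFIX
          return user_prompt

      print(rewrite_prompt("a medieval king on a throne"))  # suffix gets appended
      print(rewrite_prompt("a chessboard"))                 # passes through unchanged
      ```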

      • j4k3@lemmy.world · 13 up / 8 down · 10 months ago

        That would not have caused them to take it offline. Modifying a hash table takes zero downtime, and likewise adding a LoRA layer takes no downtime. The only reason to go completely offline is that they need to filter the base dataset and retrain from scratch. It means the error is so intertwined across so many neural layers that a simple extra filter layer cannot address it.
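
        To be clear about the zero-downtime part: a lookup-table filter is just data that a running service can swap out, with no retraining involved. A trivial sketch with invented rules:

        ```python
        # A lookup-table ("hash table") filter on prompts; the rules are invented.
        blocked_terms = {"some_banned_term"}

        def passes_filter(prompt):
            return not any(term in prompt.lower() for term in blocked_terms)

        print(passes_filter("draw some_banned_term"))     # False

        # "Zero downtime" update: mutate the table while the service is running.
        blocked_terms.add("another_banned_term")
        print(passes_filter("draw another_banned_term"))  # False now, no retraining
        ```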

        The neural network is like a giant multidimensional cloud, something like a point cloud in 3D but with far more than three dimensions, and everything in that cloud is vector relationships. If there is some easily traversed path that the neural connections gravitate toward, a simple modification, like a slice across that cloud, can alter that path ever so slightly to make it less easily traversed. That is roughly what a LoRA is: a small correction tacked onto the model’s math.
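
        As a bare-bones sketch of what that tacked-on correction looks like (shapes, rank, and scale factor made up, no real framework): a LoRA is a low-rank product added on top of a frozen weight matrix.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        d_model = 64

        # Frozen base weight matrix from the original model; never modified.
        W = rng.normal(size=(d_model, d_model))

        # LoRA: two small matrices whose product is a low-rank "slice" across the
        # weight space. Only these are trained; they can be added or removed
        # without touching W.
        rank, alpha = 4, 8.0
        A = rng.normal(size=(rank, d_model)) * 0.01
        B = np.zeros((d_model, rank))  # common init: the correction starts at zero

        def forward(x):
            delta = (alpha / rank) * (B @ A)  # low-rank correction
            return x @ (W + delta).T

        x = rng.normal(size=(d_model,))
        print(forward(x).shape)  # (64,)
        ```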

        However, if the undesirable behavior is more like all roads leading to the center of a giant metropolis, no slice across that cloud can subtly alter all of the neural paths without impacting adjacent data. It is all approximated floating-point math in which every concept and generation parameter is interrelated. Things like bunny rabbit and Playboy playmate are stored in the same tables; if you try to make all bunny rabbits black, you are also altering all playmates, simply because there is a minor relationship between the two concepts and they therefore share a vector space inside some tensor tables.

        There is a very big difference between how the initial table values are created across all layers and how a modified layer works. When things go really bad, the only option is to retrain the whole thing from scratch.
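
        The shared-vector-space point can be shown with a couple of made-up embeddings: if two concepts overlap along some direction, any global nudge along that direction moves both of them.

        ```python
        import numpy as np

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Made-up 4-d embeddings that share weight on the second axis.
        bunny    = np.array([0.9, 0.5, 0.1, 0.0])
        playmate = np.array([0.1, 0.5, 0.8, 0.2])

        # A crude global "edit" along the shared axis, standing in for something
        # like "make all bunny rabbits black".
        shift = np.array([0.0, -0.4, 0.0, 0.0])

        print(cosine(bunny, playmate))                  # the concepts already overlap
        print(cosine(bunny + shift, playmate + shift))  # the edit changed both of them
        ```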

    • slacktoid@lemmy.ml · 20 up / 2 down · 10 months ago

      There’s no such thing as too much John Oliver. This guy doesn’t know what they’re talking about.