Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • rambaroo@lemmy.world
    8 months ago

It isn’t reasoning about anything. A human did the reasoning at some point, and that original information ended up in the LLM’s training data. The LLM is simply matching your prompt against patterns in that data. It’s not doing anything else. It’s not thinking about the question you asked it. It’s a glorified keyword search.

    It’s obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you’re an expert.