Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • fidodo@lemmy.world · 8 months ago

    I really don’t think that LLMs can be considered intelligent any more than a book can be considered intelligent. LLMs are basically search engines at the word level of granularity; they have no world model or world simulation, they just use a shit ton of relations to pick highly relevant words based on the probabilities in the text they were trained on. That doesn’t mean that LLMs can’t produce intelligent results. A book contains intelligent language because it was written by a human who transcribed their intelligence into an encoded artifact. LLMs produce intelligent results because they were trained on a ton of text that has intelligence encoded into it, because it was written by intelligent humans. If you break down a book into its sentences, those sentences will have intelligent content, and if you start to measure the relationships between the order of words in that book you can produce new sentences that still have intelligent content. That doesn’t make the book intelligent.
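
    As a minimal sketch of what that kind of word-level, probability-driven prediction looks like, here is a toy bigram model (illustrative only; real LLMs learn these relations with neural networks over enormous corpora rather than raw counts, and the tiny “training text” below is made up):

    ```python
    # Toy next-word prediction from word-level co-occurrence statistics.
    # A deliberately simplified bigram model, not how real LLMs work,
    # but it shows the idea of picking the next word from probabilities
    # derived from the training text.
    import random
    from collections import Counter, defaultdict

    training_text = (
        "the glass falls onto the concrete and the glass shatters . "
        "the ball falls onto the grass and the ball bounces ."
    )

    # Count which word follows which in the training text.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def next_word(word):
        """Sample the next word in proportion to how often it followed `word`."""
        choices, counts = zip(*follows[word].items())
        return random.choices(choices, weights=counts, k=1)[0]

    # Generate a short continuation starting from "the".
    word, output = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))
    ```

    The point of the toy version is that nothing in it “knows” what glass or concrete is; it only knows which words tended to follow which.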

    • Jojo@lemm.ee · 8 months ago

      But you don’t really “know” anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited in just the right way when I ask it a question, and it wasn’t set up that way by any “intelligence”; the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you’ve received over your life).

      Thinking about how a thing works is, imo, the wrong way to decide whether something is “intelligent” or “knows stuff”. The mechanism is neat to learn about, but it’s not what ultimately decides whether you know something. It’s much more useful to look at whether it can produce answers, especially to novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.

      And again, I’m not trying to argue that an LLM is intelligent, just that whether it is or not won’t be decided by talking about the mechanism of its “thinking”.

      • intensely_human@lemm.ee · 8 months ago

        We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.

        I agree, and I formalize it like this:

        Those who claim LLMs and AGI are distinct categories should present a text-processing task, i.e. text input and text output, that an AGI can do but an LLM cannot.

        So far I have not seen any reason not to consider these LLMs to be generally intelligent.

        • GiveMemes@jlai.lu · 8 months ago

          Literally anything based on opinion or creating new info. An AI cannot produce a new argument. A human can.

          It took me 2 seconds to think of something LLMs can’t do that AGI could.

    • intensely_human@lemm.ee · 8 months ago

      What do you mean it has no world model? Of course it has a world model, composed of the relationships between words in language that describes that world.

      If I ask it what happens when I drop a glass onto concrete, it tells me. That’s evidence of a world model.

      • fidodo@lemmy.world · 8 months ago (edited)

        By a world model, I mean a simulation of the world that it runs to do reasoning. An LLM doesn’t simulate anything; it just takes a list of words and then produces the next word in that list. When you’re trying to solve a problem, do you just think, “well, I saw these words, so this word comes next”? No, you imagine the problem and simulate it in both physical and abstract terms to come up with an answer.
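
        For contrast, here’s a minimal sketch of what “simulating it in physical terms” could look like for the dropped-glass example, using basic kinematics (the drop height and breaking threshold are assumed numbers, purely for illustration):

        ```python
        # Simulate the dropped glass with basic kinematics instead of
        # predicting words. The numbers are illustrative assumptions.
        import math

        g = 9.81           # gravitational acceleration, m/s^2
        drop_height = 1.0  # metres (assumed: roughly table height)

        fall_time = math.sqrt(2 * drop_height / g)  # t = sqrt(2h / g)
        impact_speed = g * fall_time                # v = g * t

        ASSUMED_BREAKING_SPEED = 2.0  # m/s, hypothetical threshold for glass on concrete

        print(f"Fall time: {fall_time:.2f} s, impact speed: {impact_speed:.2f} m/s")
        print("Glass shatters" if impact_speed > ASSUMED_BREAKING_SPEED else "Glass survives")
        ```

        A word predictor never runs anything like this; it only reports what text about dropped glasses tends to say.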

      • EpeeGnome@lemm.ee · 8 months ago (edited)

        I can see the argument that it has a sort of world model, but one built purely from word relationships is a very shallow sort of model. When I am asked what happens when a glass is dropped onto concrete, I don’t just think about what I’ve heard about those words and come up with a correlation; I can also think about my experiences with those materials and with falling things, and reach a conclusion about how they will interact. That’s the kind of world model it’s missing.

        Material properties and interactions are written about well enough that it ~~simulates~~ emulates doing this, but adding a few details can really throw it off. I asked Bing Copilot “What happens if you drop a glass of water on concrete?” and it went into excruciating detail about how the water would splash, mentioned how the water could absorb into the concrete or affect uncured concrete, and completely failed to notice that the glass itself would strike the concrete, instead describing the chemistry of how using “glass (such as from the glass of water)” as aggregate could affect the curing process. Having a purely statistical/linguistic world model leaves some pretty big holes in its “reasoning” process.