  • Artyom@lemm.ee · 7 hours ago

    This is a solvable problem, but it requires humans to write the algorithms. For example, LLMs can’t reliably add, but there are ways to hook in external software that can do addition, and the AI learns when to call it (see the sketch below). Similarly, we can train an AI to solve logic puzzles if you give it an algorithm, but it can’t solve a logic puzzle that no algorithm can solve.
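
    Roughly, the hookup looks like this. A minimal, runnable Python sketch, where fake_model is a hypothetical stand-in for whatever LLM API you’re actually calling and the CALL syntax is an invented tool-call format, not any real library’s:

        import re

        # External tool: exact addition, which the model itself can't do reliably.
        def add(a: float, b: float) -> float:
            return a + b

        TOOLS = {"add": add}

        # Hypothetical stand-in for a real LLM API call; a model trained for
        # tool use emits a structured call instead of guessing the sum.
        def fake_model(prompt: str) -> str:
            return "CALL add(123456789, 987654321)"

        def answer(prompt: str) -> str:
            output = fake_model(prompt)
            match = re.match(r"CALL (\w+)\(([^)]*)\)", output)
            if match:
                name, raw_args = match.groups()
                args = [float(x) for x in raw_args.split(",")]
                return str(TOOLS[name](*args))  # run the tool, return its result
            return output  # no tool call: pass the model's text through

        print(answer("What is 123456789 + 987654321?"))  # 1111111110.0

    The point is the division of labor: a human wrote add(), and the model’s only job is to recognize when to delegate to it.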

  • Lvxferre [he/him]@mander.xyz · 12 hours ago

    It’s relevant to note that this sort of inferential logic is essential to language; we [humans] use it all the time, and we expect each other to use it. It’s a necessary part of a language model; without it you have a grammar bot instead.

  • hendrik@palaver.p3x.de · 13 hours ago (edited)

    Meh. They can’t do everything in one shot, but we don’t ask them to. We have thinking/reasoning models these days, and those theoretical limitations don’t apply there. So it’s quite the opposite of what the headline says: we’re beginning to overcome fundamental limitations.
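
    To make the one-shot vs. multi-step distinction concrete, a rough sketch of the loop (query is a hypothetical stand-in for one forward pass of any real model, not an actual API):

        # Hypothetical stand-in for one forward pass of a model.
        def query(prompt: str) -> str:
            # A real reasoning model would generate a thought or an answer here;
            # this stub just marks a step so the control flow is runnable.
            return "STEP: worked on -> " + prompt.splitlines()[-1][:40]

        def one_shot(question: str) -> str:
            # The setting the theoretical limits apply to: one pass, one answer.
            return query(question)

        def with_reasoning(question: str, max_steps: int = 5) -> str:
            # Reasoning models iterate: each pass reads and extends a scratchpad,
            # so total computation isn't bounded by a single forward pass.
            scratchpad = question
            for _ in range(max_steps):
                step = query(scratchpad)
                scratchpad += "\n" + step
                if step.startswith("ANSWER:"):
                    break
            return scratchpad

        print(one_shot("Is 1111111110 divisible by 3?"))
        print(with_reasoning("Is 1111111110 divisible by 3?"))

    The scratchpad loop is the whole trick: results about what a single forward pass can compute simply don’t constrain a model that gets to read its own intermediate steps.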