- cross-posted to:
- kintelligenz@feddit.org
This is a solvable problem, but it requires humans to write the algorithms. For example, AIs can't reliably add on their own, but there are ways to hook in external software that can do addition, and the AI will know to use it. Similarly, we can train an AI to solve logic puzzles if you give it an algorithm, but it can't solve a logic puzzle that the algorithm itself can't solve. A rough sketch of what "hooking in external software" means is below.
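For a concrete sense of the idea, here's a minimal sketch of a tool-call harness: the model emits a structured call instead of guessing at the arithmetic, and a small piece of ordinary code executes it. The JSON format, the tool names, and the `run_tool_call` helper are all hypothetical, but real function-calling setups work roughly this way.

```python
import json
import operator

# Hypothetical tool registry: exact arithmetic lives in real code,
# not in the model's weights.
TOOLS = {
    "add": operator.add,
    "multiply": operator.mul,
}

def run_tool_call(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run the real function."""
    call = json.loads(model_output)   # e.g. {"tool": "add", "args": [1234, 5678]}
    fn = TOOLS[call["tool"]]
    result = fn(*call["args"])
    return str(result)                # result gets fed back into the model's context

# The model never computes 1234 + 5678 itself; the harness does.
print(run_tool_call('{"tool": "add", "args": [1234, 5678]}'))  # -> 6912
```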
It's relevant to note that this sort of inferential logic is essential to language; we [humans] use it all the time, and we expect each other to use it. It is a necessary part of a language model; otherwise you have a grammar bot instead.
Meh. They can't do everything in one shot, but we don't ask them to. We have thinking/reasoning models these days, and those theoretical limitations don't apply there. So it's quite the opposite of the headline: we're beginning to overcome fundamental limitations.
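A toy illustration of why iterating sidesteps one-shot limits: each call does only a small, bounded amount of work, but the output is fed back in as the next input, so the total computation grows with the number of steps rather than being capped at a single pass. The `toy_model` below obviously isn't an LLM; it just stands in for one bounded forward pass.

```python
def toy_model(state: int) -> int:
    """One 'forward pass': a single bounded step (here, one halving)."""
    return state // 2

def reason(n: int) -> int:
    """Chain many bounded steps: count how many halvings it takes to reach zero."""
    steps = 0
    while n > 0:
        n = toy_model(n)   # output of one pass becomes input of the next
        steps += 1
    return steps

print(reason(1_000_000))  # -> 20 steps, a depth no single bounded pass provides
```

This is the same shape as a reasoning model's scaffold: the chain of intermediate outputs is what lets a fixed-depth network carry out a computation deeper than any one forward pass.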