AIs know nothing; they are just dumb correlation engines.
The problem with AI is that it does not understand anything. You can have a completely reasonable-sounding conversation that is full of stupidity, and the AI does not notice because it does not know anything.
Another issue is that AI works until it does not, and that failure can be severe and unexpected. Again, because the AI knows nothing.
It seems like we need some test to address this; the two are basically the same problem. Or maybe it is some training, so that the AI can know what it does not know.
Define “understand” as you’re using it here? What exactly does the AI not do, that humans do, that comprises “understanding”?
Understanding the general sanity of their own responses. Synthesizing new ideas. Having a larger context. AIs tend to be idiot savants on one hand and really mediocre on the other.
You could argue that this is just a reflection of lack of training and scale but I wonder.
You will change my mind once I have a machine interaction where the machine does not seem like an idiot.
Have you ever interacted with a human that seemed like an idiot? Do you think that person is incapable of understanding?
Most humans are not very intelligent either, and many lack the ability to understand many things. We are not really thinking machines; we are emotional creatures that sometimes think. So I would not measure AI against the average human. That is a pretty low bar.
Here’s a thought exercise: how do you “know”? How do you know your pet? LLMs like GPT can “know” about a dog in terms of words, because that’s what they “sense”; that’s how they interact with their “environment”. They understand words and how those words relate to other words. Basically, words are their entire environment.
Now, can you describe how you know your dog without your senses, or anything derived from your senses? Remember, chemical receptors are “senses” too.
I remember reading about this a while back, but I don’t have the link on me: did you know that people who were born blind but have their vision repaired years later don’t immediately know what “pointy” looks like? They never formed the correlation between the feeling of pointy and the visual of pointy, the way they could between the feeling and the word.
My point is: we’re correlation machines too.
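To make the “words relating to other words” idea concrete, here is a toy sketch in Python. It is emphatically not how real LLMs work (they learn dense embeddings, not raw co-occurrence counts), and the tiny corpus is invented for illustration, but it shows how similarity can emerge from correlation alone: “dog” and “cat” end up close simply because they appear in the same word contexts, while “car” does not.

```python
from collections import Counter
from math import sqrt

# A made-up six-sentence "environment" of nothing but words.
corpus = [
    "the dog chased the ball",
    "the cat chased the ball",
    "the dog ate some food",
    "the cat ate some food",
    "the car needs more fuel",
    "the truck needs more fuel",
]

def context_vector(word):
    """Count every other word that co-occurs with `word` in a sentence."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

dog, cat, car = map(context_vector, ["dog", "cat", "car"])
print(cosine(dog, cat))  # high: identical contexts in this corpus
print(cosine(dog, car))  # low: they share almost no context words
```

The point of the toy: nothing here “knows” what a dog is, yet pure correlation over word contexts still groups dog with cat rather than car, which is the kind of structure the comments above are arguing about.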