

Kurzgesagt recently did an awesome video on this sort of thing!
What I tried to say is that if the LLM doesn’t actually understand anything it says, it’s not actually intelligent, is it? Inputs get astonishingly good outputs, but it’s not real AI.
I’m of the opinion that AI will eventually “emerge” from ongoing efforts to produce it, so asking whether there’s anything that can “change” about what people currently call AI is somewhat moot.
I think LLMs are a dead end for producing real AI, but no one REALLY knows, because it just hasn’t happened yet.
Not recommending anyone buy crypto, but I’ve been following Qubic and find it really interesting. It’s anyone’s guess which organization will create real AI, though.
I want to say that probably counts as intelligence, since you can converse with LLMs and have really insightful discussions with them, but I personally just can’t agree that they are “intelligent,” given that they don’t understand anything they say.
I’m not sure if you’ve read about the Chinese Room, but Wikipedia has a good article on it.
I think it’s important to consider that language can evolve and mean different things over time. “Artificial intelligence” is really just a renaming of machine learning algorithms. They definitely do seem “intelligent,” but having intelligence and seeming to have it are two different things.
Currently, real AI, or what is being called “Artificial General Intelligence,” doesn’t exist yet.
How are you defining intelligence, anyway?
Missed this last week! Glad to have it back.
I didn’t even notice the cat ears at first glance.
No comments but just wanted to say thanks for posting. Appreciated this in my all feed.
This gets really deep into how we’re all made of non-living things and atoms and yet here we are, and why no other planet has life like ours, etc. Also super philosophical!
But truly, LLMs don’t understand the things they say, and Apple apparently just put out a paper saying they don’t reason either (if you consider that different from understanding). They claim it’s all fancy pattern recognition. (Link below if interested.)
https://machinelearning.apple.com/research/illusion-of-thinking
Another difference between a human and an LLM is likely the ability to understand the semantics behind the syntax, rather than just the text alone.
I feel like there’s more that I want to add but I can’t quite think of how to say it so I’ll stop here.