Why ChatGPT Isn’t Truly “Intelligent”? Because It Doesn’t Care About Our Lived World
A new paper argues that artificial intelligence systems like ChatGPT fundamentally differ from human intelligence due to a lack of embodiment and understanding, highlighting that AI does not engage with and connect to our world in the same way humans do.

Anthony Chemero, a professor of philosophy and psychology at the University of Cincinnati, recently published a paper contrasting how artificial intelligence and human cognition think.
The rise of artificial intelligence has sparked varied reactions from tech executives, government officials, and the general public. While many are enthusiastic about AI technologies like ChatGPT, viewing them as beneficial tools capable of transforming society, others express concerns about any technology labeled as “intelligent” potentially surpassing human control.
Chemero argues that linguistic confusion clouds people’s understanding of artificial intelligence. AI is intelligent in a sense, but not in the way humans are intelligent, even if “it can lie and talk nonsense like its developers,” he notes.
The paper emphasizes that artificial intelligence, such as ChatGPT, is a large language model (LLM) trained on vast datasets mined from the internet, much of which carries the biases of the data contributors.
“LLMs generate impressive text, but much of it is fabrication,” he says. “They learn to generate grammatically correct sentences, but require far more training than humans. They don’t actually know the meaning of what they say,” he adds. “LLMs differ from human cognition because they are not embodied.”
The errors LLMs produce, which their creators describe as “hallucinations,” are better termed “talking nonsense,” according to Chemero. LLMs construct sentences by repeatedly appending the statistically most likely next word, without knowing or caring whether what they say is true.
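The mechanism Chemero describes can be illustrated with a deliberately simplified sketch: a toy bigram model that counts which word follows which in a corpus, then generates text by greedily picking the most frequent continuation. Real LLMs use learned neural networks over tokens rather than raw word counts, but the core loop (append the statistically likeliest next word, with no notion of truth) is the same idea. The corpus and function names here are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, max_words: int = 10) -> str:
    """Greedily append the statistically most likely next word.

    The model has no idea what the words mean; it only knows
    which continuation was most frequent in its training data.
    """
    out = [start]
    for _ in range(max_words - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # no continuation ever observed; stop
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Tiny illustrative corpus; any factual errors in it would be
# faithfully reproduced by the model, which "doesn't care".
corpus = "the cat sat on the mat . the cat ran ."
model = train_bigrams(corpus)
print(generate(model, "the", max_words=5))  # → the cat sat on the
```

The output is grammatical only because the training text was; the model has no access to the world the sentences describe, which is precisely the gap Chemero's embodiment argument points at.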
As a result, with slight encouragement, AI tools can produce “vulgar statements with racist, sexist, and other biased content.”
Factors of Human Intelligence
Chemero’s paper aims to emphasize that LLMs lack intelligence in the human sense because humans are embodied. Living individuals are constantly surrounded by other humans, as well as material and cultural environments.
“This makes us care about our survival and the world we live in,” he points out. LLMs, on the other hand, do not truly inhabit this world and do not care about anything.
The main revelation is that LLMs are not as intelligent as humans because they “simply don’t care,” says Chemero. “Things matter to us. We are committed to survival. We care about the world we live in.”