

ChatGPT Can't Think--Consciousness Is Something Entirely Different to Today's AI

• https://singularityhub.com, Philip Goff

Systems like ChatGPT can produce text that seems to display thought, understanding, and even creativity.

But can these systems really think and understand? This is not a question that can be answered through technological advance, but careful philosophical analysis and argument tell us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution.

In 1950, the father of modern computing, Alan Turing, published a paper that laid out a way of determining whether a computer thinks. This is now called "the Turing test." Turing imagined a human being engaged in conversation with two interlocutors hidden from view: one another human being, the other a computer. The game is to work out which is which.

Turing proposed that if, after five minutes of conversation, an average judge has no more than a 70 percent chance of correctly telling the computer from the human, the computer passes the test. Would passing the Turing test—something that now seems imminent—show that an AI has achieved thought and understanding?

Chess Challenge

Turing dismissed the question of whether machines can genuinely think as hopelessly vague, and replaced it with a pragmatic definition of "thought": to think just means being able to pass the test.

Turing was wrong, however, when he said the only clear notion of "understanding" is the purely behavioral one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of "understanding" that's tied to consciousness. To understand in this sense is to consciously grasp some truth about reality.
