One of my earliest blog posts was about John Searle’s Chinese Room thought experiment and what it means to ‘understand’ something. The old blog is now defunct but the argument still stands.
Searle noted that any computer programme could, in principle if not in practice, be written out in a book. I imagine something like a choose-your-own-adventure book: “If you see this symbol, turn to page 19987”, and so on. This applies to any programme, including the programme controlling an Artificial Intelligence (AI) system that produces human-like responses to queries. In 1950, the mathematician and code-breaker Alan Turing proposed a test that has come to be known as the ‘Turing Test’. If a person conversing with a computer through text could not distinguish it from another person, then the computer would pass the test.
Turing proposed his test because it is otherwise hard to define what ‘thinking’ or ‘consciousness’ are. The Turing Test provides a standard. Searle argued, however, that meeting this standard does not equate to a computer gaining consciousness. He imagined a programme that responded to questions in Chinese, written out as just such a book. He pictured himself in a closed room, taking in questions, using the book to process the symbols and then generating a response. Although a Chinese speaker conversing with him in this way might think he understands Chinese, he does not.
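To make the mechanics concrete, here is a toy sketch of my own, not Searle’s: a lookup table in Python that maps question symbols to response symbols. The rules and phrases are invented for illustration, but the point carries over, since whoever follows the table needs no grasp of what any symbol means.

```python
# A toy 'Chinese Room' in code: the operator matches symbols against
# rules and copies out responses. Nothing here models meaning; the
# rule book could equally be printed on paper and followed by hand.
# The rules below are invented for illustration only.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am well, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "The weather is fine today."
}

def room_operator(question: str) -> str:
    """Follow the rule book mechanically; no understanding required."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(room_operator("你好吗？"))  # prints a fluent-looking reply
```

A real AI system replaces the handful of rules with billions of learned parameters, but on Searle’s argument the difference is one of scale, not of kind.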
Here is where the AI system ChatGPT takes over the story:
Me: Explain how John Searle's Chinese Room thought experiment applies to AI systems such as ChatGPT.
ChatGPT: John Searle's Chinese Room thought experiment is a thought experiment that aims to show that a computer program, no matter how sophisticated, cannot truly understand or have a conscious experience. In the thought experiment, a person is placed in a room and given a set of rules for manipulating Chinese symbols. The person is not fluent in Chinese, but can follow the rules to generate appropriate responses to Chinese input. From the outside, it appears as though the person is able to understand and generate Chinese, but in reality, they have no understanding of the meaning of the symbols they are manipulating.
The Chinese Room thought experiment can be applied to AI systems like ChatGPT in the following way: ChatGPT is a large language model that has been trained on a vast amount of text data. It can generate responses to input text that are appropriate and sometimes even convincingly human-like. However, it does not have a conscious experience or true understanding of the meaning of the words it is using. It is simply following a set of rules and algorithms, much like the person in the Chinese Room thought experiment.
In other words, ChatGPT may be able to produce responses that are indistinguishable from those of a human, but it does not have the same level of understanding or consciousness as a human. It is simply following a set of pre-determined rules and is not capable of true understanding or consciousness.
Arguably, ChatGPT passes the Turing Test. So, is it conscious? I don’t think so, and I believe this has practical implications for education. It poses a serious question, the answer to which could see us heading in one of two very different directions.