
You find yourself alone in a windowless room. An incandescent light hangs from the ceiling, illuminating a collection of cardboard boxes, a table, and a Mandarin dictionary. Inside the boxes are thousands of Chinese characters printed on cutouts, arranged in random order. The room has a single opening, a small vent in the center of one wall, through which a note arrives, written in Mandarin. On the other side is a native speaker who does not know who they are communicating with. It's your job to talk with them.
There's no Rosetta Stone and you're certainly no St. Jerome, but that's alright; there are ways around this. Let's imagine that the native speaker is the first to pass over a note. Your first step is to cross-reference it with the dictionary, which is far from ordinary. Instead of offering an English translation, this dictionary provides the appropriate response to any message. You may not understand what these responses mean, but they are correct, and your walled counterpart will surely make sense of them.
The two of you carry on this way for days, the native speaker building a relationship and understanding your words, while you are left confused, repeating a mechanical procedure in search of common ground. You do not understand the Chinese language, but it's not your job to understand, only to respond with the correct characters in the proper order. The goal is not to understand, but to create the appearance of understanding, and to convince a native speaker that you speak Chinese.
Imagine this as a computer program, where you are the artificial intelligence and the native speaker is a user, and you have the Chinese Room argument, proposed by the philosopher John Searle. Found here is the idea that computers cannot be inherently intelligent, and that they instead manipulate symbols without understanding them, creating a sense of false intelligence. This argument offers a logical counterpart to the Turing Test, which aims to define intelligence by assuming that if a device can converse convincingly, it is in fact intelligent.
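The program version of the room can be sketched in a few lines. This is a hypothetical illustration, not any real system: the phrases and the rulebook below are invented stand-ins, and the point is that the code produces fluent-looking replies while storing no meaning at all, exactly like the man with the dictionary.

```python
# A minimal sketch of the room as a program. The "rulebook" plays the role
# of the dictionary: it maps an incoming note directly to a canned response.
# The entries here are hypothetical examples, not a real conversation system.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm well, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def respond(note: str) -> str:
    """Return the rulebook's response to a note.

    Understanding never enters the process: the function matches symbols
    and emits symbols. If no rule applies, it stalls with a stock phrase
    ("Please say that again."), much as the man in the room might.
    """
    return RULEBOOK.get(note, "请再说一遍。")

print(respond("你好吗？"))  # the user sees a fluent reply either way
```

From the outside, a user exchanging notes with `respond` has no way to tell whether anything behind the vent understands Chinese; that opacity is the whole point of the thought experiment.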
The paradox arises when one considers the divide between intelligence and automation. A computer system is designed to mimic intelligence, and as such, its outward behavior can never tell us for certain whether it is intelligent. Is artificial intelligence a conscious, thinking being, or is it just circuitry, recycling predetermined responses like the man in the windowless room?
Although we may never know for certain, we do know that skepticism is warranted. To believe that artificial intelligence is, in fact, intelligent is a double-edged sword. Just as we will never truly understand the consciousness of an ant, we may never be able to differentiate between conditioned responses and true thought in machines. How we use this intelligence is subject to our own wants and needs, but if we consider the possibility of conscious thought, is it moral to use such devices for our own benefit without fully understanding them? Dante makes the lobster his lunch out of necessity. He doesn't consider its intellect. Is it possible that we will do the same?