One of Radiolab’s older episodes, “Talking to Machines,” discusses programs that try to simulate conversation. The classic Turing test says that if you are conversing with both a robot and a human, and you cannot determine which is which (or you consistently get the classification wrong), then the robot/computer can be said to be intelligent.
Now, I would love to argue for a different definition of intelligence, but that isn’t really what this post is about. One conclusion the creator of Cleverbot (a fascinating program that saves whatever you write to it and is coded to spit back a correlated response from its pool of stored phrases) draws from thinking intensely about AI and conversation is just how complex it is to sit in front of one another and have a conversation. The reasoning goes: because it is so difficult to code a program to converse ‘like a human’, our own “coding” must be complex too. It falls in line with familiar metaphors of the brain as a computer, which lends it ostensible plausibility.
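To make the retrieval idea concrete, here is a minimal sketch in Python of a bot along the lines described above: it saves every input and replies with the stored phrase that overlaps most with the new one. The class name, the word-overlap (Jaccard) measure, and the fallback reply are all my own illustrative assumptions, not Cleverbot’s actual algorithm.

```python
# A toy retrieval chatbot: store what users say, and reply with the
# stored phrase most similar to the new input. This is a hypothetical
# sketch of the general mechanism, not how Cleverbot really works.

def tokens(text: str) -> set[str]:
    """Lowercased word set, used for a crude similarity measure."""
    return set(text.lower().split())

class RetrievalBot:
    def __init__(self) -> None:
        self.phrases: list[str] = []  # pool of everything users have said

    def _similarity(self, a: str, b: str) -> float:
        # Jaccard similarity: shared words / total distinct words.
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def reply(self, user_input: str) -> str:
        # Pick the stored phrase with the greatest word overlap.
        best = max(
            self.phrases,
            key=lambda p: self._similarity(p, user_input),
            default="Hello!",  # nothing stored yet
        )
        self.phrases.append(user_input)  # save the input for future replies
        return best

bot = RetrievalBot()
print(bot.reply("Hello there"))       # -> "Hello!" (pool is still empty)
print(bot.reply("How are you?"))      # -> "Hello there" (only stored phrase)
print(bot.reply("Are you a robot?"))  # -> "How are you?" (shares a word)
```

Even this crude version produces eerily conversational moments, which is exactly the point at issue: mimicry by retrieval is not obviously the same thing as conversing.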
However, haven’t we reversed the reasoning here? Aren’t we assuming from the start that brains/humans are computational machines, so that it should be possible to code a program for anything humans do? It started out the other way around: we were trying to make the computer behave like a human, not the human like a computer! Could it be that coding a program to converse ‘like a human’ is so difficult precisely because humans aren’t like computers?
If humans aren’t computational machines, then trying to code a program for something that isn’t written as “software on our brain hardware” is going to be very difficult indeed. But inferring from the difficulty of the coding to the complexity of our own “code” is a potentially fallacious way of thinking about it. You could even make the case that conversations aren’t that complex: after all, they are utterly ubiquitous in our everyday lives! We might argue that there are successful and failed conversations, but I’d say they are conversations nonetheless, and very human.