Is the Turing test an acceptable test to prove the existence of consciousness in a machine?

There is no doubt that the question of intelligence and consciousness in AI has been a subject of debate for a long time – from Descartes, who claimed that no machine could behave like a human, to his complete opposite, Marvin Minsky, who claimed that one’s common sense could potentially be encoded into a machine given enough computing power, to many other philosophers such as Searle and Dennett. In this essay, our objective is to analyse whether the Turing test is a suitable test for intelligence, to determine the relation between intelligence and consciousness, and then to address the question of whether the Turing test can prove the existence of consciousness in machines.

Before we can start our analysis, we must first understand what the Turing test is. The Turing test was described in Alan Turing’s paper “Computing Machinery and Intelligence”. Turing, often referred to as the father of modern computing, was highly influential in the development of computer science, formalising the concepts of algorithm and computation with the Turing machine. In the 1950s he turned his work towards more abstract questions; in particular, in the paper cited above, he addresses the problem of artificial intelligence and proposes a way to evaluate whether a machine is intelligent.

In his paper, Turing rephrases the question “Can machines think?” as whether “machines can do what we, as thinking entities, can do” [1]. Turing needed to reformulate the question to overcome the vagueness of the word “thinking”. Additionally, Turing restricts “machines” to digital machinery – “machines which manipulate the binary digits 1 or 0” [1]. This makes clear that he is addressing machines that already exist, not synthetic biological material. He further claims that “it is expected that they [computers] can simulate the behaviour of any other digital machine, given enough memory and time” [1].

The Turing test itself is set up as a question-and-answer game. A person, the judge, puts questions to two hidden entities. Normally this game is played with two additional humans, hidden for example behind smoked glass, but in the Turing test one of the humans is replaced by a computer. If the computer fools the judge into thinking they are talking to a human being, the machine has passed the Turing test and is therefore, on Turing’s criterion, a thinking entity.

Now we must ask ourselves: can the Turing test prove intelligence? In other words, is a machine intelligent if, when given a stimulus (a question), it can produce answers that sound human? Philosophical movements such as functionalism claim that mental states (e.g. beliefs, desires, emotions) are constituted by their functional role – their causal relations to other mental states, sensory inputs and behavioural outputs. On the functionalist view, mental states can therefore be realised not only in brains but in other systems such as computers, provided the system performs the appropriate functions. In particular, Putnam’s machine state functionalism goes further, claiming that the mind of any creature can be regarded as a Turing machine and can thus be fully specified by a set of instructions (a machine table).
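To make the notion of a machine table concrete, here is a minimal sketch in Python; the machine, its states and its task (inverting a binary string) are invented purely for illustration, but the structure is the point: the entire behaviour is exhausted by one finite lookup table.

```python
# A hypothetical "machine table": the machine's whole behaviour is a mapping
# (state, symbol) -> (new state, symbol to write, head movement).
TABLE = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),   # blank cell: stop
}

def run(tape_str):
    """Execute the machine table on an input tape until it halts."""
    tape = list(tape_str) + ["_"]      # "_" marks the blank end of the tape
    state, head = "scan", 0
    while state != "halt":
        state, tape[head], move = TABLE[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))  # -> "0100"
```

On Putnam’s view, specifying such a table for a creature would specify its mind completely; the objections discussed below target exactly this idea.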

But what does it mean to pass the Turing test? Is passing the Turing test a necessary condition for intelligence, a sufficient condition or does it only provide a probability that said machine is intelligent?

If we claim that passing the Turing test is a necessary condition, this would mean that the only way something can be considered intelligent is by passing the Turing test. Recall that the test depends on judges who decide whether the machine has fooled them; the mental capacity or computer literacy of the judge therefore plays a big role in whether the machine passes. This excludes from consideration any intelligence that a human cannot recognise. Turing must therefore weaken his claim to: passing the Turing test is a sufficient condition for intelligence. Thus, if a machine passes the test it must be intelligent, but it does not follow that a machine which fails the test lacks intelligence.

But even with this sufficient condition, Turing faces a strong counterargument by Ned Block, called the “Blockhead argument”. Block aims to show that a machine can pass the Turing test without being intelligent. He imagines the following machine:

–  Consider the ultimate in unintelligent Turing Test passers, a hypothetical machine that contains all conversations of a given length in which the machine’s replies make sense.(…) Since there is an upper bound on how fast a human typist can type, and since there are a finite number of keys on a teletype, there is an upper bound on the “length” of a Turing Test conversation. Thus there are a finite number of different Turing Test conversations, and there is no contradiction in the idea of listing them all. (…) The programmers start by writing down all typable strings, call them A1…An. Then they think of just one sensible response to each of these, which we may call B1…Bn. (…) Think of conversations as paths downward through a tree, starting with an Ai from the judge, a reply, Bi from the machine, and so on. [2]

Although this machine passes the Turing test, we know it is simply matching a set of predefined questions against predefined answers. Knowing how it operates, it does not seem that this machine is endowed with intelligence, or at least not with anything we would recognise as human intelligence. We can still attack the “Blockhead argument” by claiming that the machine Block describes is impossible to build, or that it is, in fact, in some sense intelligent.
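A toy version of Block’s machine can be sketched in a few lines; the conversation tree below is invented for illustration, and Block’s point is precisely that no matter how astronomically large the table grows, the mechanism never becomes anything more than this lookup.

```python
# A miniature "Blockhead": every conversation path (the sequence of the
# judge's lines so far) maps to one canned, sensible-sounding reply.
REPLIES = {
    ("Hello",): "Hi there! How are you?",
    ("Hello", "What is your favourite colour?"): "Blue, like a clear sky.",
}

def blockhead_reply(judge_lines):
    # The machine's entire "intelligence" is exhausted by this single lookup.
    return REPLIES.get(tuple(judge_lines), "Sorry, could you rephrase that?")

print(blockhead_reply(["Hello"]))  # -> "Hi there! How are you?"
```

The replies may look intelligent to a judge, yet the process producing them is a table scan – which is exactly the gap between behaviour and intelligence that Block exploits.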

Another famous counterexample is the so-called “Chinese room argument”, given by Searle. He imagines a room containing a person and a book of instructions. The person is handed messages in Chinese characters and uses the book to compose replies. This is done in such a way that the people outside will think the person inside the room understands Chinese, which is not the case. With this, Searle claims that a computer executing a programme which receives inputs and merely replies with pre-established answers would not understand the conversation either, and therefore a machine that passes the Turing test this way cannot be intelligent.

With these arguments, we have shown that passing the Turing test is neither a necessary nor a sufficient condition for intelligence. We do not deny that it can give some probability of intelligence, but even this is hard to quantify, as it rests on the subjective criterion of whether the judge is fooled.

The main question, however, is whether the Turing test can prove consciousness, so before we proceed to a conclusion, we should determine how to define consciousness. For example, we cannot deny that animals are conscious in some way, although differently from what we would call human consciousness. Nor should we fall into the anthropocentric trap of expecting everything conscious to be conscious in exactly the way we are. We can, however, note that consciousness as normally discussed in the philosophical literature is associated with having thoughts, a subjective point of view, self-awareness and being the subject of conscious states – or, as Thomas Nagel defines it, there is “something that it is like” to be a creature, “meaning there exists some subjective way the world seems or appears from the creature’s mental or experiential point of view” [4]. So, for instance, animals are conscious because there is something it is like for an animal to experience the world through its senses, even if humans cannot grasp it from our human point of view. This suggests that consciousness can exist without human intelligence.

Both Daniel Dennett and Wittgenstein tie human consciousness to language: Dennett claims that for consciousness to exist there must already be an infrastructure of perceptual, motor and intellectual skills in place, while for Wittgenstein it is only with language, and the understanding of language, that it is possible to think and to be conscious.

In all these different definitions of consciousness, we can observe that consciousness implies some sort of understanding, whether it is knowing what it is like to be something (Nagel’s definition) or understanding language. But what if understanding only arises from living and experiencing things? Paraphrasing Heidegger: believing that minds are built on basic, atomic, context-free sets of facts and rules, with objects and predicates and discrete storage and processing units, makes no sense. The subject-object model of experience, in which we situate ourselves as separate from the world and others, cannot describe experience correctly. Being in the world is more primary than thinking and solving problems; it is not representational. Human consciousness is a product of living in a collective world and being completely immersed in a physical one. [3]

In other words, because it makes no sense to believe that our minds are simply built on context-free sets of facts, rules, concepts and discrete storage and processing units, we cannot expect a machine built on these suppositions to have a mind itself, or at least a mind we can recognise as conscious.

The machines in question are computers, deprived of any opportunity to interact with a world, relying instead on the programmer’s ability to give them intelligence, consciousness and a descriptive representation of the world. The test, meanwhile, simply judges how well a machine can manipulate language and output sentences. The real problem with the test is that it does not distinguish a brute-force coded machine from a machine that actually has some learning capacity; therefore, even if a machine passes the Turing test, we cannot say it is either conscious or intelligent.

Furthermore, the criterion used in the Turing test – the ability to mimic a human being – seems flawed. Consider Russell and Norvig’s analogy with the history of flight: planes were judged not by how well they flew compared to birds, but by how well they flew in their own right. [3]

One could argue that we can create a machine with a simulated world in which it can interact and learn. In fact, it seems hard to deny that computers have at least some intelligence, especially after Deep Blue beat Kasparov and IBM’s Watson beat the best Jeopardy! players. But the fact remains that the Turing test is not specified in a satisfactory way: by relying on human criteria of appearing intelligent or conscious, it gives us no solid method of determining intelligence or consciousness. Furthermore, as shown, the test simply is not sophisticated enough to filter out unintelligent machines. Both the Blockhead argument and the Chinese room argument suggest that how something produces intelligent behaviour is fundamental to determining whether it is intelligent.

We do not reject the possibility of computers being conscious in the future, but we can establish that judging whether a being is conscious with the Turing test is invalid. Self-awareness, preferences and a sense of one’s own existence make up human consciousness, and the Turing test has no way of testing for any of these, or of distinguishing a very complete database of answers from a genuinely intelligent machine. While we cannot dismiss the possibility of a consciousness different from the one a human being understands, we can safely say that the Turing test is not a valid test to prove consciousness.


[1] Turing, A. M. (1950). “Computing Machinery and Intelligence”. Mind, 59, 433–460.

[2] Block, Ned (1981). “Psychologism and Behaviorism”. New York University.

[3] Arora, Namit (2011). “The Minds of Machines”. Philosophy Now, December 2011.

[4] Nagel, Thomas (1974). “What Is It Like to Be a Bat?”. The Philosophical Review, 83(4), 435–450.

Oppy, Graham and Dowe, David, “The Turing Test”, The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.).

Van Gulick, Robert, “Consciousness”, The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta (ed.).

Cole, David, “The Chinese Room Argument”, The Stanford Encyclopedia of Philosophy (Winter 2009 Edition), Edward N. Zalta (ed.).

McDermott, Drew (2007). “Artificial Intelligence and Consciousness”. Yale University.

Silby, Brent (1998). “Wittgenstein: Meaning and Representation – What Does He Mean?”. University of Canterbury.

Tallis, Raymond (2004). “Why Minds Are Not Computers”. The Philosophers’ Magazine.




after i sat on the bus, driving away from her and this city that was once my home, i watched the rain fall on the windscreen for over an hour. the monotonous movement of the windscreen wipers going back and forth in an unfair battle against the continuously pouring rain brought me back to my childhood, in the back of my father’s car. there is something soothing about this repetition and the memory of when things were simple, potentials instead of concrete realisations.

how do 9 years of commitment to someone, followed by the sudden crumbling of a relationship, materialise? apparently, as 11 cardboard boxes filled to the brim, roughly 200 kgs, shipped across the continent.

i can still see her standing in the middle of our apartment, now half empty, hollow shelves and bare walls, as if the void i felt then had spilled onto the physical world. and i can still see her look as i walked away, followed by the brief glimpse of myself, reflected in the door as i shut it – red faced, swollen eyed, messy hair – and such a hopeless expression that, for a fraction of a second, i felt genuine horror at the man i saw.


I was washing my hands yesterday and I saw a knife. The first thing I thought about was to jab it in my neck. But I guess growing up means you learn to manage these harmful behaviours. Like how I constantly want to drink and do heavy drugs, but limit myself to cigarettes. Like how, now I go for runs and eat healthily, go to bed at sensible hours and never abuse sleeping pills.

I guess, with growing up comes better control and judgement. It sure seems so. Taking responsibility for one’s actions.

Just like 2016’s theme was loneliness, a prevalent theme of 2017 was letting go. I will never fully understand other people. Without exception, I will never gain access to the internal vocabulary, states and experiences of others, making it impossible to understand what actually drives their actions and desires. Thus, spending too much time hypothesising about why others act as they do is useless. Things happen, people do things, and that’s it. I learned that it’s useless to keep ruminating on certain things and that it is ok to have unanswered questions.

This idea of letting go is not only with respect to people, but also to possibilities. One’s life is made of a series of choices and compromises. If one had infinite time, resources and energy, one could attend to all their wishes (or none as one could end up eternally debating between options), but the reality is different. You have a set of options and decisions have to be made. The trick is, decisions are made regardless of whether one makes conscious, deliberate ones.

I’m soon to be 26.

Phone notes

I got drunk and puked on the toilet. My friend told me to stop texting and appreciate reality – that’s when it’s most beautiful. For a moment, I felt like I was melting into the walls, into the sofa, into people, into conversations. Things felt at some sort of equilibrium – either the outside matched the chaos inside; or the inside was finally tamed down.

I screamed obscenities at someone in Greek, Portuguese and Italian. I have blurry pictures and random footage of brilliant minds talking absolute shit. I love it when serious people are still able to generate huge amounts of absurdities.

I slept on a couch with a pillow over my head, so my back hurts, but things felt ok. Not just then, temporarily – the future felt ok too. A good friend is moving back to his home country, and it feels somewhat nice that I am sad about it.

Traditional Indian Prayer

When I am dead
Cry for me a little
Think of me sometimes
But not too much.
Think of me now and again
As I was in life
At some moments it’s pleasant to recall
But not for long.
Leave me in peace
And I shall leave you in peace
And while you live
Let your thoughts be with the living.

The Native American Ishi People Of The Pacific Northwest


– you will never understand others fully.

– nobody is waiting or holding a place for you.

– it’s not “in the end, you are alone”, it’s more like, you are always alone – so, I guess, cherish the few times and those who make you forget about this fact.

– you can’t unsee / unhear things. point proven below.


Where do kids hang out nowadays?

When I was growing up, online hangouts happened on IRC, forums or AltaVista chatrooms. All these platforms had one thing in common: anonymity. The internet was widely known to be a creepy place.

The first time I went online, I was around 9 years old, around the turn of the millennium. I registered a nickname on my country’s IRC network: Radark. I thought it sounded super badass. I convinced a group of people I was a 16 year old hacker. I had watched my older sibling play around with NetBus and Sub7 a few days before, so it seemed easy enough. Plus, the copied book called “Hacking” we got sent by snail mail gave me some street cred.

So off I went on one of my first online adventures, convincing a group of computer science enthusiasts and students that I was a 16 year old hacker. And it all went well for a whole 3 hours, until my sibling came back and told them Radark was a 9 year old kid.

Whoopsy. Too bad – but the great thing was that I could just do /nick aNewName and off I went: another personality, others to fool, but more importantly, others to converse with. Growing up, I chatted with a lot of people. I did not have many friends in real life, so the internet was a great place. In particular, I could talk about anything without fearing the repercussions of my opinions not being accepted. And as easily as you could create a personality, you could erase it. And it was fine; nobody cared. Granted, you could do /whois to figure out whether a nickname was registered. But that was pretty much all you would know about your conversation partner.

Nowadays, I don’t know where kids hang out anymore. I hope I am wrong, but I have the impression kids hang out mostly on Instagram or Snapchat (Facebook is for old people). And this is worrying, because all these platforms are user-centric at their core. They are not primarily chatting platforms, despite being used as such. Indeed, they are broadcasting tools – the user is incentivised to share content, which often ends up being personal content (pictures, videos and so on). This also means that changing or deleting profiles is an annoying process, because you have put effort into them. It also means that the barrier to randomly chatting with someone is now higher – it’s no longer a question of whether a nickname has been registered, but of whether the other person has shared some content themselves, has enough followers, and so on.

I think that, in general, people don’t think much about the information that can be inferred from public social media posts. Indeed, the current state of the internet incentivises users not to think about it: it’s cool, link your phone number to your Facebook, use your real name, share geotagged content, win points, whatever. All those academics publishing research on user privacy? Tinfoil hat dudes.

Anyway, my hope is that I am completely wrong, and kids just use Discord now.