Is the Turing test an acceptable test to prove the existence of consciousness in a machine?

There is no doubt that the matter of intelligence and consciousness in AI has been a subject of debate for a long time – from Descartes, who claimed no machine could behave like a human, to his complete opposite, Marvin Minsky, who claimed that one's common sense could potentially be encoded into a machine given enough computing power, to many other philosophers such as Searle and Dennett. In this essay, our objective is to analyse whether the Turing test is a suitable test for intelligence, determine a relation between intelligence and consciousness, and then address the question of whether the Turing test can prove the existence of consciousness in machines.

Before we can start our analysis, we must first understand what the Turing test is. The Turing test was described in Alan Turing's paper "Computing Machinery and Intelligence". Turing, often referred to as the father of modern computing, was highly influential in the development of computer science, formalising the concepts of algorithm and computation with the Turing machine. In the 1950s, he turned his work towards a more abstract stream; in particular, in the paper referred to above, he addresses the problem of artificial intelligence and proposes a way to evaluate whether a machine is intelligent or not.

In his paper, Turing rephrases the question "Can machines think?" as whether "machines can do what we, as thinking entities, can do" [1]. Turing needed to reformulate the question to overcome the vagueness of the definition of "thinking". Additionally, Turing defines machines as digital machinery – "machines which manipulate the binary digits 1 or 0" [1]. This is because he wants to make clear he is addressing machines that already exist, not synthetic biological material. He further claims that "it is expected that they [computers] can simulate the behaviour of any other digital machine, given enough memory and time" [1].

The Turing test itself is set up in the form of a question-and-answer game. There is a person, the judge, who puts questions to two entities. Normally, this game is played with two additional humans behind smoked glass, but in the Turing test, one of the humans is replaced by a computer. If the computer fools the judge into thinking they are talking to a human being, the machine has successfully passed the Turing test and is therefore, on Turing's criterion, a thinking entity.
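
To make the structure of the test concrete, the following is a minimal illustrative sketch (our own, not Turing's). The respondents and the judge's verdict are hypothetical stand-ins; the point is only the shape of the protocol: hidden identities, typed questions, and a final guess.

```python
# Sketch of the question-and-answer game described above.
# Everything here is a hypothetical placeholder for illustration.
import random

def human_respondent(question):
    return "Honestly, I would rather talk about the weather."

def machine_respondent(question):
    return "Honestly, I would rather talk about the weather."  # imitates the human

def run_imitation_game(questions):
    # Hide which label corresponds to the machine.
    labels = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        labels = {"A": machine_respondent, "B": human_respondent}
    # The judge asks each question of both respondents.
    transcript = [(q, {label: reply(q) for label, reply in labels.items()})
                  for q in questions]
    judged_machine = random.choice(["A", "B"])  # stand-in for the judge's decision
    machine_fooled_judge = labels[judged_machine] is not machine_respondent
    return transcript, machine_fooled_judge

_, fooled = run_imitation_game(["What is your favourite poem?", "Add 34957 to 70764."])
print("Machine fooled the judge:", fooled)
```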

Now we must ask ourselves: can the Turing test prove intelligence? In other words, is a machine intelligent if, when it receives a stimulus (a question), it can produce answers in a way that sounds human? Philosophical movements such as functionalism claim that mental states (e.g. beliefs, desires, emotions) are constituted by their functional roles, that is, by their causal relations to other mental states, sensory inputs and behavioural outputs. Thus, on the functionalist view, mental states can be realised not only in the brain but in other systems, such as computers, provided the system performs the appropriate functions. In particular, Putnam's machine state functionalism goes further, claiming that the mind of any creature can be regarded as a Turing machine and thus can be fully specified by a set of instructions (a machine table).
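
To make the notion of a machine table concrete, here is a minimal illustrative sketch (our own, not Putnam's): a finite table mapping (current state, input) to (next state, output). The states, stimuli and behaviours are hypothetical placeholders, not a serious psychological model.

```python
# Toy "machine table": the whole system is specified by a finite lookup
# from (current state, stimulus) to (next state, behaviour).
MACHINE_TABLE = {
    ("content", "pinprick"):   ("in_pain", "say 'ouch'"),
    ("in_pain", "painkiller"): ("content", "say 'thanks'"),
    ("content", "greeting"):   ("content", "say 'hello'"),
}

def step(state, stimulus):
    """Look up the (state, stimulus) pair; by default, stay in the same state."""
    return MACHINE_TABLE.get((state, stimulus), (state, "stay silent"))

state = "content"
for stimulus in ["greeting", "pinprick", "painkiller"]:
    state, behaviour = step(state, stimulus)
    print(f"{stimulus} -> state={state}, behaviour={behaviour}")
```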

But what does it mean to pass the Turing test? Is passing the Turing test a necessary condition for intelligence, a sufficient condition, or does it only provide a probability that the machine in question is intelligent?

If we claim that passing the Turing test is a necessary condition, this would mean that the only way something can be considered intelligent is by passing the Turing test. Recalling the test, we know that there are people, the judges, who decide whether the machine fooled them or not. We must also note that the mental capacity or computer literacy of the judge plays a big role in whether the machine will or will not succeed in passing the test. This excludes from consideration any intelligence that a human cannot recognise. With this, Turing must weaken his claim to: passing the Turing test is a sufficient condition for intelligence. Thus, if a machine passes the test then it must be intelligent, but it does not follow that a machine lacks intelligence if it fails the test.

But even with this sufficient condition, Turing is faced with a strong counterargument by Ned Block, called the "Blockhead argument". Block aims to show that even if a machine passes the Turing test, it does not necessarily have to be intelligent. He imagines the following machine:

–  Consider the ultimate in unintelligent Turing Test passers, a hypothetical machine that contains all conversations of a given length in which the machine’s replies make sense.(…) Since there is an upper bound on how fast a human typist can type, and since there are a finite number of keys on a teletype, there is an upper bound on the “length” of a Turing Test conversation. Thus there are a finite number of different Turing Test conversations, and there is no contradiction in the idea of listing them all. (…) The programmers start by writing down all typable strings, call them A1…An. Then they think of just one sensible response to each of these, which we may call B1…Bn. (…) Think of conversations as paths downward through a tree, starting with an Ai from the judge, a reply, Bi from the machine, and so on. [2]

Although this machine passes the Turing test, we know it is simply matching a set of predefined questions with predefined answers. Knowing how it operates, it does not seem that this machine is endowed with intelligence, or at least not at the level we would consider human intelligence. However, we can still attack the "Blockhead argument" by claiming that the machine he describes is impossible to build or that, in fact, this machine is in some sense intelligent.
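
To see why such a machine appears unintelligent once we know how it works, here is a toy sketch of the kind of lookup Block describes. The conversations listed are hypothetical placeholders; a real Blockhead would enumerate every sensible conversation up to the length bound mentioned in the quote above.

```python
# Toy "Blockhead": all conversational competence lives in a finite lookup
# from the conversation so far to one canned, sensible reply.
CONVERSATION_TREE = {
    ("Hello, who are you?",): "Just someone who enjoys a good chat.",
    ("Hello, who are you?", "Do you like poetry?"): "I do, Keats especially.",
    ("What is 2 + 2?",): "Four, unless it is a trick question.",
}

def blockhead_reply(judge_messages):
    """Return the canned reply for this exact sequence of judge messages."""
    return CONVERSATION_TREE.get(tuple(judge_messages), "Hmm, tell me more.")

print(blockhead_reply(["Hello, who are you?"]))
print(blockhead_reply(["Hello, who are you?", "Do you like poetry?"]))
```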

Another famous counterexample is the so-called "Chinese room argument", given by Searle. He imagines a room with someone inside and a book of instructions. This person is given characters in Chinese and uses the book to reply to the different messages. This is done in such a way that, when an answer is sent out, the people outside will think the person inside the room understands Chinese, which is not the case. With this, Searle claims that a computer executing a programme which receives inputs and merely replies with pre-established answers would not understand the conversation either, and therefore a machine that passes the Turing test by means of such a lookup table cannot be intelligent.
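
Again as a toy sketch (ours, not Searle's), the operator's rule book can be pictured as a purely syntactic mapping from input characters to output characters, with no understanding involved at any point; the entries below are hypothetical placeholders.

```python
# Toy "Chinese room": the operator matches the shapes of incoming characters
# against a rule book and copies out the corresponding reply, understanding
# none of it.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room_operator(incoming):
    """Mechanically look up the reply; no Chinese is understood at any point."""
    return RULE_BOOK.get(incoming, "请再说一遍。")

print(room_operator("你好吗？"))
```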

With these arguments, we have shown that passing the Turing test is neither a necessary nor a sufficient condition for intelligence. We do not deny that it can give a probability of intelligence, but even this is hard to account for, as it is based on the subjective criterion of the judge being fooled.

The main question, however, is whether the Turing test can prove consciousness, so before we proceed to a conclusion, we should determine how we can define consciousness. For example, we cannot deny that animals are conscious in some way, although differently from what we would call human consciousness. Nor should we fall into the arrogance of expecting everything to be conscious in exactly the way we are. However, we can admit that consciousness, as normally referred to in the philosophical literature, is associated with having thoughts, a subjective view, self-awareness, and being the subject of conscious states, or, as Thomas Nagel defines it, a creature is conscious "if there is something that it is like" to be that creature, meaning there exists some subjective way the world seems or appears from the creature's mental or experiential point of view [4]. So, for instance, animals are conscious because there is something that it is like for the animal to experience the world through its senses, although we might not understand it from our human point of view. This suggests that there exists consciousness without human intelligence.

Both Daniel Dennett and Wittgenstein address human consciousness by tying it to language: consciousness is based on language; for consciousness to exist there must already be an infrastructure of perceptual, motor and intellectual skills in place (Dennett); and only with language, and the understanding of language, is it possible to think and to be conscious (Wittgenstein).

In all these different definitions of consciousness, we can observe that consciousness implies some sort of understanding, whether it is knowing what it is like to be something (Nagel's definition) or understanding language. But what if understanding only arises from living and experiencing things? Paraphrasing Heidegger: believing that minds are built on basic, atomic, context-free sets of facts and rules, with objects and predicates, and discrete storage and processing units, makes no sense. The subject-object model of experience, in which we situate ourselves as separated from the world and from others, cannot describe experience correctly. Being in the world is more primary than thinking and solving problems; it is not representational. Human consciousness is a product of living in a collective world and being completely immersed in a physical world. [3]

In other words, because it makes no sense to believe that our minds are simply built on context-free sets of facts, rules, concepts and discrete storage and processing units, we can’t expect a machine that is built with these suppositions to be able to have a mind itself, or at least a mind we can recognise as conscious.

The machines under consideration are computers, devoid of the opportunity to interact with a world and relying on the programmer's ability to give the machine intelligence, consciousness and a descriptive representation of the world. The test, on the other hand, simply judges how well a machine can manipulate language and output sentences. The real problem with the test is that it does not distinguish a brute-force, hard-coded machine from a machine that actually has some sort of learning capacity; therefore, even if a machine passes the Turing test, we cannot say it is either conscious or intelligent.

Furthermore, the criterion used in the Turing test – the ability to mimic a human being – seems flawed. Consider Russell and Norvig's analogy with the history of flight: planes were not tested on how well they flew compared to birds, but on how well they performed in their own right. [3]

One might argue that we could create a machine with a simulated world in which it can interact and learn. In fact, it seems hard to deny that computers have at least some intelligence, especially with Deep Blue beating Kasparov and the best Jeopardy! players being beaten by IBM's Watson. But the fact remains that the Turing test is not specified in a satisfactory way: by relying on a human's criterion of appearing intelligent or conscious, it does not give us a solid method for determining intelligence or consciousness. Furthermore, as shown, the test simply is not sophisticated enough to filter out unintelligent machines. Both the Blockhead argument and the Chinese room argument suggest that the way something produces intelligent behaviour is fundamental to determining whether it is intelligent or not.

We do not reject the possibility of computers being conscious in the future, but we can establish that judging whether a being is conscious with the Turing test is irrelevant or invalid. Self-awareness, preferences and a sense of one's own existence make up human consciousness, and there is no way the Turing test can test for any of these, or distinguish between a very complete database of answers and a genuinely intelligent machine. While it is true that we cannot dismiss the possibility of forms of consciousness different from the one understood by human beings, we can safely say that the Turing test is not a valid test to prove consciousness.

REFERENCES

[1] Turing, A. M. (1950). "Computing Machinery and Intelligence". Mind, 59, 433–460.

[2] Block, Ned (1981). "Psychologism and Behaviorism". New York University. URL: http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/Psychologism.htm

[3] Arora, Namit (2011). "The Minds of Machines". Philosophy Now, issue 87, December 2011. URL: http://www.philosophynow.org/issue87/The_Minds_of_Machines

[4] Nagel, Thomas (1974). "What Is It Like to Be a Bat?"

Oppy, Graham and Dowe, David (2011). "The Turing Test". The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.). URL: http://plato.stanford.edu/archives/spr2011/entries/turing-test/

Van Gulick, Robert (2011). "Consciousness". The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta (ed.). URL: http://plato.stanford.edu/archives/sum2011/entries/consciousness/

Cole, David (2009). "The Chinese Room Argument". The Stanford Encyclopedia of Philosophy (Winter 2009 Edition), Edward N. Zalta (ed.). URL: http://plato.stanford.edu/archives/win2009/entries/chinese-room/

McDermott, Drew (2007). "Artificial Intelligence and Consciousness". Yale University. URL: http://www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf

Silby, Brent (1998). "Wittgenstein: Meaning and Representation – What Does He Mean?". University of Canterbury. URL: http://www.def-logic.com/articles/silby013.html

Tallis, Raymond (2004). "Why Minds Are Not Computers". The Philosophers' Magazine.

 


Phone notes

I got drunk and puked on the toilet. My friend told me to stop texting and appreciate reality – that's when it's most beautiful. For a moment, I felt like I was melting into the walls, into the sofa, into people, into conversations. Things felt at some sort of equilibrium – either the outside matched the chaos inside, or the inside was finally tamed down.

I screamed obscenities at someone in Greek, Portuguese and Italian. I have blurry pictures and random footage of brilliant minds talking absolute shit. I love it when serious people are still able to generate huge amounts of absurdities.

I slept on a couch with a pillow over my head, so my back hurts, but things felt ok. Not just then, temporarily, but also, the future felt ok. A good friend is leaving this place and it feels somewhat nice that I am sad about it.

Traditional Indian Prayer

When I am dead
Cry for me a little
Think of me sometimes
But not too much.
Think of me now and again
As I was in life
At some moments it’s pleasant to recall
But not for long.
Leave me in peace
And I shall leave you in peace
And while you live
Let your thoughts be with the living.

The Native American Ishi People Of The Pacific Northwest

Mantras

– you will never understand others fully.

– nobody is waiting or holding a place for you.

– it’s not “in the end, you are alone”, it’s more like, you are always alone – so, I guess, cherish the few times and those who make you forget about this fact.

– you can’t unsee / unhear things. point proven below.

 

Where do kids hang out nowadays?

When I was growing up, online hangouts happened on IRC, forums or AltaVista chatrooms. All these platforms had one thing in common: anonymity. The internet was objectively known to be a creepy place.

The first time I went online, I was around 9 years old, around the turn of the millennium. I registered a nickname on my country's IRC network: Radark. I thought it sounded super badass. I convinced a group of people I was a 16-year-old hacker. I had watched my older sibling play around with NetBus and Sub7 a few days before, so it seemed easy enough. Plus, the copied book called "Hacking" that we got sent by snail mail gave me some street cred.

So off I went on one of my first online adventures, convincing a group of computer science enthusiasts or students that I was a 16-year-old hacker. And it all went well for the whole of three hours, until my sibling came back and told them Radark was a 9-year-old kid.

Whoopsie. Too bad; the great thing was I could just do /nick aNewName and off I went: another personality, others to fool, but more importantly, others to converse with. While growing up, I chatted with a lot of people. I did not have many friends in real life, so the internet was a great place. In particular, I could talk about anything without fearing the repercussions of my opinion not being accepted. And as easily as you could create a personality, you could erase it. And it was fine, nobody cared. Granted, you could do /whois to figure out whether a nickname was registered or not. But that was pretty much all you would know about your conversation partner.

Nowadays, I don't know where kids hang out anymore. I hope I am wrong, but I have the impression kids hang out mostly on Instagram or Snapchat (Facebook is for old people). And this is worrying because all these platforms are, at their core, user-centric. They are not primarily chatting platforms, despite being used as such. Indeed, they are broadcasting tools – the user is incentivised to share content, which often ends up being personal content (in the form of pictures, videos or other media). This also means that changing or deleting profiles is an annoying process, because you put some effort into those profiles. It also means that the barrier to randomly chatting with someone is now higher – it's no longer whether the nickname has been registered or not, but whether the other person has shared some content themselves, has enough followers, and so on.

I think that, in general, people don't think much about the information that can be inferred from public social media posts. Indeed, the current state of the internet incentivises users not to think about it: it's cool, link your phone number to your Facebook, use your real name, share geotagged content, win points, whatever. All those academics publishing stuff on user privacy? Tinfoil hat dudes.

Anyway, my hope is that I am completely wrong, and kids just use Discord now.

In need of more science, less reality.

This person I know moved back to Switzerland and now lives half of his time in his small hometown, while working in our city the other half. I thought it was strange, at first. But recently, I have been having the urge to go back to a familiar place too. I think about going back to my country. I don't know what is left for me there, but I guess it's the closest thing to home, despite my changed mannerisms, my slang being eight years out of date, and my anglo-influenced linguistic constructions.

Sometimes, when I am heading back to my apartment, I think about spending time with my mother, going to Pingo Doce, disappearing in the streets of the capital and feeling the sun on my skin. I imagine working as a programmer in a small, unknown company for a small salary, dealing with data visualisation, Python notebooks and well-established libraries, free of performance or well-posedness concerns. I imagine feeling content, like I can actually do something and know just enough. I imagine feeling unchallenged and not like a loser. I am ashamed to admit it, but I don't find that as revolting as I used to.

Directing action

A few days ago, in between cigarettes, I got into a brief discussion about the immorality of making a lot of cash, in particular in financial markets. It's a very popular stance, to hate bits of capitalism while enjoying the benefits coming from it.

I think it has to do with what you do with the money. It makes me think; sometimes I do consider going into finance. Said friend told me about the possibility of earning between a quarter and half a million per YEAR (minus tax) in some countries, using the skills we have.

Imagine how much you could do with that money. I don't mean buying shit or a house. I mean, fuck, would it be possible to fund research or relevant projects with, let's say, a chunk of your salary – around 100k? In Switzerland, that is roughly the yearly salary of two PhD students. In the UK, you could probably fund five PhD students per year.

Lately I have been thinking about what one can do to become an influential individual. With the outcome of these elections in the U.S., I think a lot of people are wondering the same. My friend's post on FB makes me think:

“While at school I always wondered what the German people must have felt like when Hitler was progressively taking on more and more power. Could they see? Were they scared?
I am and yet I’m short on ideas on what to do next. Should I be protesting? Should I be getting into politics? Should I be seeking to get citizenship? Should I be volunteering at schools?”

Protesting? Politics? Volunteering? I don't know. I have a bad impression of politicians and politics in general. We need several things – putting it in the words of Zizek, we need the closing of the class gap. Furthermore, there's the need to somehow make a cultural impact, promote the right values, and fix real problems such as climate change or world hunger. I can't see this happening in politics; I see it happening despite politics, through scientific and technological advances and philanthropy. (Of course I am biased as fuck.)

There are two realms, so to speak: the soul and the objective – for lack of better words. The soul concerns things such as ethics, values, etc. – and it's my belief that these things are better addressed through cultural movements – in one word, art. The second realm, the objective, concerns material things, engineering questions: how can we exploit the resources of the Earth in a sustainable way, how can we better distribute and deliver medicine and food, how can we curb CO2 emissions, and so on. These are scientific-engineering questions. Both of these things are important and, furthermore, I believe the possibility of efficiently tackling these objective problems comes from fixing the soul too – i.e. you need to give a shit, to have an irrational emotional response that drives you to tackle the problem.

Anyway, I brought up the idea with a friend: selling out and making money (even if it's by exploiting financial markets) with the ultimate goal of philanthropy. However, they remarked that whatever problem I set out to solve, I might just end up widening the gap between the rich and the poor – and I don't mean Europe-poor, I mean exploited-third-world-country poor. So another question arises: how can you figure out what needs to be solved if you never come into contact with these problems in the first place?

Echoing my friend's question: what the fuck is one supposed to do?