After the votes.

We did not comment here on the recent referenda for several reasons. The one about the age of Presidential candidates attracted very little interest, and that about marriage equality - as it was termed - was probably the subject of too much commentary. Moreover, I for one sensed that much of what was said after the result became obvious overestimated the importance of what had happened. Sometimes it is much better to wait.

Those interested in further elucidation of the marriage issue should glance at two good pieces: the first, by Ruth Dudley Edwards in the Telegraph (not among our links, but easy to find), is written by a straight woman who would have voted Yes; the second, by Patrick Manning (a gay man who voted No), appears on his website “Thicker than Talk”, to which we do link.

On reflection I am not unduly agitated about the constitutional change involved - although I was certainly surprised by the size of the majority for gay marriage. Nor was I much concerned by the pitiful contributions made by Messrs Cameron and Kenny - the former of course needs the support of the latter to get him out of the hole he has dug for himself over Europe.

In truth the episode was not a very interesting one. The nature of the Gnostic “causes” (Nazism, feminism etc.), of which the gay marriage agitation was of course one, has been well understood since Richard Hooker analysed the Puritan case in the sixteenth century. More recently our understanding has been refined by Eric Voegelin, Norman Cohn, and Michael Burleigh. What was to some extent new was the degree to which the Yes campaign was funded by outside business interests (which was well described in a recent issue of “Phoenix”), but as Cardinal Newman pointed out in a very different connection - two can play at that game!

Nevertheless Mr. Manning, who writes too little, is right when he says that the danger inherent in the whole episode is the precedent which has been set. But how grave is the threat? I doubt that a cascade has been unleashed. After all, the Irish electors may have misjudged gay marriage, but they showed their common sense when they rejected the admittedly idiotic proposal to lower the permitted age of potential Presidential candidates. The culture war is certainly not over in Ireland. The coming battle over abortion will be a very different kettle of fish, even more divisive, and much less financially one-sided!

I voted No. But this may have been a mistake. Abstention might have been a better choice; for, after the tumult has died down and most of the posters have been removed from the telephone poles, the whole sad saga reminds me of a remark made by my late mother, who, when obliged to read an account of some farmyard antics, memorably announced: “I don’t mind being shocked, but I won’t be bored!” R.M.

Nonsense about computers.

 

By Robert C.B. Miller [1]
1 Science Fantasy.
There is a popular modern fantasy that human beings could be supplanted by computers or robots. These machines, it is suggested, could become so much more intelligent than humans that they would cease to be our servants and become our masters. In fifty years or less, it is supposed, artificial intelligence will have equalled and surpassed the natural intelligence of even the smartest human beings. Already computers have beaten chess masters, and similar and greater successes are confidently predicted. Indeed it is suggested that they could take control, like the malevolent computer HAL in Stanley Kubrick’s science fiction film ‘2001: A Space Odyssey’.

A related fantasy is that one day, soon, it will be possible for human beings to obtain immortality by uploading their minds into computers. By doing so they would achieve a form of immortality as computer hardware, software and storage. If the system wore out, the human software and data could be transferred to another computer system.

These supposed possibilities are fuelled by advances in neuroscience and artificial intelligence. They assume, what is almost a commonplace, that human beings are really computers or robots which just happen to be made of meat. Both of these fantasies are just that – fantasies. They are based on illusions wrought by a misunderstanding of the psychological qualities and attributes that can be applied to human beings and some animals. The purpose of this paper is to try to remove some of these misunderstandings and to show up these fantasies for the illusions they are. Much of the argument derives from the writings of M.R. Bennett and P.M.S. Hacker (Hacker, 2013) (Bennett & Hacker, 2008) (Bennett & Hacker, 2003) (Hacker, 2007) (Hacker, 1986), and it is in the philosophical tradition of Aristotle and Wittgenstein. We will explore in turn both of these extraordinary modern beliefs.

2 Nonsense about computer intelligence.
The idea that we face the danger of being supplanted by super-intelligent robots has very eminent sponsors and supporters in famous universities. For example:

Professor Stephen Hawking (Cambridge UK) recently claimed that there was indeed a danger that:  “[An AI based computer] would take off on its own, and re-design itself at an ever increasing rate. …. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” (Cellan-Jones, 2014)

Bill Gates has similar fears, stating recently: “I am in the camp that is concerned about super intelligence.” (Statt, 2015)

Ray Kurzweil (MIT) has suggested that by the year 2045 computers will have become more intelligent than human beings; this will mark the ‘singularity’, the point at which computer intelligence exceeds that of human beings and we are no longer able to control it.

Fears that humans may be supplanted by computers are based on a series of misunderstandings and misconceptions. It is not that there is a limit to the growth of the intelligence of computers which Stephen Hawking or Bill Gates have overlooked, or that scientists have discovered (or very well could discover) limits to the increase in computing power. It is rather that it makes no more sense to speak of a computer being intelligent than it does to say that a slide rule calculates or that a stone tool cuts. (Of course the latter may be used by a human being or a monkey to cut meat or skin, but that is a different matter.) Saying that a slide rule or computer calculates makes no more sense than attempting to estimate the weight of the laws of England, to determine the colour of the rules of chess, or to fix the geographical location of the penal code. The statement that computers will become so intelligent that they threaten humankind is on a par with fears that on a long voyage the reading of the passengers may so increase the weight of their learning that the ship becomes overloaded and sinks.

But why is it that so many people are deceived and fancy that it makes sense to speak of the intelligence of computers and its increase? Why is it that we fall so easily into the trap of thinking about the intelligence of computers when we can so easily see the error of speculating about the weight of the laws of England? There are no physico-legal research programmes to investigate the weight of books in law libraries. Why is this nonsense latent rather than patent?

The explanation is that we have a natural tendency to reify metaphors. Take the example of a chess-playing computer. It is easy enough to say that ‘Deep Pink 1.1’ beat a chess Grand Master and World Champion, and also to say that if Deep Pink 1.1 loses then a new improved version, Deep Pink 1.2, would win because it is more intelligent. But these terms ‘play’, ‘beat’ and ‘intelligent’, with reference to the various versions of the computer Deep Pink, are metaphors. Deep Pink no more plays chess, wins at chess or displays different degrees of intelligence in its various versions than a slide rule divides or multiplies.

But what really happens? The match between the grand master and Deep Pink is between a human chess expert and other human beings who have programmed a computer for use as a tool in playing chess. The match is between the programmers using the computer and the grand master. This reveals the crucial point that all computers, indeed all possible computers, are tools designed, programmed and built by human beings. (For a similar line of reasoning, see Professor John Preston’s article “What are computers (if they are not thinking things)?” (Preston, 2012).) It may be objected that computers can innovate and produce unexpected effects. But all tools are like this. No doubt our ancestors discovered that stone tools could be used in a variety of different and sometimes unexpected ways. Again, it may be claimed that computers can or will be able to programme themselves. But their ability to do this will be constrained by the programming of their human originators. No doubt they will produce unexpected results, good and bad, but this is in the nature of tools. And they can often be altered to remove the bad results and enhance the good.
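
The point can be made concrete by looking at the kind of procedure a chess program actually consists of. The following is a minimal sketch, in Python, of the textbook ‘negamax’ search on which such programs are elaborated; the function and parameter names are mine, for illustration, and not those of any real engine:

```python
# A minimal sketch of the move-selection core of a chess program.
# Every 'decision' the machine makes is exhaustively specified in
# advance by its human programmers: the rules of the game
# (legal_moves, apply_move), the goals (the hand-written evaluate
# function) and the search procedure itself.

def negamax(position, depth, evaluate, legal_moves, apply_move):
    """Return (score, best_move) from the point of view of the side to move."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        # The machine's 'judgement' of a position is nothing but the
        # programmers' scoring rule applied to it.
        return evaluate(position), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        # The opponent's best reply is valued by negation - a
        # bookkeeping convention laid down by the programmer, not a
        # thought entertained by the machine.
        score, _ = negamax(apply_move(position, move), depth - 1,
                           evaluate, legal_moves, apply_move)
        if -score > best_score:
            best_score, best_move = -score, move
    return best_score, best_move
```

Nothing in this procedure answers to ‘playing’ or ‘wanting to win’; the program mechanically ranks positions by a rule its designers wrote down.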

There was much excitement when the powerful IBM computer ‘Deep Blue’ beat the chess World Champion Garry Kasparov in a game in February 1996 and then in a match in May 1997 (both under tournament rules). In fact this is a mis-description of what happened. A team of IBM computer programmers led by Feng-hsiung Hsu and Murray Campbell, using ‘Deep Blue’, beat Garry Kasparov at chess. This is no more surprising or worthy of comment than if a competitor with a dictionary won a spelling bee against competitors without dictionaries. The fact that Deep Blue is preserved in the Smithsonian Museum in Washington is a telling sign of the depth of the confusion. Similar considerations apply to the victory in February 2011 of the programmers of the IBM computer ‘Watson’ in the TV general knowledge quiz game ‘Jeopardy’.

We can now see why it is a mistake to reify the metaphors of playing and winning at chess and of displaying intelligence when they are applied to computers or other tools. It involves ascribing psychological predicates which can properly be ascribed only to human beings and (some) animals. Take the question of whether human beings and animals can be happy, thoughtful, sad, animated, in pain or anxious – or any other range of psychological predicates you please. Plainly it makes sense to say that a normal person can be all of these things. It is also true that a monkey can be happy, sad or in pain. But one has doubts about whether a sheep can be happy or sad, and it is just crazy to speculate whether an ant or an amoeba can be. If a biology doctoral student proposed a research project to discover whether amoebae could be happy, he would be rapidly shown the door; and this not because the project would be too difficult, but because it is completely nuts. Of course there are doubtful cases. The Portia spider appears to deliberate on the best means of catching her prey, but does she really? How would we find out? What tests could we apply? But certainly this cannot be asked of an ant or an amoeba. The point can be illustrated by Wittgenstein’s famous statement that a dog can fear that his master will beat him, but not that his master will beat him tomorrow. (Wittgenstein, 1968 (first published 1953), p. 166e)

Some of these distinctions are illustrated in the following table. [Table: which psychological attributes - pain, happiness, deliberation and the like - can sensibly be ascribed to human beings, monkeys, sheep, ants and amoebae.]
The table shows whether assorted psychological attributes can be sensibly applied to assorted creatures. All can be, and are, applied to (normal, healthy) human beings, and none to amoebae. One would be thought crazy (or more likely to be making a joke) to say that an amoeba was deliberating about the source of its next meal or feeling sad because it had nothing to eat. The point is that to ascribe predicates we need criteria for their application, and without such criteria we have no basis for making any attribution. Such criteria are obvious and usually hard to misinterpret. When we see a person injure themselves, it is no deduction or result of a theoretical process to be able to exclaim that the person is in pain - or, more probably, to rush to their aid. [2] This analysis should not be construed as behaviourism, the (now old-fashioned) idea that psychological predicates merely represent behaviour. Although we can suppress first-person avowals of pain, for example, it does not follow that the criteria for the proper application of the word are not behavioural. If the expression of pain by avowal (‘That hurt!’) and by behaviour (rubbing the place of an injury, crying out) never took place, the word ‘pain’ would be without use and meaningless.

But note that we can only do this if the creature is like a human being. And the more like a human being it is in appearance and behaviour, the more psychological predicates we can apply to it. Thus monkeys can have more psychological attributes applied to them than sheep, ants or amoebae. And note too that by saying that we can apply psychological predicates we just mean that humans and animals can feel pain and that ants and amoebae cannot.

But what about computers and humanoid robots? Surely the latter can be constructed to mimic the behaviour of human beings, so that they could be said to acquire psychological attributes. If a robot behaves approximately like a human being, surely it could be said to be sad, happy, thoughtful or in pain. The difficulty is that neither a computer nor a robot can be anything like a human being or an animal. First, computers and robots are tools, no different in this respect from stone tools, steam engines, slide rules or laptop computers. They are always wholly the instruments of their designers and users, since they are designed, programmed, built and used by human beings. They can have no ends or purposes of their own. Second, neither computers nor robots behave like human beings or animals. They do not seek out food, eat, drink, tell jokes, feel pain, scratch themselves, agonise over decisions, seek and find shelter or reproduce. Third, they are not made of flesh, but of plastic, silicon and metal.

Another obvious difficulty is that, because they are designed, programmed and used by people, they cannot be responsible for their actions. Take the familiar distinction between agent and principal - a builder and his client, for example. Insofar as he acts for his client, the builder is his agent and has no responsibility for the actions which he takes on his client’s behalf. But a robot can never account for its actions or take the blame if things go wrong. All its actions will be the responsibility of its users, programmers and designers; the share of responsibility between these individuals (or teams of individuals) will vary from case to case. It is not for nothing that ‘robot’ is derived from the Czech word ‘robota’, which means forced labour of the kind done by serfs. But unlike a real slave the robot will not and cannot have any residual interests and purposes of its own. This means that computers and robots can never emulate the vast swathe of human behaviour which involves responsibility. An entity which cannot have responsibility is necessarily completely unlike any normal human being, who is answerable for his actions.

Imagine someone discovering a watch on a seashore. He picks it up and examines it. He winds it up, it ticks, and he reasonably concludes that it is an artefact. But suppose he sits on a rock and watches it. After five minutes, he notices that it moves. It then scurries off and climbs into an empty sea shell. It is a hermit watch. After observing it for several days the observer discovers that it hunts for food, hides in rock pools and moves to larger and larger shells as it grows. It also mates, lays eggs and (minimally) cares for its young. If the watch did none of these things and only had the ‘behaviour’ of a watch (ticking and showing the time), it would be reasonable to conclude that it was designed. On the other hand, the animal-like behaviour of the hermit watch would lead the observer to conclude that he had discovered a new seashore creature which no human being had designed. But if it is designed it is a tool or instrument of its designer and creator; its only purposes are those of its designer, owner and user. This rules out the designed entity from being an animate creature, which must have purposes of its own and not those of someone else.

3 Playing with words?
Surely, it may be objected, this is just playing with words. How can what we can say about something determine what is really the case? But we can only discuss the relevant concepts in words, because words are the only medium we have for using and expressing concepts. It follows that it is only by exploring the ordinary contexts where concepts are used and originated that we can discover their logical character. Of course we can extend the use of terms if we wish. We can stipulate that the word ‘calculate’ applies to slide rules and laptops as well as to human beings, but we do so at the risk of confusing ourselves into thinking that the word has the same logical character in both uses, which it does not.

It is all too easy to succumb to the error of reifying metaphors about the mind. For example, the philosopher Sir Anthony Kenny gave the example of the psychologist R.L. Gregory, who carefully warns against the error and yet immediately makes the very mistake he warned against. (Kenny, 1971) The way to avoid the error is never to apply psychological predicates to anything other than human beings and animals. Thus if you say that the human brain (or even part of the brain) literally perceives, for example, then, as Kenny points out, you are committed to the existence of a homunculus in the brain who does the perceiving. This sets up a vicious regress, with each homunculus requiring another inside it to do its perceiving.

The problem can be avoided by studiously avoiding such metaphors and refusing to assert that brains think, that the pre-frontal cortex sees or that computers calculate. The list of such reified metaphors is huge, and they can all be misleading. In order to avoid nonsense and confusion, different terms should be used to describe the thoughts and actions of human beings and the performances of computers. For example, people can rightly be said to calculate, but computers are best described as ‘working’. This reduces the temptation to think that computers are doing the same thing as human beings.

Attention to these conceptual distinctions is not ‘playing with words’ or mere stipulation as to the meaning of words; it is rather a matter of clarifying distinctions between concepts with a view to avoiding confusion. Wittgenstein considered the phrase “I’ll teach you differences” as a motto for his Philosophical Investigations. (Rees, 1984, p. 157) He also said that the role of the philosopher was to turn disguised nonsense into patent nonsense. (Wittgenstein, 1968 (first published 1953), p. 133e)

[Image: Kurzweil being even more optimistic!]

4 Singularity Nonsense.
Let us now examine in detail Ray Kurzweil’s suggestion that by the year 2045 computers will have become more intelligent than human beings. But intelligence is an attribute of human beings and some animals. Very intelligent human beings can do quantum mechanics; less intelligent people can do arithmetic; monkeys can make simple tools; and cats and dogs can discover how to open fridge doors and eat what they find inside. It is supposed that human beings are really super-complex computers and that animals are less sophisticated computers. If this is the case then it should be possible to build more and more intelligent computers until their intelligence exceeds that of the smartest human being; it is assumed that there are no limits to intelligence. Animal intelligence increases by the slow process of evolution and natural selection, but artificial intelligence embedded in machines is not limited in the same way. As we have seen, Ray Kurzweil has predicted that by the year 2045 artificial intelligence (AI) will exceed the natural intelligence of human beings. There are signs, it is claimed, that this is possible and indeed likely: computers have beaten grand masters at chess (as we have seen), they can translate one natural language into another, and they have even solved mathematical problems which had eluded human beings – the four colour problem, for example. The increase in the intelligence of computers, Ray Kurzweil argues, has been exponential, and on the parallel of other technologies its exponential growth is likely to continue until it shortly outstrips the intelligence of human beings. This, in the opinion of some people (notably Bill Gates and Stephen Hawking), creates dangers, as the super-intelligent computers may come to displace human beings and do us harm.

In analysing the possibility of artificial intelligence we need to differentiate between two types of intelligence. As we have argued above, the concept of machine intelligence - the intelligence of computers, pocket calculators and slide rules - is a metaphor. Even if we stipulate that what computers do may be called intelligence, it is not of the same kind as the intelligence of human beings and animals. The original sense of intelligence, on which the other sense is parasitic, is based on the intelligence of human beings and animals. It is obviously true that Stephen Hawking is smarter than the author, and that the author is cleverer than a dog. Dogs are brighter than sheep, and sheep cleverer than ants. But are amoebae stupider than ants? And was Einstein cleverer than Stephen Hawking? At these extremes our concepts of smart, clever, stupid and bright break down. We have no clear criteria for their application, and consequently we are at a loss in deciding whether ants are smarter than amoebae. We can measure the intelligence of human beings through intelligence tests which provide criteria for awarding IQs. But we cannot do this with animals, which cannot speak, and so it makes no sense to attempt to attribute IQs to non-verbal animals. And it scarcely makes sense to rank the intelligence of Stephen Hawking and Einstein by estimating their IQs. In any case, it is hard to see on what grounds one could decide whether the problems solved by Einstein were more difficult than those solved by Stephen Hawking. It is almost as if intelligence were like the electro-magnetic spectrum, of which only a tiny fraction is visible light: there may be only a small fraction of the spectrum of intelligence where metrics like an IQ test have any application or meaning. It just does not make sense to apply an IQ test to a shrimp, or to determine the pre-eminence of Stephen Hawking or Einstein by applying an IQ test, or indeed any kind of test.

In the case of computers we are in completely different territory. Intelligence can only be ascribed to inanimate objects used as tools in a derivative fashion. A computer may of course display the intelligence of its designers and programmers, but it has no intelligence of its own. But surely, it might be objected, some computers are more capable than others, so that it makes sense to say that one computer is more intelligent than another. Indeed it does, but it risks confusing the intelligence of animals with that of machines. To avoid confusion it is preferable to refer to the power of a computer and its associated software. This is often gauged in terms of physical characteristics like the amount of Random Access Memory or, in the case of supercomputers, the rate at which calculations can be performed, measured in teraflops (trillions of floating-point operations per second).

Given the different kinds of intelligence applicable to human beings and computers, it makes no sense to say that computers will become more intelligent than human beings. In attempting to make this comparison one is not comparing like with like; it is like comparing the intensity of the blue of the sky with the average weight of elephants. As we saw before, it is all too easy to become confused if we reify metaphors - like the ‘intelligence’ of computers.

It follows that, unlike Stephen Hawking and Bill Gates, we should not be worried about the possibility of human beings being displaced by computers. Given that they are all designed and programmed by human beings and can have only the motives and intentions of their designers, we have no need to fear them. Still, we should worry about their misuse. People are often reckless, stupid and cruel in the uses to which they put their weapons, that important sub-class of tools. But such folly does not require the existence of computers. Consider the ‘Doomsday Machine’, a notional machine invented in the 1960s to explain the concept of nuclear deterrence. The notional machine was a massive H-bomb that would destroy the world if it exploded, and its explosion would be triggered by any attempt to disable it or by the detonation of another nuclear weapon anywhere on the globe. Of course no such machine was ever built, but it shows how tools recklessly designed and used could cause enormous harm. This is the only sort of threat posed by reckless or malevolent computer engineers and software programmers.

It is instructive to apply our analysis to the fictional malevolent computer HAL in Stanley Kubrick’s science fiction film ‘2001: A Space Odyssey’, which is often used to illustrate the future danger of super-intelligent computers. In reality, the immediate response of the spaceship crew would be to believe that their computer had been hacked - or that they were the victims of a reckless practical joke. Their next step would be to request the ground controllers to upload a patch to correct the error, or to attempt to do it themselves. It would make no sense to blame the computer when its actions were decided by its programmers, who might have included malevolent computer hackers. And even if the computer had been designed with a kind of ‘autonomy’, the responsibility for its actions would still lie with its designers and programmers (again possibly including hackers). For the ‘autonomy’ of a computer is only the structure of its software that allows it to follow an algorithm in a particular way without recourse to a human decision.
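
What such ‘autonomy’ amounts to can be shown in a few lines. The following sketch, in Python, with illustrative names and thresholds of my own invention, is a control loop that runs ‘without recourse to a human decision’; notice that every branch the machine can ever take was written down in advance by a programmer:

```python
# A minimal sketch of computer 'autonomy': a control loop that runs
# without a human in the loop. The names and thresholds are
# illustrative, not taken from any real system. The machine's
# 'autonomy' is exhausted by this text.

def life_support_controller(read_oxygen, open_valve, close_valve,
                            low=19.5, high=23.0):
    while True:
        o2 = read_oxygen()     # sensor input, supplied by hardware
        if o2 < low:
            open_valve()       # the programmer decided this, not the machine
        elif o2 > high:
            close_valve()      # likewise
        # No condition here was ever weighed or chosen by the computer:
        # it merely follows the algorithm its designers laid down.
```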

Science fiction, as in the case of ‘2001: A Space Odyssey’, can be particularly misleading, since it makes it difficult for us to see the natural and obvious reactions of a real spaceship crew when faced with a HAL. Because in the story the crew act as if the reified metaphor were true, we do not see the conceit for what it is.

5 Uploading Nonsense.
The possibility that human minds might be uploaded into computers, and a form of immortality thereby achieved, also has eminent supporters; it is perhaps no surprise that they include both Ray Kurzweil and Stephen Hawking. (Wu, 2015) Whole Brain Emulation (WBE) is the process by which, it is supposed, a brain can be copied or emulated in computer hardware or software. On the assumption that a mind is merely a computer, it seems possible that it could have a silicon rather than a meat substrate.

Our earlier analysis of ‘computer intelligence’ helps to show why it does not make sense to think that we will ever be able to upload ourselves into a computer and achieve a form of immortality. This is not because of some technical difficulty which may or may not be possible to resolve. There are indeed extraordinary technical difficulties in emulating the brain in hardware and software, but it is assumed that these problems can and will be overcome eventually. (Cattell & Parker, 2012) It is rather that it does not make sense to upload the mind into a computer, and no amount of technical change or development of super-powerful computers can make sense out of nonsense. The idea is based on the misconception that human beings are really biological computers and software constituted from flesh, blood, bones and brain - that they are ‘meat machines’. The process of uploading a human mind into a computer is considered to be identical, in principle, to the transfer of data and software from my old Dell desktop to my new HP laptop; only lack of computer power and technical skill, it is supposed, prevents the uploading of the human mind into a computer.

The possibility of uploading a mind is advanced by Randal Koene of the website Carboncopies.org (as its name suggests). For example, in an article in 2013, Koene and a colleague (Koene & Deca, 2013) argued that it is the role of neuroscience to make representations of what it seeks to explain. They quoted Richard Feynman: “What I cannot create, I do not understand.” (Koene & Deca, 2013, p. 1) And they argued that to understand the human mind, which they take to be identical to the human brain, it is necessary to create a representation of it in a computer. This led them to suggest that the ultimate aim of neuroscience is ‘Whole Brain Emulation’ (WBE), and the possibility of ‘uploading’ a mind into a computer. As we have seen, it has even been suggested that this will make possible a form of immortality.

But despite the eminence of its originator, the assumption that we need to make a representation of something before we can understand it is mistaken. We cannot take literally the statement that we cannot understand what we cannot create: there are all sorts of things which we can understand without being able to create them. And if Feynman (and Koene and Deca) mean that we can only understand something if scientists can make a model or a mathematical representation of it, then this too is false. For example, we can explain the behaviour of human beings without models, representations or theories. The question “Why did you go to London yesterday?” is satisfactorily answered by the statement “I went to see the Turner Exhibition at Tate Britain.” No further information would add anything to the explanation, which is complete in itself.

Maybe an explanation is sought in terms of neuroscience, with a description of brain activity. But it is hard to see why such a description of a chain of events in the brain would be of interest or relevance to the reason for my trip to London. And my trip to London and my reason for going are just a single example of all the activities, plans, thoughts, intentions, desires, memories, pains, knowledge and passions that go to make up the mind, character and behaviour of a human being. (Bennett & Hacker, 2003, pp. 428-429)

It is worth reviewing in detail why the understanding of the brain in terms of neural activity is insufficient to ‘explain’ or understand all the characteristic states and activities of a human being. Understanding, Koene and Deca explain, is to be understood as follows:

“Brain emulation strives to achieve a functional implementation by which it is possible to predict an active brain state and behavior at a time t + dt (with acceptable error) if we know the state at a slightly earlier time t.” (Koene & Deca, 2013, p. 2)
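
In computational terms the quoted definition amounts to a discrete-time state-update loop. The following is a minimal sketch, with an invented toy update rule, meant only to show the form of such an emulation, not how WBE would actually be implemented:

```python
# A sketch of emulation as Koene and Deca define it: given the state at
# time t, compute the state at t + dt, and iterate. Here the 'brain
# state' is just a vector of numbers and the update rule a crude leaky
# decay - illustrative stand-ins, not a model of any real brain.

def emulate(state, step, dt, n_steps):
    """Advance a state vector through n_steps updates of size dt."""
    for _ in range(n_steps):
        state = step(state, dt)  # predict the state at t + dt from the state at t
    return state

def leaky_step(potentials, dt, leak=0.1):
    # Each unit's value decays towards zero: v <- v - leak * v * dt.
    return [v - leak * v * dt for v in potentials]

final = emulate([1.0, 0.5, -0.2], leaky_step, dt=0.01, n_steps=1000)
```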

The explanations offered are causal: scientists can discover that a certain brain state (or partial brain state) at one time will lead invariably to another brain state at a later time. In the case of human brains, the firing of neurons and the electro-chemical relations between different parts of the brain are the processes which are investigated with a view to making predictions about successive brain states. In turn, it is supposed, these predictions about brain states will yield predictions about mental states - the knowledge, desires, wishes, pains, fears and intentions of the person whose brain is being emulated. But consider: we can give a description of a cricket match in purely scientific, material-object terms. This will include the movement, speed and weight of cricket balls and the gross movements of the players, but what it cannot do is indicate who wins the game. This is because the description of the physical events and processes does not and cannot include the rules of cricket. However much information we have about the movement of the ball and the movements of the players, we cannot tell who has scored or who has won, or many of the other facets of the game of interest to cricket fans. Nothing in the passage of the ball over a white line can tell us that it is the boundary which wins the game, unless we know the rules of the game, and these are not to be found in the bare movements of the ball and the players.

Or take another example: a picture is more than splashes of paint on a canvas, and writing more than marks on a piece of paper. No amount of extra information about the quality of the paint or the chemical composition of the ink can tell us anything about the subject and significance of the picture or the meaning of the writing. It follows that brain processes and states cannot be identical with the mind of a human being - the term ‘mind’ being shorthand for the psychological states, abilities and attributes of a human being. A mind is to a brain as a novel is to marks on a page.

Whole Brain Emulation (WBE) assumes a form of reductionism - the idea that, for example, biology can be explained in terms of chemistry and chemistry in terms of physics. But whatever the success of this programme in the natural sciences, it can be shown not to operate between the electro-chemical processes of the brain and the psychological attributes of the person whose brain it is. Indeed it leads to a welter of puzzles and difficulties which have no obvious solution. For a start, two people can have the same thought - “Bristol is more than 100 miles from London” - but there is no reason to suppose that their brain processes will be the same. This is because the identity criteria for thoughts and for brain processes are different. (Bennett & Hacker, 2003, pp. 360-361) Thus while thoughts (and other psychological attributes) can be ascribed to a human being, they cannot sensibly be ascribed to a human body part.

Another difficulty, pointed out by Peter Geach, is that thought does not share the same time continuum as physical processes. Thus I can say that I thought of going to London yesterday afternoon. But although it makes sense to ask when I finished the thought that my garden is looking good, it makes no sense to ask when I was halfway through it. As Geach points out, the continuous past of ‘think’ has no such use. (Geach, Mental Acts, 1957, pp. 104-105) (Geach, God and the Soul, 1969, p. 36) And, of course, it is always possible to say that a physiological process takes place at a particular time and when the process is half completed. This means that no physiological or electro-chemical process can be, or can emulate, thought and judgement.

Yet another difficulty is that uploading a person into a computer leads to a number of problems about personal identity. [3] For example, once the uploading has been achieved, would the person in question be (a) in the computer, (b) the person from whom the ‘mind’ had been uploaded, or (c) both? Whichever answer is given, puzzles result.

In the first case (a), it seems nonsensical to say that the person ‘left behind’ would cease to be the person ‘uploaded’. Who would she be? Suppose the person uploaded was found to have committed a crime: would it make sense to prosecute the person left behind, leaving the person in the computer to get off scot-free?

In the second case (b), it is the entity left behind which is the person, but then it is unclear who or what is now lodged in the computer, and who or what could be held responsible for any crime committed prior to the upload. The position is the same as with identical twins: one could be said to be a copy of the other, yet they are two people and not one. (Hay, 2014)

In the third case (c), would it make sense to prosecute both the person left behind and the person uploaded into the computer? Would both be held completely responsible, or would responsibility be shared?

Whichever solution is preferred, responsibility for actions - an essential characteristic of human beings and human life - is seriously compromised. But if this is the case, then WBE appears to lead to confusion and disappointment rather than to a scientific triumph.

A further, more general difficulty is that by limiting explanation to predictive scientific explanations, as Koene and Deca propose, we would actually reduce our understanding of human beings. We would be looking at human beings through frosted glass which prevents us from seeing their most important characteristics. Thus a description of the firing of neurons and the other processes of the brain, or its emulation in a computer, cannot tell us much about a human being and her thoughts, fears, plans, intentions and hopes. These are not the sorts of thing that any amount of information about the brain, or its complete emulation, can tell us.

If psychological attributes cannot be ascribed to brain states, then a fortiori they cannot be ascribed to emulations of brain states. And they can only be ascribed in a derivative sense to artefacts such as computers. Thus the bytes on a computer’s hard drive are not about anything, although the article that can be printed out from the computer file may be. In the same way, the marks on a page only have meaning if there is a human being who understands that they are writing and is able to read the language in which they are composed.

It follows that the study of the states and electro-chemical processes of the brain, which may allow scientists to predict a brain state and behaviour at one time given the occurrence of a brain state at an earlier time, cannot tell us about the activities of the human being concerned. Similarly, any WBE will be unable to emulate the psychological attributes (in other words, the mind) of the human being whose brain is being mimicked.

These misconceptions alone should make us sceptical of the possibility that one day soon (or even in principle) it will be possible to upload a human mind (which is usually assumed to be identical to the brain) into a computer as software and ‘data’. But remember that we explained how computers are just the latest of a long series of increasingly complex tools which human beings have designed, constructed and used. Thus computer data storage can be compared to a filing cabinet or library where human beings store the papers or books on which information is recorded. And computer software is just the means by which human beings use the machine to sort, analyse, manage, review and manipulate the information contained in it. This is what people do with the contents of filing cabinets and libraries, and the computer software aids this process in the same way that pencil and paper, a slide rule or an abacus facilitates calculation.
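
The filing-cabinet comparison can be made quite literal. Here is a minimal sketch, with invented labels, of what computer ‘storage’ and ‘retrieval’ actually are - operations a human user performs with a tool:

```python
# A dictionary standing in for the filing cabinet: storage and
# retrieval are literal operations performed by a user on a tool.

cabinet = {}

def file_away(label, document):
    cabinet[label] = document        # the user stores a document under a label

def retrieve(label):
    return cabinet.get(label)        # the user fetches it back by that label

file_away("minutes-2015-03", "Minutes of the March meeting ...")
print(retrieve("minutes-2015-03"))   # it is the user, not the cabinet, who searches
```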

But this is completely different from the function of the human brain. There is not, and cannot be, any equivalent in the brain to the use of a computer or a filing cabinet. There is no parallel in our mental life to the way we search a filing cabinet, press the ‘Find’ button in Word or Excel, or use Google to retrieve a document or find a website. We may rack our brains, but we do not search them as we do a filing cabinet. If we did, we would have to postulate another person, a ‘homunculus’, to do the searching - and that merely recreates the problem which the computer model of the person unwittingly requires. (Kenny, 1971) In the most obvious sense the brain contains no information and no equivalent of computer storage. We have once again been misled by the metaphor that our brains ‘store’ information and that we can ‘search our databanks’ to ‘retrieve’ it. In one sense we do not and cannot store information in our brains, although there may be neurological correlates of the things that we remember. But such correlates are not information, and we cannot and do not search our brains for them. It follows that it makes no sense to speak of uploading the ‘data’ and ‘software’ which supposedly constitute a human brain into a computer. This is a task which can never be achieved, however powerful computers become. Computers are the tools of human beings, and it makes no sense to say that we can be the tools of ourselves. Tools require human beings to design, build and use them.

Conclusion.
In this paper we have tried to show that much of the speculative scientific thinking about brains, minds, intelligence and computers is confused. This is because many scientific writers and others have been misled by the common human tendency to reify metaphors. Sometimes the traps of reification are easy to avoid: on being told that someone is a tower of strength, no one, except as a joke, will claim that this is impossible because he is only five foot tall. It appears harmless to claim that a computer ‘calculates’ or ‘solves problems’ and to extend concepts applicable to human beings to machines and tools. The danger is that we forget that we are using the terms metaphorically. This reification of a metaphor leads us to think that slide rules, abacuses and computers calculate in the same way as human beings. But this way lies confusion, nonsense and impossible research programmes. And the reification of metaphors leads naturally to the identification of the brain with the human being, which in turn produces additional misunderstandings and confusions.

Worse than confusion, these ideas can lead to folly and even to what amounts to superstition. The philosopher Peter Geach thought it sinister that people could persuade themselves that computers could be made to think and claimed that they were like primitives who believe that inanimate statues can be inhabited by gods.  (Geach, God and the Soul, 1969, p. 41) He may have been on to something.

REFERENCES
Bennett, M. R., & Hacker, P. M. (2003). Philosophical Foundations of Neuroscience. Blackwell.

Bennett, M. R., & Hacker, P. M. (2008). History of Cognitive Neuroscience. Wiley-Blackwell.

Cattell, R., & Parker, A. (2012). Challenges for Brain Emulation: Why is Building a Brain so Difficult? Retrieved February 9, 2015, from http://synapticlink.org/Brain%20Emulation%20Challenges.pdf

Cellan-Jones, R. (2014, Dec 2). Stephen Hawking warns artificial intelligence could end mankind. Retrieved Dec 22, 2014, from BBC technology News: http://www.bbc.co.uk/news/technology-30290540

Geach, P. (1957). Mental Acts. London: Routledge and Kegan Paul.

Geach, P. (1969). God and the Soul. London: Routledge & Kegan Paul.

Hacker, P. M. (1986). Insight and Illusion Themes in the Philosophy of Wittgenstein. Oxford University Press.

Hacker, P. M. (2007). Human Nature: The Categorical Framework. Blackwell.

Hacker, P. M. (2013). The Intellectual Powers: A Study of Human Nature. Wiley Blackwell.

Hay, M. (2014, April 24). Mind uploading won’t lead to immortality. h+ Magazine. Retrieved April 13, 2015, from http://hplusmagazine.com/2014/04/24/mind-uploading-wont-lead-to-immortality/

Kenny, A. (1971). The Homunculus Fallacy. In M. Grene (Ed.), Interpretations of Life and Mind. London: Routledge.

Preston, J. (2012, June). What are computers (if they’re not thinking things)? In S. B. Cooper, A. Dawar, & B. Löwe (Eds.), How the World Computes: Turing Centenary Conference and 8th Conference on Computability in Europe, 609-615. doi:10.1007/978-3-642-30870-3_61

Koene, R., & Deca, D. (2013). Editorial: Whole Brain Emulation seeks to Implement a Mind and its General Intelligence through System Identification. Journal of Artificial General Intelligence, 4(3), 1-9. doi:10.2478/jagi-2013-0012

Rees, R. (Ed.). (1984). Recollections of Wittgenstein. Oxford University Press.

Statt, N. (2015, January 28). Bill Gates is worried about artificial intelligence too. CNET. Retrieved April 13, 2015, from http://www.cnet.com/uk/news/bill-gates-is-worried-about-artificial-intelligence-too/

Wittgenstein, L. (1968 (first published 1953)). Philosophical Investigations. (G. E. Anscombe, Trans.) Oxford: Basil Blackwell.

Wu, T. (2015, February 22). How to Live Forever. The New Yorker. Retrieved April 13, 2015, from http://www.newyorker.com/business/currency/live-forever

FOOTNOTES

  1. Robert C.B. Miller is a former Senior Research Fellow at the Institute of Economic Affairs. He is a graduate of Trinity College Dublin.
  2. Our everyday understanding of ourselves and others is sometimes derided as “folk psychology” but those who argue in this way are relying on “folk rationality” and “folk logic.” If they reject one they should surely reject the others too.
  3. The problem of the criteria for personal identity is complex, difficult, and after decades of discussion amongst philosophers unresolved.

Selwood on the Reformation.

Visitors to this site with long memories will recall the dispute we had here over the film of “A Man for All Seasons”, Robert Bolt’s play about Sir Thomas More. I was reminded of it by a recent provocative piece in The Catholic Herald (UK) by Dominic Selwood. Mr. Selwood asks what England would be like if Henry VIII had not broken with Rome, and the Reformation had consequently never happened. The article is entitled “What would Catholic England look like today?”

He begins his arresting piece by suggesting that Henry “became fixated on a male heir to secure his lineage (ironic, given that [both] his daughters rank among England’s best known rulers.)”

Mr. Selwood’s sentence would have reflected the reality more closely by mentioning not just Henry’s lineage (though that was undoubtedly a factor) but also the need he perceived for political stability. Queen Elizabeth did indeed turn out to be a very great queen. (She may indeed have inherited some of her mother’s qualities!) But she was one in a million. It was surely quite rational in Tudor times for Henry to assume that neither of his daughters would turn out to be one of the most remarkable statespersons the world has ever seen - and we should remember too that Elizabeth was able to achieve what she did only by not marrying.

The really embarrassing nonsense in Mr. Selwood’s exposition is his apparently serious attempt to justify the reign of the unhappy Queen Mary on the grounds that she is “among England’s best known rulers.” “Best known”, alas, yes - but for all the wrong reasons: a disastrous religious policy which did catastrophic damage to her own cause, her pathetic false pregnancies, and the loss of Calais - a world-class foreign policy cock-up. One feels that her reign more than justified her father’s fears. (Could it be that Henry was so desperate for a male heir because he intuited that his eldest daughter was not quite the full shilling?)

More interesting than Mr. Selwood’s attempt to belittle Henry’s dynastic considerations is his idea that everything in England would have gone swimmingly if only there had never been a Reformation. And he certainly paints an entrancing picture of what things might have been like. He claims that of all the countries in Europe England was the most Catholic, and would have remained so.

But would it? Mr. Selwood writes that “there is little point in looking to other countries as examples.” Well, why not? Because, I suspect, if we do so the weakness of his case becomes evident. Do not the European examples suggest that if there had been no Reformation in England, the Enlightenment would have taken a more aggressive, even violent, form, and that Mr. Selwood would have found himself faced not with the cosy old Church of England but with the rigours of French-style secularism?

(Had Mr. Selwood been writing for Irish readers he could have made the point that in Ireland the political effects of the Reformation were indeed dire, because they overlaid religious differences on a quasi-colonial situation which was potentially difficult enough…)

We have added The Catholic Herald to our list of British links, where Mr. Selwood’s intriguing piece and other interesting commentary may be found.

Thinking about Malala

By Richard Miller

For those who have just stepped off the ferry, Malala Yousafzai is the schoolgirl from north-west Pakistan who was shot by the Taliban. They wanted, of course, to prevent her, and other girls, from being educated. Luckily, thanks to British doctors, she survived, and she was jointly awarded the Nobel Peace Prize for her role as a spokeswoman for, and a symbol of, women’s education.

This book is “her” ghostwritten account of what happened, and of the background to the events in question. Given the huge international publicity that surrounded the whole episode, it was to be expected that her book should become a best-seller, and it has been, I gather, a hit among the book groups - although I was able to pick up my copy free in the Arklow recycling centre!

The story that “I Am Malala: The Girl Who Stood Up for Education and Was Shot by the Taliban” by Malala Yousafzai and Christina Lamb (Weidenfeld and Nicolson, €15.99) tells is important. Militant Islam is a troubling phenomenon. The Soviet invasion of Afghanistan gave rise to a resistance movement that was fanned by both the American and Pakistani intelligence agencies; this ultimately morphed into the Taliban - with results we all know. What is less generally realised is that the movement then spread into those parts of Pakistan where the militants soon came into conflict with people such as Malala’s father, an entrepreneurial educator.

There is much that is good and interesting about this book. Indeed the chapters which deal with Ziauddin Yousafzai’s (b. 1961) career give a fascinating glimpse of the difficulties of setting up a business in the third world. It is, of course, eloquent and persuasive about the importance of women’s education. The core of Malala’s message is that the very fact that the uneducated are uneducated means that they cannot be part of the broader cultural conversation that influences their lives: they have to be educated to be able to advance their own interests.

There is nothing especially new about this analysis, although the context in which Malala (or is that Christina Lamb, her ghostwriter?) places it is fresh, and genuinely interesting. It is not an analysis that has won uncritical acceptance by conservatives, largely because our view of what counts as education differs from that of the left. But this is no place to revisit these discussions.

Instead I want to focus on another issue that emerges from Malala’s book, one which flows out of her educational theme. Towards the end of the book, when Malala is struggling with the terrible injuries inflicted on her, she reports that her father asked her mother: “Tell me truthfully. What do you think - is it my fault?” Her mother had replied: “No…you didn’t send Malala out thieving or killing or to commit crimes. It was a noble cause.”

Just what are the responsibilities of parents, teachers, and others in cases such as this? The question posed by Malala’s father is more telling than the unhappy answer provided by her mother. Malala’s mother seems to be saying that we are entitled to risk the lives and welfare of children in any cause that we judge to be good. But is this a sound proposition? I think not. Malala’s father did not deliberately set up the circumstances which led to the attack on his daughter; to suggest so would be quite outrageous. Nevertheless it was his determination - which of course in the nature of things became hers - which led to the terrible injuries that were inflicted on her. In truth Malala’s life was endangered by what some would say was her father’s bloody-mindedness.

And what, too, of the media zoo which was created in her name? How proper was this? Granted, she was plainly in full agreement with the cause that her supporters (such as Gordon Brown) advocated. But she was still only a girl, plunged into a culture quite different from her own. Was her relatively conservative form of Islam consistent with the feminism that surrounded her?

To elucidate the matter let us put another case. In this fictional scenario a boy on his way to a madrassa is injured by some anti-terrorist operation; his supporters launch a sophisticated media operation to advance the cause of radical Islam. Instead of relying on the BBC they turn to Al Jazeera, and instead of being treated in Birmingham he is looked after in a field hospital organised by the Taliban or some other radical group, to which the world’s press is bidden.

In this case I can already hear the chorus of execration. In Washington the neo-conservative publicity machine would go into overdrive. Fox News would assure us that what we were witnessing was a cynical abuse of children, of a kind we could have expected from those who recruit suicide bombers. Likewise neo-conservative intellectuals would pen think pieces warning us against the newly sophisticated techniques that our enemies were now employing. Indeed references to Dr. Goebbels and Horst Wessel would probably not be altogether avoided.

What are we to make of it all? One of the most interesting parts of the book is the passage which implies that the hospital where Malala was treated was at least to some extent uneasy about the media frenzy that had been unleashed by her supporters (even though it too had its public relations team on the case!). But vague unease is not enough. Our close attention is demanded by the intersection of privacy, children’s rights, freedom of speech - and, in this case, the real importance of ensuring that girls are not left uneducated in the Islamic world.


One way (although there are no easy ways here) of unpacking the problem is to ask whether or not we would want our own children to be instrumentalised as Malala was by those who wanted to use her story for good, even noble, aims.

The answer is far from clear, and depends on the extent to which Malala gave a fully informed personal consent to what was done in her name. The uncritical defenders of what happened, though, face the difficulty that for much of the relevant time Malala was actually unconscious. Did the statements made in support of Malala become statements attributed to her without her consent? My own tentative thought is that those who spoke in Malala’s name went somewhat too far. However, before criticising them too sternly, we need to remember the circumstances in which they acted. In any event we are in Malala’s and her co-author’s debt, not just for this book and for the statement that it makes, but also for raising these difficult and important issues.