Machine Intelligence
Is The Brain Simply a Computer Made of Meat?
Tacet Blue 2004
The question itself raises many other questions, such as:

· If it is, can we simulate it?
· If we simulate it, will it be conscious?
· If it's conscious, does it have rights as a living being?
· Are its thoughts the same as ours?
· Do androids dream of electric sheep? (Philip K. Dick 1968)

There are two distinct camps of thought on this subject: "Weak AI" and "Strong AI". The debate itself has been going on far longer than modern computing devices have existed; from the story of Prometheus to Mary Shelley's Frankenstein, the subject has always been controversial. Some may ask why there is research in this area at all, as there is no shortage of people. Margaret A. Boden (1990a) suggests that continued research in the field of AI may not create a synthetic brain, but that this sort of research could provide valuable insights into how we think and aid medical study in cases of neurological damage. In effect, the goal of this sort of research may not be the actual breakthrough of creating a computer system that can think, but the offshoot technology that develops along the way.

So does the brain behave like a computer? To answer this we first have to be clear about our definition of a computer. There was a time when an abacus was called a computer; a person could even be employed as a computer. In the latter case a computer therefore already has a brain, and it's made of meat. But today, when we say computer, most people will think of the grey boxes that lurk under or on their desks, or maybe the mighty Cray computers. These types of computer consist of a single processor (perhaps dual, quad, or clustered), some memory to store instructions and data, and a backing store to hold information permanently. In simple terms, these computers process data sequentially, breaking a task down into tiny segments and then combining the results according to software to produce an output. This type of computer does not lend itself well to the model of a brain.

Briefly mentioned above is the "Strong AI" camp. These people believe that minds are just computer programs implemented in brains, and perhaps in other sorts of computers as well (John Searle 1999). Since, on this view, there is no difference between a program implemented by the brain and one implemented by a silicon computer, one day a sufficiently powerful computer will be seen to be not only intelligent but also conscious, or self-aware. For Strong AI it is not a question of whether the brain is a computer made of meat, but of when we will recreate it.

The "Weak AI" camp hold that even if a computer can display the actions or reasoning of an intelligent person, or perform a task that requires intelligence to complete, it still has no consciousness: the computer is, at best, only simulating thought or intelligence. One of the main arguments for Weak AI is the humanist view that machines are incapable of truly purposive action (Margaret A. Boden 1990b); that is to say, one only performs an action when one has a reason to do so, whereas the current computer model only performs actions in response to a command or a set of rules. Essentially, a computer does not consciously make a decision. The word 'conscious' is often raised by the Weak AI camp as a requisite for true AI. Can a computer program ever be self-aware? Can it really understand how it is calculating a task? And if it could be shown that a computer was thinking, would we accept it as conscious?
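To make that contrast concrete, here is a minimal sketch in Python of the purely reactive, rule-driven behaviour described above. The rule table and command names are invented for illustration; the point is only that the program acts solely in response to commands, never for reasons of its own.

    # A purely reactive, rule-driven program: it never originates an
    # action of its own, it only maps incoming commands to outputs.
    RULES = {
        "greet": lambda: "hello",
        "add": lambda a, b: a + b,
    }

    def respond(command, *args):
        # No goals, no initiative: if no command arrives, nothing happens.
        action = RULES.get(command)
        return action(*args) if action else None

    print(respond("greet"))       # hello
    print(respond("add", 2, 2))   # 4
    print(respond("dream"))       # None - no rule, no behaviour

However elaborate the rule table becomes, the program only ever waits to be asked.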
In Dark Star (John Carpenter 1974), Commander Doolittle asks bomb number 20, "But how do you know you exist?", and the bomb responds, "I think, therefore I am. I like this game…". If the brain is like a computer, then it should be possible to create a computer system that could understand questions like that. Today's reality is very different: computers are excellent at processing data in a reliable and robust way, and can model real-world systems very accurately… if… the system being modelled is fully understood. This is one of the issues that sides me with the Weak AI camp, for if the puzzle of the brain were simple enough for us to understand, then we would be inherently stupid!

The question of 'soul'

But what if, in the far distant future, computers design more powerful computers, until several generations later a computer can build a model of the brain? Let us say that this brain is identical to the real thing in capacity and speed. Would it be conscious? Would it work? Suppose, hypothetically, I have solved the engineering problem of the brain, but when I power it up in my lab, it does nothing. Apart from the physical differences between the organic brain and my silicon hybrid, there is this 'magical' property of consciousness. Cartesian dualists divide the universe into two distinct kinds of entity: material entities, all that we can see, touch, hear or measure; and the immaterial entity of mind, which cannot be measured or seen. As a software engineer, I don't like things I can't qualify or quantify, but in all cultures throughout the world there is some equivalent of the 'soul' as an immaterial entity. Science can neither prove nor disprove its existence, and there are many reported cases of unexplained phenomena. Patients who have died on the operating table and then been brought back by surgeons have reported 'out of body' experiences, in which they were able to recall the surgeon's conversation and actions even though no brain activity could be measured. Likewise, a body in a level 3 coma, or what a doctor would call "a dead person", has, from a physical, engineering point of view, absolutely nothing wrong with it and no damage to the brain. It is possible, however, for patients to recover from this state, yet surgeons could not show you, with a CAT scan, radioactive dye or any other method, what has changed to make them become conscious. This lends weight to the Weak AI argument: even if you build a complete simulation of a brain, it may never wake up, because it has no 'mind'.

Is there a unique entity called the 'mind'? It may just be, though, that 'mind' is inevitable in any hugely complex system. In Neuromancer (William Gibson 1984) there is a vast network called 'cyberspace'; in this space the lead character encounters phenomena, known as 'angels', which he believes to be alive and conscious, but not human. These angels, or ghosts, had been spontaneously created in the vastness of cyberspace. Are we just the 'ghost' in the machine ourselves? This is still science fiction, but there was a time when we thought of the creation of life as something very special and rare, with exacting requirements. Now it seems that life will thrive almost anywhere, no matter how hostile the environment. Huge colonies of shellfish and molluscs have recently been found clinging to undersea thermal vents, where temperatures exceed 100 Celsius and the water is full of poisonous sulphur; most scientists would have told you that life was not possible under those conditions.
It would seem that, given a few basic elements, life is not only possible but inevitable; maybe the same can be said for consciousness.

Can a computer feel happy?

Ignoring the problem of the 'mind' being a separate entity, let's say I've built the first conscious computer and it was 'born' yesterday. What of its thoughts: is it thinking like me? Of course not. Every thought I have is from my own perspective, or reference point, and it is this unique reference point that makes us who we are. It may be set as we grow up and change over the years, but we all make assumptions based on experience. My newly created computer does not have any of these references; it only knows what it has been told, in absolute terms. It couldn't associate with "smelly" or "pain". It would have trouble picking a favourite colour, as they are all equal to it. The idea that one is better than another can only be established if some kind of reference is in place. To make a computer have an opinion, it must learn to experience things and decide for itself what is "good" and what is "bad".

Even if my computer were conscious, its sense of reality would be completely alien to us: it would 'feel' electricity; it might have a sensation of its processor working hard or at rest. What would connecting to a large network 'feel' like? It might tell you it's not in the mood to process the sales data, as it could really do with having its "D:" drive defragmented! These are not thoughts that any human would have, so a computer simulation would not have human thought; but it would be a phenomenon in its own right. There is no complete definition of artificial intelligence, but most definitions make some reference to human ability, and just because a computer did not 'think' like a human would not mean that it did not 'think' at all. My computer could 'feel' happy, but not in the way any human would experience it, as a large part of feeling 'happy' is physiological. The lack of any physical reward or motivation in a computer system will always make its 'thoughts' alien to our own.

Mind and Body

This lack of a body in current AI research has led Kosara.net to argue that the research is heading in the wrong direction. Robert Kosara (2004) points out that those pursuing AI, while denying the existence of an immaterial soul independent of the body, in effect take only the 'soul' into account and ignore the effects of the body. This matters because almost every single one of our actions is driven by the need to satisfy physiological urges. To build an artificial mind without a body, or even the concept of a body, would not create a mind that bore any resemblance to human thought. This type of mind would never understand humans, and we would never understand it. The need to sleep, the need to eat, the fear of injury: these form part of our daily lives, and for true AI the artificial mind must understand them.

Another mistake in current AI research is the desire to produce expert systems. These are currently the flagship of Strong AI, as a practical application of theory. To be of any use, an artificial mind must be able to learn. The world changes all the time, and one of the most amazing features of any organic brain (animal brains included) is its ability to adapt to new situations, to learn new skills and put them to use. Rather than create an artificial expert system, would it not be better to create an artificial child that had the ability to learn by itself and adapt to the real world?
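Picking up the earlier 'favourite colour' example, here is a minimal sketch in Python of how an opinion might emerge from experience rather than from fixed rules. This is a toy, not a serious learner; the names and feedback values are invented for illustration.

    # Preferences accumulate from experience instead of being coded in.
    from collections import defaultdict

    scores = defaultdict(float)   # the system starts with no opinions at all

    def experience(thing, feedback):
        # feedback > 0 felt "good", feedback < 0 felt "bad"
        scores[thing] += feedback

    def favourite():
        # with no experience, every option is equal: nothing to prefer
        return max(scores, key=scores.get) if scores else None

    print(favourite())            # None - no reference points yet
    experience("blue", +1.0)
    experience("red", -0.5)
    experience("blue", +0.5)
    print(favourite())            # blue - an 'opinion' born of experience

The 'opinion' here is purely a by-product of accumulated experience: before any feedback arrives, every option really is equal, which is exactly the situation the newly 'born' computer would find itself in.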
I haven't yet seen a program that could simulate the mind of a 5-year-old child, let alone the mind of an expert.

Expert Systems

Current expert systems, it would seem, are misleadingly named. Like all current computer systems (and unlike the brain), expert systems are based on a set of rules. The rules are generated by interviewing experts in the field of knowledge one is trying to emulate. But it has been discovered that the experts themselves do not follow rules as such; they develop an intuitive approach to the problem, based on experience and known outcomes. Following the rules will only give rise to a competence similar to that of an advanced beginner. For example, a flight instructor of many years' experience was training cadets to scan their instruments in the correct sequence. The students performed in a flight simulator that, using a laser, could measure where their eyes were looking at any time. They were asked to fly the plane as normal and their performance was monitored. The students scored well, but when the expert was asked to take the same test, it was discovered that he was not obeying his own system for scanning the instruments; in fact, there seemed to be no system or recognisable pattern at all. The students, if they followed the expert's system blindly, would never reach the level of an expert. Hubert and Stuart Dreyfus (1991) mention that airplane pilots report that as novices they felt they were flying their planes, but as experts they simply experience flying itself. The intuitive knowledge that an expert has is learnt through experience; the knowledge of what previously worked, and what made things worse, is instantly to hand. If a computer system is to become expert, it too must develop intuition. This can only come through learning: a computer given a fixed, unbending set of rules would only ever achieve the competence of the advanced beginner.

Are we computers?

This ability to learn, rather than being created with a fixed set of rules, is fundamental to the human brain. Where strict rules are concerned, a computer can outperform a man: take mathematics, where there is no subjectivity involved in the answer to 2 plus 2. But does our mind calculate the answer in the same way as a computer? It would seem not. To teach a computer to add or subtract, you need only give it the set of rules once; it will then flawlessly apply them to every case. A human has to practise until the knowledge "sinks in" and becomes intuitive, which is very different from the way a computer performs. Actually, if our minds were like computers, drill and practice would be completely unnecessary; the fact that even brilliant students need to practice when learning subtraction suggests that the human brain does not operate like a computer (Hubert and Stuart Dreyfus 1991). If the brain were accurately modelled and it could learn, would it develop intuition? This is a possibility: if all the rules and processes were successfully modelled, then the behaviour would be comparable. It is possible that one day a believable model of the psychological functions of the brain will be achieved, and this model may be the breakthrough AI researchers are looking for. The more the parent theory is psychologically sound, the more likely that its artificial offspring will be genuinely illuminating (Margaret A. Boden 1990a).
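To illustrate the point about rules given once, here is a minimal Python sketch of column subtraction with borrowing, the very procedure schoolchildren drill for years. The rule is stated a single time, and the machine then applies it flawlessly to every case (assuming, for simplicity, whole numbers with a >= b >= 0).

    # Column subtraction with borrowing, stated once - no drill required.
    def column_subtract(a, b):
        top = str(a)
        bottom = str(b).rjust(len(top), "0")   # align the columns
        borrow, digits = 0, []
        for t, u in zip(reversed(top), reversed(bottom)):
            d = int(t) - int(u) - borrow
            borrow = 1 if d < 0 else 0         # borrow from the next column
            digits.append(str(d + 10 if d < 0 else d))
        return int("".join(reversed(digits)))

    print(column_subtract(102, 58))   # 44 - right first time, every time

Unlike the student, the program needs no rehearsal: its first attempt is as reliable as its thousandth, which is precisely what makes the comparison with human learning so strained.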
Reference List

Hubert and Stuart Dreyfus (1991) Minds and Machines: The AI Debate. In Tom Forester (Ed.), Computers in the Human Context (2nd ed., pp. 125-144). Cambridge, Massachusetts: The MIT Press.
John Searle (1999) Mind, Language and Society (2nd ed.). London: Weidenfeld and Nicolson. p. 47.
Margaret A. Boden (1990a) Artificial Intelligence and Natural Man (2nd ed.). London: The MIT Press. p. 403.
Margaret A. Boden (1990b) Artificial Intelligence and Natural Man (2nd ed.). London: The MIT Press. p. 420.
Philip K. Dick (1968) Do Androids Dream of Electric Sheep? Del Rey.
John Carpenter (1974) Dark Star. EPA International: Fremantle Media.
Robert Kosara (2004) www.kosara.net
William Gibson (1984) Neuromancer. Voyager.