Hints from Life to AI, edited by Ugur HALICI, METU, 1994 ©

 

 


From brain to mind:

what clues from the natural world can tell us


David Davenport

Computer Eng. & Information Sciences Dept.,

Bilkent University, Ankara 06533 - Turkey.

david@bilkent.edu.tr

 


The human brain is incredibly complex. Understanding how it functions, and how a conscious, self-aware being emerges from its mass of gray matter, is one of the last great challenges of our age. Evidence for its mechanism has been accumulating under the banner of cognitive science for many years. From such diverse disciplines as philosophy, medicine, psychology, computer science and, more recently, neurobiology, a wealth of new and exciting discoveries is providing ever more clues to the inner workings of this most mysterious of objects. Models based on these observations have already proved valuable in developing artificially intelligent solutions to practical engineering problems. Yet, despite these successes and despite the mass of evidence, our understanding of the relationship between mind and brain remains sketchy at best. Indeed, the biggest stumbling block is perhaps not the lack of clues, but the lack of a suitable framework within which to organise them. This paper reviews some of the available evidence, both to show what we do know and to illustrate the difficulties involved in knowing more.

 

1. Introduction

 

The human brain is the most complex structure in the known universe. Understanding how it functions, how it makes sense of the world around it and, above all, how a conscious self-aware being emerges from its mass of gray matter, is one of the last great challenges of our age. It is a problem which has exercised and confounded some of the greatest thinkers through history, from Plato to Descartes, to Russell, Wittgenstein and Quine. Nor should we forget the psychologists, the linguists, the computer engineers, the clinicians and the neuroscientists, who have also brought their very special skills to bear.

 

The brain is yielding its secrets, albeit reluctantly, as the scientific method delves ever deeper into its mysteries. We may still be a long way from establishing a link from brain to mind, but gradually the Dualist, Divine and Magical explanations of old are being banished in favour of a more materialist understanding. This knowledge may not come for free, however, for, if we succeed, we risk replacing that intangible human spirit with a mindless mechanism. So why, then, do we expend such effort? In part, of course, it is human nature to question and to explore, and there can be no greater intellectual challenge than that which understanding ourselves presents. But there are more practical considerations too, for an appreciation of the workings of our brains may provide valuable insights into the treatment of various physical and mental ailments. On a more commercial note, such knowledge can also enable us to build better, more sophisticated, machines. Artificial Intelligence (AI) research has already demonstrated the utility of copying and even improving upon nature's designs. A better understanding of the functioning of, and the relationship between, mind and brain can only lead to further improvements.

 

In fact, we know a great deal about the brain. Our intimate personal attachment to it means that we have first-hand knowledge of many of its strengths and weaknesses. We are aware of its prodigious memory, capable of remembering minute details and arranging them in appropriate ways, all without any apparent effort. We are also aware that sometimes we are unable to remember the most obvious or important of facts, however hard we try. Yet at other times, despite being aware that we do indeed know something, we fail to recall it when we want to, although we are able to remember it clearly at a later time or when given an appropriate hint. We are aware of our incredible ability to recognise people and things that we have seen before, however briefly. We are aware of our creative and linguistic abilities, and of our ability to learn new skills and to adapt to new conditions.

 

To this first-hand knowledge we can add that gained from the fields of linguistics, psychology, neurobiology, philosophy and computer science, where new and exciting discoveries continue to provide ever more clues to the inner workings of the mind. We now know that the brain is not the amorphous mass of gray matter it once appeared to be. It has a complex internal structure composed of cells called neurons. These neural cells have many inputs, called dendrites, but generally only a single output, called the axon. The axon of one cell connects to the dendrites of others via a junction known as a synapse (see Figure 1). Neuroscientists have built up an incredibly detailed picture of the biological and chemical mechanisms involved in the neuron, its interconnections and, above all, in the synaptic junction itself. Unfortunately, discussion of this work is beyond the scope of this paper; the interested reader is referred to the many standard works on this topic, e.g. [1,2].

 


Figure 1.  Building blocks of the brain: neurons, axons, dendrites and synapses

(from [6])

 

Clearly, then, we have a mountain of information about the brain, from the behavioural level down to the molecular level. Yet, for all the clues at our disposal, there is one thing we apparently still do not know. Incredible as it may seem, we really don't know how the brain/mind works! How is it possible to know so much and yet so little? Why can we not piece together a proper understanding, when we seem to have so much information available? Why is it so difficult? These are the questions which this paper attempts to answer. To do so, we will first look at what would constitute an understanding or explanation, and how we would actually go about achieving this. We then review some of the clues uncovered thus far, both to illustrate how difficult and error-prone the process of discovery can be, and to give some hint of what we currently know about the brain. We conclude by showing how the current models of mind apparently fail to provide a suitable framework into which to organise our knowledge.

 

2. Models and Explanations

 

Why do we still have only a vague understanding of the way the mind functions, despite an apparent mass of information? Why is it so difficult to piece together a complete picture of that with which we have so intimate a relationship? To see the problem, it is necessary to appreciate just what we mean when we say we understand a complex system. Having got an idea of the form our understanding will take, we can begin to see why the process by which it is acquired is so full of pitfalls.

 

Models are the key to understanding the functioning of complex systems. We build models as an aid to comprehension. Most often this involves constructing an abstract view of a system, one which omits much of the intricate detail. In the extreme, a simplified model would treat a system simply as a "black box", whose output is some function of its inputs and related internal states (which may themselves be a function of previous inputs). This straightforward mapping of inputs to outputs constitutes an essentially "behavioural" description of the system. In fact, we frequently retain some of the internal causal structure of the system in our simplified model. Such intermediate-level models can be viewed as a set of even simpler black box models, together with appropriate causal connections between them. Employing such abstract descriptions (models) proves advantageous because of our limited cognitive abilities. We find it very difficult to remember and use large numbers of unorganised facts, such as would comprise a fully detailed description of a complex system. Simplified models have other benefits, too, especially in terms of allowing faster predictions to be made and of reducing the amount of time required to learn about a system.
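
To make this vocabulary concrete, here is a minimal sketch in Python (the example is ours, purely illustrative): a black box is just a state-transition function plus an output function, and intermediate-level models are such boxes wired together along causal lines.

    # A black box: outputs are a function of inputs and internal state,
    # and the state may itself be a function of previous inputs.
    class BlackBox:
        def __init__(self, transition, output, state=None):
            self.transition = transition    # (state, inputs) -> next state
            self.output = output            # (state, inputs) -> outputs
            self.state = state

        def step(self, inputs):
            result = self.output(self.state, inputs)
            self.state = self.transition(self.state, inputs)
            return result

    # Example: a box reporting whether its running input total is "high".
    box = BlackBox(transition=lambda s, x: s + x,
                   output=lambda s, x: "high" if s + x > 10 else "low",
                   state=0)
    print(box.step(4), box.step(8))    # -> low high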

 

We can obtain more abstract models in two ways, either by collapsing parts of the system into black box models or by reducing the precision with which we describe the system. Conceptually, we could collapse any sub-section of the system to a black box, although it will usually be convenient to divide it up along causal lines. Reducing the number of components and interconnections in this way has obvious effects on the amount of computation needed when using the model and may well result in fewer internal states being required. Precision, on the other hand, relates to the amount of detail with which inputs, outputs and internal states are represented, such as present/absent, low/normal/high, a real value between 5 and 5000, etc. Reducing precision helps by lowering the number of input/output mappings to be considered when using the model to make predictions. This is particularly important if the model must be used "bidirectionally", that is, when computing outputs given inputs or computing inputs given outputs. Notice that precision is not the same as accuracy, which relates to the model's ability to make correct predictions. Hence, it is possible for an imprecise model to make accurate (right) predictions and a precise model inaccurate ones. The trick, of course, is to build an accurate model which has a level of precision appropriate to the purpose at hand. In practice, it is often convenient to retain a hierarchy of models, each with varying degrees of precision and internal structure. We can then select the most suitable model depending on the circumstances at the time, paying attention to the mapping between the various models and the actual system.
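
The independence of precision and accuracy can be illustrated with a toy example (invented values): an imprecise rule that is accurate, beside a precise rule that is not.

    # Precision: how finely inputs/outputs are represented.
    # Accuracy: whether predictions are right. The two are independent.
    def quantise(temp_c):                    # collapse a real-valued reading...
        if temp_c < 36.0: return "low"       # ...to three coarse levels
        if temp_c > 37.5: return "high"
        return "normal"

    def fever_imprecise(temp_c):             # coarse but accurate rule
        return quantise(temp_c) == "high"

    def fever_precise_but_wrong(temp_c):     # fine-grained, wrong threshold
        return round(temp_c, 2) > 36.20

    print(fever_imprecise(38.1))             # True  (correct)
    print(fever_precise_but_wrong(36.6))     # True  (a false alarm)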

 

Now, consider a situation in which we have a model at a particular level of description, but we wish to improve it. It may be that the model is not sufficiently detailed for our purposes, or perhaps, it is not completely accurate. Obviously, it would be relatively easy to start from the original system and derive another model, which is more accurate or detailed than the one we currently possess. But what if we don't understand or know any more about the actual system than the information contained in the present model? How can we possibly create a better model when we have no idea of what it should be like? This may sound like an extremely unlikely scenario, but it is not. It is precisely the sort of situation we face when we wish to improve a model which represents our current knowledge of some natural phenomenon. The scientific quest for understanding is exactly this process of creating ever better models of the natural world.

 

Improving a model can be done in three ways: by increasing precision, by taking into account previously neglected inputs, or by discovering more of the system's internal structure. As discussed before, our models are either black boxes or are composed of them. In this case, however, the input-output behaviour is usually determined as the result of observation rather than abstraction. In fact, it is very often attempts to confirm a model's accuracy through closer observations, perhaps in different contexts, that reveal its shortcomings in the first place. When this occurs it is necessary to postulate more appropriate state variables and/or structure. Unfortunately, this is a very difficult and error-prone task; hence, having once selected a 'new' model, it must be put to the test. Confirming a model usually involves a search for supporting evidence. It is generally recognised that, for a model to be accepted, it should not only account for all of the existing observations, but also predict as yet unobserved effects. The best confirmation of a model's accuracy thus comes from experiments which reveal the existence of these hitherto unseen phenomena. Actually, in practice, most theories are unable to account even for the known evidence. For this reason, a model is considered more acceptable if evidence of its components can be confirmed by multiple independent means; for example, if a certain I/O mapping necessitates postulating a causal pathway between two subsystems, and there is independent evidence of an actual physical link, severance of which produces the expected dysfunctional behaviour. Of course, none of this can ever prove the new model is correct, or even the best or most detailed model possible, but it does lend credence to it.

 

While it is impossible to prove a model correct, a single piece of evidence could prove it wrong or, at least, in need of improvement. Unfortunately, human nature is such that it often affords more weight to positive supporting evidence, with the result that potentially significant negative evidence tends to be overlooked. On occasion, this may actually be preferable, for, if we were to try to solve the entire puzzle at once, we would get totally confused. Thus, scientific understanding must progress in steps, each aiming to be more accurate and complete than its predecessor. When agreeing to overlook some evidence, however, we must be careful, lest we choose to ignore the wrong clues and end up constructing a misleading model, one that has little or no basis in reality. Overthrowing established but invalid theories of the world is remarkably difficult. This is partly due to the fact that investigations are performed within the scope of the prevailing model and hence tend to be biased towards locating corroborating evidence, and partly because of the natural reluctance to discard prevailing ideas and start all over again. Rather, when a relatively well established model is found wanting, it tends to be "patched" by appending "special cases" to cover the exceptions. Only when the number of special cases becomes overwhelming is the original model thrown into question and the search for a new one begun.

 

The following section reviews some of the clues which have been accumulated in the search for a model of mind. It is organised so as to try to build up a picture of the cognitive process, starting with basic questions of how memory functions, through architectural features and development, to the question of the relation of mind to the brain itself. In the course of this journey, we will see many illustrations of the sorts of mistakes described above, indicating just how difficult and error-prone the process of scientific modeling can be.

 

3. Some Clues

 

It is almost impossible to provide a complete review of the mountain of work which constitutes the study of the mind, now going back hundreds or even thousands of years. Indeed, such is the volume of material that it is impossible to do more than scratch the surface of work done even very recently. The following selection thus makes no claims to be comprehensive or even to be right, for, in the light of the preceding discussion, to do so would be folly.

 

Cognition, in essence, is the ability of an agent to detect and store information about itself, its environment, and its interactions with the environment, and to subsequently use this stored knowledge when deciding upon future actions. This view is predicated on the assumption that there is a certain regularity to the world and that knowledge of which actions have proved successful or unsuccessful in similar situations in the past, can thus help when selecting the most appropriate course of action. Memory, therefore, is central to the whole cognitive process and any attempt to explain human cognition must offer a vision of how the mind represents the world and how it comes to acquire this representation.

 

The neural basis of cognition was first uncovered by Cajal in 1891; yet, it was only in the 1940s and '50s that many of the details of its functioning really began to emerge. One of the most important questions which required answering concerned the number of neurons needed to store each 'concept'. At one extreme, the compact or punctate view suggests that one neuron per concept is sufficient, while at the other extreme the diffuse view holds that each concept is stored as a pattern of activity across all of the neurons in the brain. Arguments against the punctate model focus on the fact that a large number of neurons die each day. Accordingly, we should expect to lose at least some concepts (at random) every day, which we clearly do not. Other evidence arrayed against the compact view is the observation that a small stimulus gives rise to a large amount of neural activity and that, theoretically, there are simply not enough neurons to have one per concept anyway. However, perhaps the most significant factor arguing against compact models and for diffuse models was Lashley's now classic work with lesioned rats [3]. Despite having large portions of their cortex removed, the rats continued to show good maze-running performance. While the diffuse model has undoubted advantages in terms of fault-tolerance and generalisation, it also presents extremely serious theoretical flaws. The basic difficulties concern cross-talk, communication and the inability to capture structure. If each concept is represented as a pattern of activity across all neurons, then trying to consider two concepts at the same time will result in overlap and confusion [4].
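
The cross-talk objection can be demonstrated in a few lines (the patterns below are invented for illustration): superimposing two fully distributed patterns yields a blend that spuriously matches a third concept.

    cat  = [1, 0, 1, 1, 0, 0]                 # activity over ALL six units
    dog  = [0, 1, 1, 0, 1, 0]
    fish = [1, 1, 0, 1, 1, 0]                 # happens to overlap the cat+dog blend

    blend = [min(a + b, 1) for a, b in zip(cat, dog)]   # "cat" and "dog" at once

    def matches(pattern, concept):            # every unit of the concept active?
        return all(c <= p for p, c in zip(pattern, concept))

    print(matches(blend, cat), matches(blend, dog), matches(blend, fish))
    # -> True True True: "fish" is falsely detected, pure cross-talk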

 

In fact, more recent work has established that remarkably compact cerebellar structures do exist, so that there must be another explanation for Lashley's results [5]. Moreover, individual 'command' neurons have been found [6], as have neurons in male zebra finches which respond only to the song of that particular bird's father, not to those of other males of the same species or to any other pure tones [6]. Experiments conducted on patients undergoing brain surgery (during which the patient can remain conscious, since there are no pain sensors in the brain) show that stimulating individual neurons usually produces very specific sensations. From the foregoing arguments it should be apparent that the brain seems to use a very compact means of representation after all. In fact, the arguments advanced against the compact model can be easily overcome. For example, the problem due to neuron death is easily avoided if each 'concept' is represented by several neurons rather than just one, so that the chances of losing all of them are very small indeed.
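
The redundancy argument is simple arithmetic. Assuming, purely for illustration, some small independent chance that any given neuron dies over a period, the chance of losing all k carriers of a concept falls off exponentially with k:

    p_die = 1e-6                 # assumed chance one neuron dies (illustrative)
    for k in (1, 2, 5):          # neurons redundantly carrying one concept
        print(k, p_die ** k)     # chance of losing ALL of them
    # 1 -> 1e-06, 2 -> 1e-12, 5 -> 1e-30 (treating deaths as independent)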

 

While it seems certain that the brain uses compact storage, there seems to be no clear-cut evidence to show whether it uses a prototype or an instance model of concept storage; indeed, there is a suggestion that it employs both together. A pure prototype scheme would not store the individual instances of a concept but, rather, just some 'average' of them. Obviously, this results in considerably lower storage requirements but, alas, it throws away information about the variability of categories, information which people appear to retain. Another difficulty concerns concept acquisition. How, for example, is the prototype adjusted as new instances are observed, particularly in the early stages of concept formation? In fact, once a concept has been established, it seems easier to remember atypical instances than focal ones, which should match the prototype better.
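
The difference between the two schemes is easily stated in code (toy one-dimensional instances, invented values): the prototype keeps one number and discards variability, while instance storage retains it.

    instances = [2.0, 2.2, 1.9, 8.5]          # observed members; 8.5 is atypical

    prototype = sum(instances) / len(instances)
    print("prototype:", prototype)            # about 3.65: variability is gone

    # Instance storage keeps everything, so category variability survives:
    print("spread:", max(instances) - min(instances))    # 6.6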

 

Interestingly, there is good evidence that recognition and remembering are separate processes. In one experiment, subjects were shown a sequence of pictures and asked to identify each as being a member of a specified category or not. A subsequent test, in which the subjects were shown pictures and asked if they had been part of the original test sequence, showed very poor recall. The conclusion was that, although only a few tens of milliseconds were required to correctly match the picture with a category, several hundred milliseconds were needed to actually record (remember) an individual picture. Of course, there are lots of other factors affecting memory, in particular, repetition. The fascinating interplay of recognition and memory is amply illustrated by the common occurrence of noticing a car number plate. If you see a car having a personalised number plate, e.g. "The King", you would almost certainly remember it, whereas, if you saw a normal license number, you would probably remember only a little bit of it, if any of it. Yet, if you saw an Arabic number plate, although you would undoubtedly remember that you had seen such an unusual license number, you would probably be unable to recall any detail of it!

 

Cognitive psychology has uncovered many clues as to the nature of the recognition process. For example, in an effect known as "word superiority", a briefly presented single letter is recognised more accurately when it is alone or part of a word than when it is part of an unfamiliar letter string [7]. The key to this effect is a 'mask' made up of simple lines and shapes, which replaces the letter/word after a short period, say 60ms. The theory is that recognition requires a hierarchy in which low-level feature-analyzers pass their results on to letter-analyzers, which in turn pass the results of their analyses on to word- and, eventually, concept-analyzers. Replacing the target letter with the mask stops further low-level processing immediately, while the higher levels can continue slightly longer. Using a mask which does not redirect the low-level detectors obliterates the effect.
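
This masking account can be caricatured as follows (a schematic sketch, not a model from [7]): the mask cuts off low-level input at once, but higher levels with supporting context keep integrating a little longer.

    def recognised(mask_at, context):
        # "alone" or "word" contexts let the higher levels keep integrating
        # briefly after the mask; an unfamiliar string gives no such support.
        boost = 0.5 if context in ("alone", "word") else 0.0
        activation = 0.0
        for t in range(10):                   # coarse time steps
            activation += 1.0 if t < mask_at else boost
            if activation >= 6:               # threshold for a confident report
                return True
        return False

    print(recognised(4, "word"), recognised(4, "nonword"))   # -> True False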

 

 

Figure 2.  An example of Leeper's degraded pictures

 

Another indication of the interplay between various 'levels' can be seen in an experiment by Neely [8]. It demonstrates that a word is recognised more rapidly if it is preceded by a priming word that is related in meaning than if it is preceded by an unrelated word. Moreover, recognition can actually be hindered if a related but unexpected word is used as the prime. Again a mask is used, this time so that the word is 'seen' for only 10ms. The experiment suggests that even a very brief presentation of a word can activate the meaning of a related word. One final example of the interplay between top-down and bottom-up processing is demonstrated by the degraded figures used by Leeper [9], see Figure 2. Identifying the picture usually proves rather difficult; yet, if you are told what to expect, it suddenly becomes very obvious (for the solution see [9] in the reference section).
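
The standard reading of such results is 'spreading activation': the prime pre-activates semantically linked nodes, so a related target needs less bottom-up evidence to reach threshold. In caricature (network and latencies invented):

    links = {"doctor": {"nurse": 0.6, "hospital": 0.5},
             "bread":  {"butter": 0.7}}

    def latency_ms(prime, target, base=300, saving=150):
        # related primes shave time off recognition; unrelated ones do not
        return base - saving * links.get(prime, {}).get(target, 0.0)

    print(latency_ms("doctor", "nurse"))      # 210.0: primed
    print(latency_ms("bread",  "nurse"))      # 300.0: unrelated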

 

In a certain sense, these observations may be open to question, since they involve the interaction of many complex and, as yet, only vaguely understood features of the mind. There are, however, some very well known clues which may relate to the implementation level more directly. The first such hint is provided by illusions such as those of Figure 3.

Figure 3.  Classic illusions which may provide clues to brain functioning

 

You know there is no complete triangle or square, but the expectation remains. This clearly demonstrates the same sort of processes we saw above, but this time without the additional complication of language. Notice that this also hints at the same basic mechanisms being used at all stages of the cognitive process. Another hint is provided by pictures which 'flip' between two possible interpretations. The "Necker cube" and the "old man, young woman" are prime examples; the rabbit in Figure 4 is another (turn the page sideways to see a different animal!). What they seem to indicate is the existence of a very low-level winner-take-all mechanism.
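
Such a winner-take-all mechanism, together with adaptation of the winner, reproduces the flipping in caricature (parameters invented for illustration):

    a, b = 1.00, 0.95            # support for the two interpretations
    fatigue_a = fatigue_b = 0.0
    for step in range(6):
        winner = "A" if a - fatigue_a > b - fatigue_b else "B"
        print(step, winner)      # the percept currently "seen"
        if winner == "A":
            fatigue_a += 0.03    # the active interpretation tires...
        else:
            fatigue_b += 0.03    # ...until its rival takes over, and it flips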

 

Another hint, although more difficult to decipher, may be offered by the effect seen when a plain colour is replaced by white. In such situations, people generally observe a faint trace of the original shape, but in the complementary colour. Is this perhaps an overshoot from a resettling of the winner-take-all effect? Whatever may be the case, there is one more potentially important hint that is generally completely overlooked. As the title of Boole's classic book 'The Laws of Thought' quite clearly announces, a major clue is provided by logic itself. Logic is an abstraction of human reasoning, but one which concerns itself only with right reasoning. As Dennett observed [10], logical behaviour is the hallmark of an intentional agent; illogical agents simply will not survive. We do, on the whole, reason logically; however, we also display some characteristically illogical traits, such as 'denying the antecedent' and 'asserting the consequent'. What we need, then, is a mechanism which is essentially logical, but which exhibits the sorts of lapses of reasoning that we ourselves so often display. One possibility would be to remain within the deductive framework and somehow suppress the valid inferences; see [11]. A more radical option, and one which would appear to offer more hope, would be to switch from a deductive to an abductive framework [12]. In any case, since logic is so deeply ingrained in our being, one may expect these reasoning methods to be found at the neural level.

Figure 4.  Another clue?  Turn the page sideways...
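
Returning to the deductive/abductive contrast: given the rule 'if P then Q', deduction licenses only the valid directions, while abduction runs the rule backwards to the best explanation, which, read as deduction, is precisely 'asserting the consequent'. In outline:

    rule = ("rain", "wet_grass")              # if rain then wet grass

    def modus_ponens(fact):                   # valid deduction
        return rule[1] if fact == rule[0] else None

    def abduce(observation):                  # plausible inference, not valid
        return rule[0] if observation == rule[1] else None

    print(modus_ponens("rain"))               # wet_grass
    print(abduce("wet_grass"))                # rain: asserting the consequent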

 

At this point, we might ask whether recent developments in neurobiology might not shed more light on the subject. As we indicated in the introduction, there is now a wealth of minute detail regarding the functioning of the brain and of the synapses in particular. Unfortunately, it is far from clear how all this knowledge helps and how it fits into the overall picture. The difficulty arises mainly because we have not yet managed to find an appropriate mapping from the biological, let alone the molecular, level to the level of knowledge and thoughts. An analogy may make the problem clearer. Suppose you were given a piece of equipment constructed from transistors, together with a detailed description of the workings of the transistors themselves. Would this be sufficient for you to discover how the equipment works? Probably not. For one thing, understanding how each component works gives no hint as to what the machine does: it could be a radio, a television, a washing machine or a computer. Moreover, even knowing what behaviour the equipment is supposed to exhibit does not necessarily help us understand its inner workings, for it may be implemented in any number of different ways. You may not even be sure what the basic components are; for example, are they the soldered joints, the integrated circuits or the "transistors" inside them? Recent work in neuroscience has suggested that the basic building blocks of the brain may, in fact, be the synapses, not the neurons [21]. Anyway, even if you can correctly identify what counts as a basic component, knowing how it functions doesn't help, for, unless you know what information is being 'processed', it is impossible to explain the higher-level behaviour. Deciding what constitutes the information, and what its content is, is very difficult. Is the information encoded in, for example, the voltage, the current, the pulse width, the frequency, a digital code or the phase? And what is the information: is it sound, picture, synchronisation, teletext, control or what? The apparent mountain of molecular and chemical evidence, then, is relatively worthless until it can be placed in an appropriate higher-level context.

 

What is of much more import, however, is the knowledge of the structural characteristics of the brain which has been painstakingly pieced together. The most detailed evidence relates to the visual system and results partly from experiments on animals and partly from human patients suffering from a variety of brain tumors. Interpreting this evidence, however, is not always as straightforward as it might seem [13]. As an example, a condition known as prosopagnosia, in which the subject is unable to recognise faces, would seem to suggest the existence of a functional component dedicated to face recognition. In fact, recent studies have shown that patients' deficits are usually more extensive, so that such a conclusion is unwarranted.

 

Despite the pitfalls, we do have a reasonably good understanding of how information from the eyes is processed within the visual cortex. Surprisingly, signals for colour, dynamic form, motion, and forms with colour are all processed in parallel and along relatively independent paths [14]. Figure 5 shows the basic organisation. Input from the retina arrives at area V1 and is then relayed via area V2 to areas V3, V4 and V5. There are also direct connections from cells in layer 4B of V1 to V3 and V5. Lesions in each of these areas produce distinct pathologies. For example, patients with lesions only in V4 view the world in shades of gray, while those with lesions in V5 can see objects which are stationary, but not ones that are in motion. Some patients who suffer severe carbon monoxide poisoning lose most of V1, V2, V3, V4 and V5, and are left with only the very limited colour information which the surviving part of V1 still provides. Without any knowledge of form, they are forced to guess what an object is based solely on its colour and may, for example, misidentify all blue objects as 'ocean'. Area V1 is obviously critical for vision, and any damage may leave the patient either completely blind or with an inability to comprehend even simple shapes. If V1 is still intact, a patient may be able to reproduce a drawing but, because of other damage, be unable to comprehend what it is a picture of. Interestingly, in such cases, the patient may well be able to name an object by touch or smell, but be unable to identify it by sight.
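
This routing lends itself to a crude lesion 'simulation' (a sketch after Figure 5 and [14]; the attribute-to-pathway assignment is simplified for illustration):

    paths = {"colour":       ["V1", "V2", "V4"],
             "dynamic form": ["V1", "V2", "V3"],
             "motion":       ["V1", "V2", "V5"],
             "fast motion":  ["V1", "V5"]}     # direct layer-4B route

    def surviving_percepts(lesioned):
        return [p for p, route in paths.items()
                if not any(area in lesioned for area in route)]

    print(surviving_percepts({"V4"}))   # colour lost; form and motion remain
    print(surviving_percepts({"V1"}))   # [] : V1 is critical for everything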

 

Clearly, then, vision is a very complex process, only the first stages of which are explained above. Additional processes are needed to achieve real spatial understanding, for, when we view the world, we see only a very small part of it at any one instant. When reading, for example, we obviously do not perceive the entire page at once, but rather move our eyes from word to word. Dennett [15, p.361] recounts an experiment in which a subject reads text from a computer screen. By means of an eye-tracking system, the computer can determine on which word the eye will settle next and can change that word before the eye actually reaches it. To onlookers the screen is in a continuous state of change; to the subject, however, it appears completely normal. Similarly, when we look at someone's face, for example, we do not immediately comprehend it in its entirety. Rather, we 'scan' it, seemingly at random, picking out the location and form of its major components and somehow piecing them together until we achieve recognition.

 


Figure 5.  Organisation of the visual cortex (from [14])

 

Another area which has received significant attention is that of language. Language is a distinctly human phenomenon; thus, until recently, research has had to rely almost entirely on nature's experiments, in the form of lesions, to help unravel its mysteries. The development of non-invasive techniques has provided much-needed confirmation of earlier theories. Magnetic resonance imaging (MRI), for example, allows the exact location of lesions to be determined. This technique has been able to show that specific dysfunctions are always associated with the same specific regions of the brain. Positron emission tomography (PET), on the other hand, has enabled the mapping, in healthy individuals, of brain function to location. This enables us to see normal brain activity while a variety of tasks are performed (Figure 6). These dual sources of evidence have helped build up a reasonably good picture of the various subsystems which cooperate to give us our linguistic abilities.

 

In essence, language appears to involve three sets of structures [16]. The first deals with non-language interaction between the body and its environment, creating representations for form, colour, actions, etc. The second deals with the components of speech: phonemes, phoneme combinations and rules of syntax. On the one hand, it allows sentences to be generated and spoken or written, and, on the other, it performs the initial processing for speech understanding. The third set of structures mediates between the other two, either taking a concept and causing an appropriate word or sentence to be generated, or receiving words and evoking the corresponding concepts.

 


Figure 6.  PET scans showing brain activity during various tasks (from [6])

 

 

Lesions in each of these structures display specific pathologies. For example, patients may continue to experience colour normally but be unable to name colours correctly. Alternatively, they may produce phonemically distorted names, such as 'buh' for 'blue'. In other cases, patients may substitute a more general word for the missing one ('people' for 'woman') or use a semantically related word ('headman' for 'president'). Apart from these deficits, they speak in a perfectly normal manner. Lesions in the anterior and midtemporal cortices (which handle word selection and sentence generation) produce slightly different symptoms. Sometimes they will result in an inability to recall common nouns and proper nouns. Such a patient would usually be unable to name friends, relatives, celebrities or places. Shown a picture of Marilyn Monroe, although they could not name her, they would most definitely recognise her and be able to retrieve additional information, such as that she was in the movies, had an affair with the president, etc. While such patients speak normally, they tend to substitute very general words like 'thing', 'stuff', 'it', 'she' or 'they' for the missing nouns and pronouns. In contrast, patients with left frontal damage have far more difficulty recalling verbs and functors. Since these constitute the core of syntactic structure, it is not surprising that such patients also have trouble producing grammatically correct sentences.

 

These studies show that the brain has distinct regions for the various components of language generation and understanding. The functional separation of storage for phonemes, for proper and common nouns, and for verbs and syntax offers significant clues as to the origin of our language abilities. Still, it is very important to be cautious in our deductions. For example, evidence has also shown that in some bilingual individuals, words for different languages are found in distinct regions of the brain; however, we should presumably not conclude that the brain has evolved a specialised component to process each unique language! Another example concerns more general aspects of memory. Psychology has long distinguished between so-called short-term, or working, memory, which acts as temporary storage during the recognition process, and long-term memory, which retains known facts. The basic assumption is that we first store 'things' in short-term memory and only later, if they have not been forgotten, are they transferred to long-term memory for use on later occasions. Recent work in neuroscience seems to provide confirmation for this idea, suggesting that short-term memory may be located in the prefrontal lobes of the cerebral cortex [17]. We should be wary of jumping to such a conclusion, however, for both the original conception and the supporting evidence are based on an information-processing view of cognition, a view which may itself be suspect. It is difficult, for example, to reconcile this conception with connectionist-like neural storage representations.

 

Perhaps one of the most intriguing findings in recent times concerns brain development [18]. It was originally thought that the brain's wiring was genetically determined; however, new evidence shows that this is only part of the story. We are born with all the neurons we will ever have, approximately 100 billion of them, of which about 100 thousand die each day. Although this may sound like a lot, it is comparatively small: at that rate, one would have lost only about 2.5% of one's neural cells after 70 years. While the neurons themselves do not really change, their interconnections most certainly do. Axon and dendrite growth accounts for the large increase in brain weight following birth. Recent research has shown that, while genes determine the general region in which connections will be made, the final location depends on neural activity. Lack of such activity can seriously impede development. Thus, infants who spend most of their first year in cribs develop abnormally slowly: some cannot sit up after 21 months and fewer than 15% are able to walk after 3 years. In fact, axons and dendrites appear to be "plastic", growing and shrinking continuously in response to neural excitation throughout a person's life. These observations seem to be ignored in current computational models of brain function, but they may yet prove to have some deep significance in the overall scheme of things.
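
The 2.5% figure follows directly from the numbers just quoted:

    total   = 100e9              # neurons at birth (figure quoted above)
    per_day = 100e3              # neurons dying per day
    lost    = per_day * 365 * 70
    print(100 * lost / total)    # -> 2.555, i.e. about 2.5% after 70 years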

 

Finally, the thorny question of mind: do the sorts of brain features we have considered here provide a link, a stepping stone, to explaining emotions, feelings, consciousness, awareness, intention, self? Obviously, it is impossible to give a definite answer to this; yet, one by one, the barriers are falling. Not so long ago, most people would have thought that language, in particular, was something uniquely human, yet slowly we are unfolding a rational (non-magical) picture of its functioning. In fact, mind is not so much a thing as a process, or a set of abstract processes. Two brief observations will serve to illustrate the apparent physical basis of these processes. First is the well-known fact that certain drugs affect one's mind. They can cause or control depression, can stop pain and can stimulate thought. Second, there are experiments which investigate intention. Researchers have found neural activity just prior to, and correlated with, the initiation of some action. In one experiment [19], subjects were asked to watch a slide show and were given a button to push when they wished to view the next slide in the sequence. The button, however, did not really control the slide projector; rather, electrodes attached to the subject detected their 'intention' to change the slide and initiated the change before they had actually pressed the button. Subjects reported that, just as they decided to press the button, they saw the slide change, though they were unable to stop their finger from pressing the button anyway.

 

The interplay between consciousness, awareness and intention is still very much the realm of philosophy. Probably the clearest explanation to date has been offered by Dennett [15], who demolishes prevalent views of mind wherein everything, every thought, must come to some central stage for conscious consideration (what he calls the Cartesian Theater). According to Dennett, not only is there no central stage, but there is no sharp dividing line between conscious and unconscious thought.

 

 

4. Concluding Remarks

 

The preceding sections both reviewed some of our current ideas about the functioning of the brain and indicated why developing such ideas is a particularly risky task. It should be obvious that there is a lot of very interesting work going on and that there is certainly no lack of clues to help us in our quest for understanding. Indeed, the problem is not so much a lack of evidence as a lack of a suitable framework into which to place it all [20]. We seem to be in desperate need of a model, or hierarchy of models, which can provide us with an overview of the entire cognitive process. Such a model, if accurate, would act both to interpret the available evidence and to guide the search for a deeper understanding. Only with such a model can we expect progress in cognitive science to match that in biology, chemistry and physics.

 

Are there any models which might provide a suitable framework? To the author's knowledge, there are no really good candidates. Both of the major computational paradigms fall short of our requirements. The symbolic paradigm is undoubtedly able to model any aspect of cognition, but it fails to offer any real insights. In essence, you get out what you put in, mainly because symbolic models are simply unable to provide an implementation-level account of cognition. In contrast, the connectionist paradigm, being founded at just such a level, might be thought to offer more hope. Indeed, the artificial neural network is generally assumed to be a very good approximation to the real thing, and connectionist models have demonstrated apparent solutions to many problems. Unfortunately, this success may be illusory, for the sorts of networks used in these exemplar systems are often not as plausible as they may at first sight appear to be. For one thing, they fail to account for many of the known facts, such as synaptic outputs as well as inputs, neuromodulation and second messengers [21]. Artificial neural networks also frequently employ biologically implausible mechanisms, such as back-propagation. Moreover, they have been criticised on theoretical grounds. According to Fodor and Pylyshyn [22], they lack representational structure and are based on the discredited Associationist philosophy. While some of the more recent work in this field has addressed the former problem (for a review see [23]), the latter remains. Indeed, some connectionists [24] have even claimed that their paradigm can overcome the philosophical difficulties supposedly inherent in associationism, but this seems unlikely.

 

While all this may appear to leave us without any viable model, there is at least one further possibility which seemingly both explains the available evidence and is philosophically plausible. In essence, this model simply records "everything". Whenever it "sees something", however, it first attempts to find a match in its memory and records the results of this process rather than the "raw" inputs (for a fuller description of how this might work see [25]). The only problematic aspect of this model concerns its biological implementation, which is somewhat at odds with current thinking in neurobiology. Rather than storing knowledge in the interconnection weights (synaptic strengths), knowledge is considered to be inherent in the pattern of connections, which are formed either during development or dynamically as we see and remember things. The fact that this alternative model fails to coincide with present understanding in this field does not necessarily mean that the theory is wrong, of course. As we observed previously, evidence is sought and interpreted within the context of an existing model. Thus, it is just possible that the clues might be open to a different interpretation, one which actually supports this alternative model.
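
In outline, the recording scheme looks something like the following (an illustrative sketch only; see [25] for the actual proposal): each input is first matched against memory, and what is stored is the result of the match, not the raw input.

    memory = []                               # stored descriptions (feature sets)

    def see(features):
        # match first, then record the RESULT of matching, not the raw input
        best = max(memory, key=lambda m: len(m & features), default=frozenset())
        residue = features - best             # the genuinely new part
        memory.append(best | residue)
        return (best, residue)                # reference to match + novelty

    print(see(frozenset({"furry", "barks"})))
    print(see(frozenset({"furry", "barks", "collar"})))  # mostly a pointer back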

 

If we are to make progress in our quest for understanding, we must remain open to, and indeed specifically search out, alternative proposals. It is only through trial and error, through the building and testing of models that we can gain deeper insights. The natural world offers us an endless supply of clues to the inner secrets of the mind, but it is up to us to select and interpret them, and to organise them into a meaningful framework.

 

References

 

1.    Shepherd, G.M. (1988) Neurobiology, Oxford University Press.

2.    Churchland, P.S. (1986) Neurophilosophy: Toward a Unified Science of the Mind-Brain, MIT Press.

3.    Lashley, K.S. (1950) "In search of the engram", Symposia of the Society for Experimental Biology, No.4, Physiological Mechanisms in Animal Behavior, Academic Press, pp.454-483. Referenced in [4].

4.    Feldman, J.A. (1990) "Computational Constraints on Higher Neural Representations", in Schwartz E.L. (ed), Computational Neuroscience, MIT Press.

5.    Thompson, R.F. (1986) "The neurobiology of Learning and Memory", Science 233, pp.941-947. Referenced in [4].

6.    Fischbach, G.D. (1993) "Mind and Brain", in Mind and Brain: Readings from Scientific American, W.H. Freeman, pp.2-14.

7.    Glass, A.L. & Holyoak, K.J. (1986) Cognition 2nd edition, Random House, p.27.

8.    Neely, J.H. (1977) "Semantic priming and retrieval from lexical memory: Role of inhibitionless spreading activation and limited capacity attention", Journal of Experimental Psychology: General, 106, pp.226-254. Referenced in Glass, A.L. & Holyoak, K.J. (1986) Cognition 2nd edition, Random House, p.63.

9.    Leeper, R. (1935) "A Study of a neglected portion of the field of learning - the development of sensory organization", Journal of Genetic Psychology, 46, pp.41-75. Referenced in Glass, A.L. & Holyoak, K.J. (1986) Cognition 2nd edition, Random House, p.127. (Solution to Fig.2: a boy with a dog.)

10.  Dennett, D.C. (1971) "Intentional Systems" in Journal of Philosophy, LXVIII, 4, pp.87-106, reprinted in Dennett, D.C. (1986) Brainstorms: Philosophical Essays on Mind and Psychology, Harvester Press Ltd. UK.

11.  Byrne, R.M.J. (1989) "Suppressing valid inferences with Conditionals", Cognition 31, pp.61-83. (For a possible refutation see Politzer, G. & Braine, M.D.S. (1991) "Responses to inconsistent premisses cannot count as suppression of valid inferences", Cognition 38, pp.103-108.)

12.  Davenport, D. (1993) "Intelligent Systems: The Weakest Link?" in Kaynak O., Honderd G. & Grant E. (eds), Intelligent Systems: Safety, Reliability and Maintainability Issues, NATO ASI Series F, Vol.114, Springer-Verlag, pp.60-73.

13.  Grobstein, P. (1990) "Strategies for Analyzing Complex Organization in the Nervous System: I. Lesion Experiments", in Schwartz E.L. (ed), Computational Neuroscience, MIT Press.

14.  Zeki, S. (1993) "The Visual Image in Mind and Brain", in Mind and Brain: Readings from Scientific American, W.H. Freeman, pp.29-39.

15.  Dennett, D. (1991) Consciousness Explained, Little, Brown & Co.

16.  Damasio, A.R. & Damasio, H. (1993) "Brain and Language", in Mind and Brain: Readings from Scientific American, W.H. Freeman, pp.54-65.

17.  Goldman-Rakic, P. (1993) "Working Memory and the Mind", in Mind and Brain: Readings from Scientific American, W.H. Freeman, pp.67-77.

18.  Shatz, C. (1993) "The Developing Brain", in Mind and Brain: Readings from Scientific American, W.H. Freeman, pp.15-26.

19.  Grey Walter's Precognitive Carousel (1963). Referenced in [15], p.167.

20.  Kosslyn, S.M. & van Kleeck, M. (1990) "Broken Brains and Normal Minds: Why Humpty Dumpty Needs a Skeleton", in Schwartz E.L. (ed), Computational Neuroscience, MIT Press.

21.  Shepherd, G.M. (1990) "The Significance of Real Neuron Architectures for Neural Network Simulations", in Schwartz E.L. (ed), Computational Neuroscience, MIT Press.

22.  Fodor, J. & Pylyshyn, Z. (1989) "Connectionism and cognitive architecture: A critical analysis", in Pinker, S. & Mehler, J. (eds), Connections and Symbols (A special issue of the journal Cognition), Bradford Books, MIT Press.

23.  Bechtel, W. (1993) "Currents in Connectionism", in Minds and Machines 3, pp.125-153.

24.  Bechtel, W. & Abrahamsen, A. (1991) Connectionism and the Mind, Basil Blackwell Pub.

25.  Davenport, D. (1993) "Inscriptors: Knowledge Representation for Cognition", in Proc. 8th Int. Symp. on Computer and Information Sciences, Istanbul.
