EXPLAINING THE NERVOUS SYSTEM IN TERMS OF COMPUTER PROGRAMMING AND THE OBJECT-CLASS ABSTRACTION

Brian D. Josephson
Cavendish Laboratory, Madingley Road, Cambridge CB3 0HE

© B D Josephson 2002

Introduction

This paper argues that the functioning of the brain still seems mysterious to us, notwithstanding the large amount of detailed information gained about it in recent years, because an important possibility concerning the mechanisms underlying brain functioning has been ignored: the possibility that the brain makes effective use of the powerful tools that workers in computer science have developed over the years for use in computer programs. This is a variant of the idea that the brain is a kind of computer, the focus in the present context being on the idea of a program rather than on some machine that might execute a program. A program is a subtle way of describing the intended behaviour of a system, and in the usual computing context a piece of software known as a compiler translates the programmer's source code into machine code that the computer can process directly. The present proposal is that, under certain circumstances, source code might analogously be translatable into a nervous system architecture whose behaviour accords with the directives of that source code.

Source-code control of a computation

Minsky (1987) has shown that a number of programmable cognitive processes can indeed be translated into neural circuitry as the present proposals demand. However, what we have in mind is the possibility of translating the equivalent of a complicated integrated computational system (such as might be required to give a complete account of behaviour controlled by the nervous system) into neural circuitry, not just the fragments of such software discussed by Minsky. Such a system consists of a complicated collection of interrelated processes rather than what might be considered simply as an algorithm. The source code prescribes not only the algorithms but also the ways in which the various processes relate to each other.

In the case of the computer, the design of the compilation process (the process for translating source code into machine code) depends on two things: firstly, the information in the computer has to represent accurately, according to some set of conventions, the entities in the source code; and secondly, the compiled machine code for processes specified by the source code must generate behaviour that accords, given those conventions of representation, with the demands of the original source code. Achieving this is a conventional programming problem, and once it has been solved for a given programming language, the resulting compilation process will convert into corresponding machine code any source code that accords with the conventions stipulated by the language. One may say that the job of the compiler designer is to adapt the descriptions of the source code to the bare capabilities of the computer, while the programmer's job is to make the source code, which more resembles ordinary language, appropriate to the demands of the specific task with which he is concerned. Put another way, the programmer designs the process while the compiler designer creates the implementation method.

We are now in a position to make contact with the brain problem. The considerations above suggest that we think of the problem as a combination of process and implementation, the process being defined in terms of some suitable language, and the implementation being the realisation of the process so specified in terms of neural circuitry. In these terms, one requirement for this formulation to be relevant to solving our problem is that the language be expressive enough to characterise adequately the processes controlled by the nervous system, while also satisfying the constraint of implementability. The concept of objects in classes, as used in object-oriented programming, goes a long way towards enabling both conditions to be satisfied, as will be discussed.

Implementation issues for the brain

First, however, we discuss general matters of implementation. In accord with our discussion, implementation demands particular correspondences between constructs in the source code and structures in the nervous system. Existing experience with the nervous system leads one to expect these correspondences to be generally in accord with the following: neural signals correspond to transient information in a computation, specialised neural circuits to specific processes, and specific identifiers of a given type to specific neural systems (the latter being in essence the familiar representational character of systems in specific areas of the brain). Again, storage and retrieval of information, as per the source code, needs to be implemented by corresponding capacities in the nervous system.

Finally, we refer to a subtler kind of process, which enters in connection with the discussion of neural analogues to object-oriented programming that will follow, namely the dynamic creation of new systems corresponding to objects of a given type. In the programming context this is typically specified by a statement in a language such as Java (which we use for our illustrations), such as "variable = new Type()", in which a newly created object of the given type is assigned to a variable or to some other expression. In the nervous system case, some mechanism must designate a suitable neural system to represent a new object of a given type, linking it to the system corresponding to the variable or expression that is equated to it in the source code, making it possible thereby to access the newly created system on future occasions by requoting, in the source code, the variable or expression concerned.
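
As a minimal sketch in Java of what such a mechanism must emulate (the class and variable names here are purely illustrative assumptions, not part of the proposal itself), an object of a given type is created and bound to a variable, and the same object is reached again later simply by quoting that variable:

// Hypothetical sketch: 'Balance' stands for some entity type the nervous
// system deals with; the names are illustrative, not part of any real library.
class Balance {
    double correctionGain;          // a parameter to be tuned by experience
    Balance(double gain) { this.correctionGain = gain; }
}

class CreationExample {
    public static void main(String[] args) {
        // "variable = new Type()": the newly created object can be reached
        // again later simply by re-quoting the variable 'slipperyGround'.
        Balance slipperyGround = new Balance(0.3);
        System.out.println(slipperyGround.correctionGain);
    }
}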

These general correspondences are, as an initial hypothesis, assumed to be hard-wired into the design. Source code also includes expressions composed of sets of identifiers (qualified identifiers), and here we invoke specialised combinatorial neural circuitry, which associates specific neural systems with the combination of systems corresponding to the components. Given these correspondences, it is then necessary that the neural circuits involved act in accord with the processes indicated in the source code. Neural networks can in addition perform some computations that are difficult to execute with conventional computing, for example parallel processing and the learning of correlations. The source code should have the capacity to refer to such possibilities, for example by including a command to learn a correlation, and also to allow for variables representing a superposition of possibilities, rather like a fuzzy set. The question of how exactly such issues should be addressed is left for future investigation.

One process that is important in conventional programming does not transfer readily to the nervous system context, namely calling a subprocess and returning to the original process once the subprocess has completed. In conventional computation this is normally accomplished with a mechanism known as a stack, which is not very appropriate for our situation. In some cases, function calling with return can be accomplished using a flowchart equivalent, which can reasonably be emulated using neural circuitry, but a more powerful alternative involves creating a plan every time a function is called, the plan including both the function to be executed and a specification of the action that is to occur once the function has completed.
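
A minimal Java sketch of this 'plan' alternative, with hypothetical names, might package the called function together with its follow-up action, so that completion leads directly to the specified continuation rather than to a return via a stack:

// Hypothetical sketch of the 'plan' alternative to a call stack: each call
// packages the function to be executed together with the action to perform
// once it completes, so no return stack is needed.
class Plan {
    private final Runnable function;      // the process being called
    private final Runnable continuation;  // what to do when it has completed

    Plan(Runnable function, Runnable continuation) {
        this.function = function;
        this.continuation = continuation;
    }

    void execute() {
        function.run();
        continuation.run();   // resume the calling activity
    }
}

class PlanDemo {
    public static void main(String[] args) {
        Plan plan = new Plan(
            () -> System.out.println("adjust posture"),     // the called process
            () -> System.out.println("carry on walking"));  // the follow-up
        plan.execute();
    }
}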

One further point must be noted. In contrast with the optimistic view of certain neural constructivists (Quartz and Sejnowski 1997), according to which the specific details of the nervous system are not very important because the system shapes itself under very general conditions to the demands of the environment, the present proposals demand a high degree of specificity in the nervous system architecture, dictated by the requirement of mutually consistent representations of data and processes. Karmiloff-Smith (1992) has given arguments based on experiment for the contrary position, viz. that specialised circuits are important in development.

The object-class abstraction

We now turn to the issue referred to earlier, that of the language being sufficiently expressive to be able to characterise the great diversity of possible situations encountered and processes learnt, under the control of a supposedly fixed program. It is argued that the object-class abstraction of object-oriented programming can resolve this problem; the following account presumes some familiarity with programming languages in general and object-oriented programming in particular. The object-class abstraction can be viewed as derivative of an observation concerning regularity in the universe, which can simplify considerably the task of specifying ways of operating in the universe. In the usual formulation of this idea, one assumes that the world can be modelled as a collection of entities of a collection of specific types, known as classes. For each instance of a class that is encountered or is relevant, a corresponding 'object' is created in the computer's memory. The underlying assumption is that all entities of a given class are similar in that the same 'methods' can be applied to operations carried out with them (a method being a piece of code intended to perform a particular operation in conjunction with an entity in its class), provided that the features differentiating one member of a class from another are taken into account in the code for the method concerned. The object in the computer's memory contains the relevant information or processes, and the code for the method concerned, which is fixed, interrogates the memory whenever it requires information.
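
The following minimal Java sketch (with an arbitrarily chosen class, 'Door', standing in for any class of entity; the names are assumptions made only for the illustration) shows the abstraction just described: the method is fixed code shared by all members of the class, and it interrogates the individual object's stored information whenever it needs the differentiating details:

// Hypothetical sketch: all instances of a class share the same methods, while
// the object's stored fields supply the details differentiating one member
// from another.
class Door {
    private final double handleHeight;   // instance-specific information
    private final boolean opensOutwards;

    Door(double handleHeight, boolean opensOutwards) {
        this.handleHeight = handleHeight;
        this.opensOutwards = opensOutwards;
    }

    // A 'method': fixed code that interrogates the object's memory as needed.
    String openingInstructions() {
        return "reach to " + handleHeight + " m and "
             + (opensOutwards ? "push" : "pull");
    }
}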

Such an approach to programming can be applied to our problem: the program does not need to know in advance about the details of any entity it may encounter, as long as it knows in general terms what kinds of things it can do with an object. A 'static' function (one independent of the properties of a specific member of a class) can look out for members of a class, and when a candidate is encountered it can engage in suitable heuristics designed to gain proficiency in the methods which the program expects to be applicable to members of the class.

The developmental process, itself included in the source code, works systematically towards the discovery of appropriate values of the relevant parameters, which, when sufficiently confirmed, become incorporated into the object in memory. Alternatively, if the heuristics do not achieve success, the candidate may be marked as invalid as a class member (for example by means of an inhibitory mechanism). This looking out for entity types cannot be done indiscriminately, since otherwise the system might spend too much time on trivial things. To avoid this problem, a strategic aspect of the program must be invoked, directing attention to entities in ways most likely to be of value in the given context.
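
A minimal Java sketch, with hypothetical names and a deliberately trivialised heuristic, of such a static look-out member: it requires no particular instance, assesses a candidate, and either incorporates a confirmed object into memory or marks the candidate as invalid for the class:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a 'static' look-out function: it needs no particular
// instance, inspects candidates, and either stores a confirmed object or
// marks the candidate as invalid for the class.
class GraspableThing {
    final String label;
    double gripForce;                       // parameter refined by heuristics
    private GraspableThing(String label, double gripForce) {
        this.label = label;
        this.gripForce = gripForce;
    }

    static final List<GraspableThing> confirmed = new ArrayList<>();
    static final List<String> rejected = new ArrayList<>();

    // Static member: looks out for potential class members and tries to
    // establish working parameter values for them (a stand-in for the
    // developmental heuristics described in the text).
    static void consider(String candidate, double trialForce, boolean trialSucceeded) {
        if (trialSucceeded) {
            confirmed.add(new GraspableThing(candidate, trialForce));
        } else {
            rejected.add(candidate);        // analogue of an inhibitory marking
        }
    }
}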

The Concepts in Action

Illustrations of the entity-class idea can be found in processes such as walking, for example the subprocess of balancing. Balancing involves a collection of skills, for example detecting imbalance, correcting imbalance and maintaining balance. The general methods are similar in all contexts, but the details have to be established by trial and error. When these have been discovered, a neural system equivalent to an object in object-oriented programming is established which, it is assumed, can be called upon by the methods associated with balancing to generate, on interrogation, the signals needed to perform the relevant function. In different situations a learnt balancing process may fail (for example if the ground is very slippery or one is on a boat), and then one learns new parameters and a new object is created. The parameters may take the form of pointers or links to other parts of the system if, for example, one discovers a process such as holding on to a particular object applicable in a particular context. Since neural circuits can learn relationships, the objects in the nervous system context can contain this kind of information as well.
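
As an illustrative Java sketch (the names and the simple proportional correction are assumptions made only for this illustration), each learnt balancing context corresponds to an object whose parameters, and possible links to other systems, were established by trial and error, while the method code is common to all instances:

// Hypothetical sketch: one 'Balancing' object per learnt context. The methods
// are shared; the parameters (and, possibly, links to other systems, such as
// something to hold on to) differ from instance to instance.
class Balancing {
    private final String context;        // e.g. "firm ground", "boat deck"
    private final double correctionGain; // established by trial and error
    private final Runnable extraSupport; // optional link, e.g. hold the rail

    Balancing(String context, double correctionGain, Runnable extraSupport) {
        this.context = context;
        this.correctionGain = correctionGain;
        this.extraSupport = extraSupport;
    }

    String context() { return context; }

    double correctImbalance(double tilt) {
        if (extraSupport != null) extraSupport.run();
        return -correctionGain * tilt;   // signal generated on interrogation
    }
}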

A further example related to walking is provided by the process of changing direction, another fundamental aspect of the walking process just as balancing is. We propose similarly the existence of objects, or neural systems, encapsulating the knowledge relevant to direction and to modifications of direction in particular contexts. For example, one learns how to change direction, to walk in a steady direction, or to orient oneself relative to an object. One may speculate that there exist innate heuristic mechanisms devoted to the relevant learning processes. For example, walking in a steady direction may be learnt on the basis of the criterion that an object does not move relative to its background, while orienting to an object might be defined by discovering a direction-changing motion after which the object concerned satisfies this same criterion as one continues to walk, such a criterion being indicative of motion towards the object. In all cases, once a 'method' has been established in a given context it may provisionally be used without continually checking that such criteria are being satisfied, but the program may return to checking if an error condition is encountered.
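
A minimal Java sketch of the orientation heuristic speculated on above (the numerical details and the perceptual update are stand-ins, not part of the proposal): the heading is adjusted until the target's drift relative to its background falls below tolerance, the criterion taken here to indicate motion towards the target:

// Hypothetical sketch of the orientation heuristic: keep adjusting heading
// until the target no longer drifts relative to its background.
class Orienting {
    private static final double TOLERANCE = 0.01;

    // 'drift' is the target's apparent motion relative to the background
    // per step; a learnt perceptual method would supply it in practice.
    static double reorient(double heading, double drift) {
        while (Math.abs(drift) > TOLERANCE) {
            heading -= 0.5 * drift;   // turn so as to reduce the drift
            drift *= 0.5;             // stand-in for the perceptual update
        }
        return heading;               // target now satisfies the criterion
    }
}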

Our general explanation for the details of the nervous system design is therefore as follows: during the course of evolution, dealings with more and more classes of entity become incorporated into the design, on the basis of circuits that specify a set of methods appropriate to dealing with members of the given class. Such developments cannot be designated arbitrarily; they should only involve types of entity to which the above concept can apply, that is to say, types for which there exist methods of dealing with the given entity once the details of any given case have been established and a corresponding object-module has been created. During the course of evolution, more and more subtle kinds of entity are incorporated into the design, what matters being that there are methods associated with them that are of value. Details get added to the overall code to which the nervous system is assumed to correspond, allowing more and more types of situation and problem to be dealt with. The static members that look out for potential member instances to be investigated and incorporated within the system allow the potentials of each kind of situation to be fully exploited. A significant example of such a class is the hypothesised action-sequence class, characterised by methods corresponding to 'first_action()', 'next_action(action)', etc.; in other words, during a sequence specific pieces of information are learnt (by forming links) which make repetition of the sequence possible, by applying the method first_action to begin the sequence and next_action whenever an action in the sequence terminates (as implied, for example, by code such as

action = sequence_instance.next_action(action)

causing information as to the next action to be looked up for the object 'sequence_instance'). Such a process could be iterated, allowing action sequences to be built up hierarchically.
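
A fuller Java sketch of the hypothesised action-sequence class (the method names follow the text rather than Java naming conventions, and the learning mechanism shown is a placeholder assumption): links formed while the sequence is performed later allow it to be replayed via first_action() and next_action(action):

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the action-sequence class: links learnt while a
// sequence is performed allow it to be repeated later.
class ActionSequence {
    private String first;
    private final Map<String, String> links = new HashMap<>();

    void learn(String action, String followedBy) {
        if (first == null) first = action;
        links.put(action, followedBy);      // form the link
    }

    String first_action() { return first; }

    String next_action(String action) { return links.get(action); }
}

class SequenceDemo {
    public static void main(String[] args) {
        ActionSequence sequence_instance = new ActionSequence();
        sequence_instance.learn("reach", "grasp");
        sequence_instance.learn("grasp", "lift");

        String action = sequence_instance.first_action();
        while (action != null) {
            System.out.println(action);
            action = sequence_instance.next_action(action);  // as in the text
        }
    }
}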

Real computing systems tend to develop to a high level of complexity over time, as more and more frequently used processes get added to libraries. A similar mechanism of building up 'libraries' in response to the utility of their contents no doubt applies to the evolution of the nervous system.

Perception can be treated in a similar way. Perception probably begins with innate systems that can detect visual objects in a crude way. Object systems get created (not necessarily the same as systems associated with the object concept of the psychologists, as we shall see), which can then be linked in with activities where these objects participate; these systems then become elaborated by being connected with various kinds of feature detectors on the basis of whether the object is correctly recognised or not, leading to improved ability to recognise an object of relevance.

The present considerations are able to capture the subtleties relating to the object concept studied by psychologists, the development of which is known to progress through various stages. In the earlier stages it is considered that there is no object concept, in the sense that when an object is obscured an infant acts as if the object no longer exists. This can come about if the object structures discussed here do not have any significant memory associated with them (although, having been linked to feature detectors, they will be activated again on exposure to the same object and so recognise it). Tracking of an object through a period of occlusion is different, in that here we can usefully invoke a class of entity which can be called a tracking event: when such an event is detected it is linked temporarily to systems concerned with features of the moving object, so that during the lifetime of the 'tracking' object in the computer the feature detectors remain active, enabling the tracked object to be picked up again. This example again illustrates our key theme: the tracking of moving objects is an important component of perception, so a system develops for handling this kind of situation, containing various specialised methods. The tracking 'object' in the computer in this case exists only long enough to do its job; the question of whether something moving in the visual field is something that persists and has significance beyond the immediate moment is another issue, a different class of entity, to be dealt with by a different system.
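
A minimal Java sketch, under the assumptions of the present account (the names are hypothetical), of the short-lived tracking-event object: it exists only for the duration of the episode, keeping a set of feature links active so that the occluded object can be picked up again:

import java.util.List;

// Hypothetical sketch: a 'tracking event' object exists only for the duration
// of the tracking episode, keeping the relevant feature detectors active so
// the object can be picked up again after occlusion.
class TrackingEvent {
    private final List<String> activeFeatures;   // temporarily linked features
    private boolean alive = true;

    TrackingEvent(List<String> features) { this.activeFeatures = features; }

    boolean matches(List<String> observedFeatures) {
        return alive && observedFeatures.containsAll(activeFeatures);
    }

    void finish() { alive = false; }   // the object has done its job
}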

Higher Cognitive Functions

This leads us on to consideration of higher cognitive functions. The theme here is that when we step outside the realm of the here and now different classes of entity are important, requiring new systems for processing them, and new types of memory for objects. A key concept in this connection is Karmiloff-Smith's 'representational redescription' (Karmiloff-Smith 1992), which is the idea that after one has gained competence at one level one starts developing new processes, involving different, more abstract, forms of representation. Initially, the information involved in executing an action is implicit information, necessary in order to perform the action at all. Later, different representations develop, which may be viewed as referring to possible actions rather than necessarily current ones. The present approach would lead us to postulate a 'symbol' type of entity, associated with a collection of methods or processes involving the symbol, e.g. looking for the symbolised object. Once symbols have been established in this way, they can be incorporated into other classes of entity. An example would be a symbolic representation of an action, such as jumping across a stream, the object containing a collection of pieces of information about the action, in symbolic form, so that the action can be envisaged without being carried out. The object would have a primary method involving performing the action, symbols being transformed into corresponding actions (by a process that, in the manner discussed by Karmiloff-Smith, admits of improvements, and adaptations to varying conditions, unlike the objects which generate action directly), as well as other methods that extract symbolic information for use in other levels of activity. A further basic method connected with symbolic representation of action, learnt from experience, consists of a process whose output is the outcome of the action. Planning more generally can be built up by creating chains of such objects. Investigation of further details, which are clearly quite complicated, lies beyond the scope of the present work, but the basis of the analysis would lie in the discovery and detailed characterisation of the entity types that are most central, together with the methods that make use of them.
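
The following Java sketch (names and details hypothetical) illustrates the distinction being drawn: a symbolically represented action can be performed, but it can also be interrogated for its expected outcome without being carried out, and such objects can be chained into a plan:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a symbolically represented action: it can be
// performed, interrogated for its expected outcome without being carried
// out, and chained into plans.
class SymbolicAction {
    final String name;
    final String expectedOutcome;     // learnt from experience
    SymbolicAction(String name, String expectedOutcome) {
        this.name = name;
        this.expectedOutcome = expectedOutcome;
    }

    void perform() { System.out.println("performing: " + name); }

    String outcome() { return expectedOutcome; }   // used in planning
}

class PlanChain {
    public static void main(String[] args) {
        List<SymbolicAction> plan = new ArrayList<>();
        plan.add(new SymbolicAction("walk to bank", "at edge of stream"));
        plan.add(new SymbolicAction("jump across", "on far side"));
        for (SymbolicAction step : plan) {
            System.out.println("envisaged outcome: " + step.outcome());
        }
    }
}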

More advanced processes may make use of entities such as rules. A rule, for example, can be envisaged as a class of object associated with methods (used in algorithms belonging to other classes) such as antecedent, consequent and condition. A part of cognitive development can be identified with the accumulation of rule-type objects in memory, and their subsequent use.
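
A minimal Java sketch of a rule as a class of object (generic over the type of situation it applies to; the method names follow the text): algorithms belonging to other classes would consult its antecedent, consequent and condition:

import java.util.function.Predicate;

// Hypothetical sketch of a rule object: other classes' algorithms consult its
// antecedent, consequent and condition; cognitive development adds such
// objects to memory.
class Rule<T> {
    private final String antecedent;
    private final String consequent;
    private final Predicate<T> condition;

    Rule(String antecedent, String consequent, Predicate<T> condition) {
        this.antecedent = antecedent;
        this.consequent = consequent;
        this.condition = condition;
    }

    String antecedent() { return antecedent; }
    String consequent() { return consequent; }
    boolean condition(T situation) { return condition.test(situation); }
}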

What stops such components of the hypothesised mechanism from accumulating indefinitely? The answer, as with ordinary programming, is that a program, to be effective, has to make the best choices available so that it does not become too complicated. Certain classes of entity, such as rules, are fundamental to higher-level cognitive activity, and the program of the mind must make use of them.

We come finally to language. In common with authors such as Arbib (2000), we assume that language is the natural consequence of the coming together of capacities that have nothing to do with language as such. We assume that these antecedent capacities can be characterised in terms of particular object-types and their corresponding methods. This class information may be inherited by classes devoted more specifically to language-like activity (inheritance translating, in the nervous system context, into reusage where appropriate of the relevant neural circuitry). The present approach would make close contact with linguistic studies, it being anticipated that the classes, variables and methods would correspond closely to those studied in linguistics.
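
A minimal Java sketch of the inheritance suggestion (the choice of a 'GestureSequence' class as the antecedent capacity is purely illustrative): the language-serving class reuses the methods of a non-linguistic capacity, corresponding in the nervous system reading to reuse of the relevant circuitry, and adds only what is specific to language:

// Hypothetical sketch: a class serving language-like activity reuses
// ('inherits') the methods of an antecedent, non-linguistic capacity,
// adding only what is specific to language.
class GestureSequence {
    void produce(String element) { System.out.println("produce " + element); }
    void imitate(String element) { produce(element); }   // shared capacity
}

class SpokenUtterance extends GestureSequence {
    // Language-specific addition; the inherited methods stand for reused
    // neural circuitry in the nervous-system reading of inheritance.
    void name(String object) { produce("word for " + object); }
}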

The power of language appears to be related to the fact that specific languages are highly evolved systems, having power not available to generic systems by virtue of having become well adapted to the needs of their users. This suggests that use of a language should be equated with using a collection of resources (objects) belonging to a particular system, preference being given to using objects within the selected system. A system is not, however, completely fixed in time, as users may attempt to add to it to achieve goals not achievable within the existing system. Static methods seeking new members for classes are responsible both for this kind of expansion and for the expansion of an individual's system (typically a subset of a communal system) during the individual's acquisition of language. The objects of various types present in the nervous systems of the users of a system feature in the expression of the various methods utilising the information content implicit in these objects (e.g. the lexicon and grammatical practices). A detailed account of the kind of system envisaged would to a considerable extent parallel accounts given in linguistic studies, but would emphasise the way the development of a particular language was related to users' discovery of specific algorithms employing language; in the model these are equivalent to existing methods being applied to new members of a class that specify the parameters of the particular algorithm (a trivial example being the use of a new word to indicate a new kind of object). Again, the various details of universal grammar can be expected to be related to the constructs of the parts of the program that handle language. A more complete analysis of language would invoke models of how the various methods in the general language system would apply in specific contexts (or, one might say, a discussion of specific techniques using language to fit particular requirements), and would thus account for their more general utility. One may anticipate a synthesis of processes motivated by linguistic models and processes of the kind used in computer design (an example of such a synthesis being the hypothesis that a phrase, as conceived within linguistics, supplies or expresses the information contained in a single object). The whole subject of language is extremely complicated and no attempt will be made to follow up these ideas in the present work.

Summary and Conclusions

In summary, we have described a powerful new approach to the explanation of the capacities of the brain, based on integrating certain ideas from computer science into the neurosciences, leading in principle to very specific accounts of the functioning of the nervous system in programming terms. Particular neural circuits are seen as embodying methods (in the sense of object-oriented programming), able to achieve particular goals when they operate in conjunction with objects in a given class, themselves implemented as neural circuits of a given type, which get trained through experience to work effectively with the method-bearing circuits so as to achieve the intentions of these methods. The many component systems of this kind act as sections of a single program governing the behaviour of the whole nervous system, precise relationships of the kind utilised in compiler design ensuring that the parts work coherently together, just as the parts of a complicated conventional program combine into a single integrated functional whole. This integrated program embodies a strategy for seeking valid new instances of the various entity-types as is appropriate, and through this process cognitive development occurs.

The approach is not a purely theoretical or philosophical one, the details being related both to behaviour and to activity in the nervous system. It should therefore be seen as an adjunct to conventional approaches, allowing all the ways of studying brain and behaviour to be tied to a single unified model. However, the discussion in this particular paper is less of the nature of a theory than an introduction to the new ways of thinking possible when one applies ideas from computing to the very different situation of cognitive functioning. Ideally, this paper would have included detailed support for these ideas in the form of full working out of the ideas in the context of specific problems. That, however, would have required resources not available to the author, and it is hoped that others will take up the challenge instead.

Acknowledgements

I am indebted to Prof. Nils Baas for discussions of his hyperstructure concept which motivated a number of these developments, and also to Prof. Andrée Ehresmann for stimulating discussions. Drs. Hermann Hauser and David G. Blair contributed useful ideas in early stages of this project. I wish to thank Trinity College, Cambridge, and the University of Cambridge, Department of Physics for their support for travel and computing facilities.

References

ARBIB, Michael, The Mirror System, Imitation, and the Evolution of Language, in Imitation in Animals and Artifacts, Nehaniv, C. and Dautenhahn, K., eds. (2000).

JAVA tutorial, The, http://java.sun.com/docs/books/tutorial/java/concepts/

KARMILOFF-SMITH, Annette, Beyond Modularity: a Developmental Perspective on Cognitive Science, MIT (1992).

MINSKY, Marvin, The Society of Mind, Heinemann (1987).

QUARTZ, Steven R. and Terrence J. SEJNOWSKI, The neural basis of cognitive development: A constructivist manifesto, Behavioral and Brain Sciences, 20(4) 537-596 (1997).