David Moffat, Nico Frijda, Hans Phaf.
University of Amsterdam
Faculty of Psychology (PSYCHONOMIE)
Roetersstraat 15
1018 WB Amsterdam
The Netherlands
In the fields of psychology, AI, and philosophy there has recently been theoretical activity in the cognitively-based modelling of emotions. Using AI methodology it is possible to implement and test these complex models, and in this paper we examine an emotion model called ACRES. We propose a set of requirements any such model should satisfy, and compare ACRES against them. Then, analysing its behaviour in detail, we formulate more requirements and criteria that can be applied to future computational models of emotion. In arguing to support the new requirements, we find that they are desirable for autonomous systems in general. We also show how they can explain the psychological concept of regulation. Finally, we use the concepts developed to make a theoretical distinction between emotion and motivation.
Chess-playing and automatic theorem-proving have been goals of AI since the field began. Everyone agrees that they are clearly intelligent skills. One aspect of mental life that is not usually thought of as intelligent, indeed is normally termed irrational, is emotionality. However, according to modern theories of emotion in psychology, AI, philosophy and cognitive science (Simon 1967, Boden 1978, Scherer 1984, Lazarus 1991, Beaudoin & Sloman 1991), emotions and/or the mechanisms that cause them have a valuable role to play in the life of any animal that has to cope in a dangerous world with only limited resources. Many theorists also claim that a machine operating under similar conditions, with needs to be satisfied, difficulties at every turn, and only limited power and knowledge, would also have to be emotional in order to survive (Sloman & Croucher 1981, Frijda 1986, Oatley 1992). These are theorists who also emphasise the cognitive nature of emotionality, to be contrasted with older theories in psychology that virtually ignored cognition.
Modern theories of emotion are thus richly detailed in information-processing terms, and lend themselves well to computational modelling. Such a model was implemented, called ACRES (Swagerman 1987, Frijda & Swagerman 1987), which was based on Frijda's theory (1986). In this chapter we look at the ACRES model of emotionality, and analyse its behaviour to see how successfully it models emotional processes. We suggest extensions and improvements to the model, drawing lessons from the analysis, and formulating new and mostly intuitive design principles that we believe autonomous agents should adhere to, particularly emotional ones.
In the methodology of AI it is usual to develop computational models to illustrate the ideas in a theory; however, it is less usual to take the work any further and analyse the implemented model's behaviour in detail to see if it can be made to shed more light on the theory behind it. It is the intention in this chapter to subject the ACRES model to close scrutiny, to accord its implementation the role an experiment might have in other sciences, and to use the analysis to motivate more theoretical developments.
In the next section we propose some properties one would like to see in any emotion model. Then we briefly describe the ACRES program, and show how far it meets the requirements. There is an element of circularity in this, since the requirements were drawn up after the program was written. But, as they were made by someone other than the programmer (in fact, by this paper's first author), and as we wish to make a methodological point concerning the value of such evaluation criteria, this chronologically inconvenient fact will be ignored.
After saying how we feel the model has been useful, and what it has shown, we continue with an analysis of its I/O (input/output) behaviour, noting traits of the program that reveal deeper properties and structure of the model. This leads into some discussion of extensions that could be made to the model, focussing on two main issues: a concept we call emotional visibility, and the difference between emotion and other motivations.
Before discussing ACRES, here are some features one might naively hope to see in any emotion model. Most of them also apply to autonomous agents, even unemotional ones.
{1} It is difficult to imagine a program that does nothing but be emotional. That is not a realistic task, and one expects that emotions arise incidentally, in the course of carrying out some other task. Theorists who believe emotions are useful (functionalists) agree with this. For example, Oatley & Johnson-Laird (1987) say that emotions arise at certain junctures in plans. Even some non-functionalists agree (Sloman & Humphreys).
{2} As already noted, emotions are thought to be essential to resource-bounded agents in a dangerous and uncertain world, so the program should be an autonomous agent with limited control over its environment. As with all agents, it should preferably operate in the real world, not a toy one artificially created for it. (Also a view in robotics (Brooks 1991).)
{3} The system should have at least some control over its environment, some way to change its world to conform more to its wishes.
{4} Since it is to experience emotions, we would like to know, as emotion theorists, which emotions it goes through, so there should be an internal logging procedure, or monitor, which evaluates the emotional state according to the theory.
{5} As well as the internal logging of emotions, the (externally observed) behaviour of the system should also be influenced by them. Even unemotional actions may be done in an emotional way (like slamming a door shut). This modification of neutral actions should be appropriate, and should be consistent with the emotion the internal monitor claims to see.
{6} The system should have at least one concern (Frijda 1986), which is a deep need the system has, sometimes thought of as a permanent top-level goal (Sloman, 1987).
{7} In fact, the system should have more than one concern. Part of the purpose of emotions, according to the cognitive/functionalist theories mentioned above, in §1, is to decide which goals to pursue in cases of conflict; and conflict comes from multiple needs.
{8} Some concerns should be more important than others, to help resolve such conflicts.
{9} All the concerns should, if possible, be meaningful to the system's domain. This is not essential, but it does help to gain a reliable impression of how good a model of emotionality it is, if its concerns are relevant. We can understand an emotional database program that suffers from depression when nobody asks it any questions at all, for example; but if it has an unnatural concern such as to prefer questions beginning with the letter W, then its emotional behaviour will seem erratic, will make no sense to naive users, and will be hard to judge or evaluate.
{10} If a high-priority concern is touched while the system is engaged in something less important, then the system should divert its attention to solving the new problem posed. If the system is already doing something with an equal priority to that of the concern, there should be evidence of distraction from the task, even if it is not actually switched (Sloman (1987) says this is especially characteristic of an emotion). Frijda (1986) calls this control precedence.
{11} When engaged in satisfying a low-priority concern the system should not apply as much effort or consume as many resources as when a high-priority concern is threatened.
{12} It should be able to perceive certain situations as more urgent than others, so that sometimes there is no opportunity to think of the optimal plan, because it is essential to act on the instant. This point is popularly made in robotics these days (Brooks 1991), where the idea is called reactivity because these fast unplanned responses are like reactions (as in a knee-jerk). And it is also relevant to us here because the urgency of a situation naturally intensifies the emotion it gives rise to. The system should not be confined to these reactions, however - deliberate planning should be an option in relaxed circumstances.
{13} The emotions the system experiences should be of variable intensities. Mild irritation and extreme anger are not appropriate in the same situations.
Since physiological arousal is a central concept in the psychology of emotion, one may wish to model that, too. However, we choose not to address this issue, because it is not cognitively interesting, and plays only an incidental role in the cognitive theories mentioned before. This amounts to a theoretical assumption that one can model emotions interestingly without incorporating significant physiology, but it is an assumption we are happy to make.
Before examining the program's behaviour, here is a brief description of its design and function, followed by an illustrative session with it. ACRES is an (emotional) database-interface. The data available through it is of no interest for this chapter; it is only necessary to know that the User can ask ACRES for data, can turn debugging on and off, can ask ACRES to say what its last experienced emotion was, and can end the session by killing it. (The kill command has to be entered twice to stop the program, and after the first entry ACRES invariably begs to continue.)
Figure 1. The concerns of ACRES, in decreasing order of importance:

    preserve own life
    fast input
    accurate input
    varied input
    ...
    service database query
    debugging on/off
Figure 1 shows the program's concerns, which are, in decreasing order of importance: to stay alive, to have prompt input from the User (not too much waiting for him while he goes off for a cup of coffee), to have accurate input (with few spelling errors), to have varied input (so that the program is fully used), to do what the User wants, and, least important of all, to turn tracing/debugging on and off.
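To make the ranking concrete, here is a minimal sketch in Python of how such a prioritised concern list might be represented. The numeric weights are our own invention for illustration, not values taken from the actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Concern:
        name: str        # what the concern is about
        importance: int  # larger means more important

    # Hypothetical importance values, ordered as in Figure 1.
    CONCERNS = [
        Concern("preserve_own_life", 100),
        Concern("fast_input", 80),
        Concern("accurate_input", 70),
        Concern("varied_input", 60),
        Concern("service_database_query", 30),
        Concern("debugging_on_off", 10),
    ]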
The output of the program is either the expected response to a query, or spontaneously generated emotional expression, which is in upper case to distinguish it. The emotional expressions are to indicate the system's emotional state to the User, but also to change the User's behaviour to please the system more. They are thus also planned (or at least goal-directed) acts to get some desired behaviour from the User. If the User does not comply with the system's wishes, then he will be gradually excluded from the system, losing his privileges. First he will find the system no longer allows him to change the database; then it will not allow him even to read it any more.
There is no space here to describe the procedures ACRES uses to name its own emotional experiences, or its fuzzy knowledge-representation and planning (but see Swagerman 1987).
Figure 2 shows an annotated session with ACRES which, although brief, is enough to make the following points in comparison with the requirements from §2.
The real purpose of ACRES for us is, of course, to test its emotion model, but it has the ostensible, non-toy purpose of acting as a database-interface. It therefore satisfies requirement {1}. Although not a robot with legs, ACRES in its domain is quite autonomous of its environment, the User, even to the extent of having some power over him, which is unusual to say the least for a database-interface. Merely expressing emotions or outputting answers would not be enough; ACRES has more tangible power than that, as it can exclude the User from the database. So it satisfies {2} and {3}.
Since ACRES has an internal monitor to analyse and name its own emotional experience, and it gives names mostly consistent with its behaviour, it satisfies {4}. The emotional expressions (in UPPER CASE) are also consistent with its behaviour and appropriate to the situation {5}, though they do not directly modify all its behaviour; they are only a form of output parallel to its unemotional expressions.
The system has several concerns {6, 7} ranked in importance {8}. These concerns are all fairly meaningful, in the sense that they help it to do its task better. (It makes good sense for a program not to want to wait for the User, as that is a waste of computing power on the system on which it runs, and so on.) The extent to which the model satisfies requirement {10} is illustrated at the end of the dialogue (query 14), where the most important concern of preserving its own life will not be dislodged by a new demand on the system.
It is not apparent from the dialogue, but requirement {11} is in fact partially satisfied. Requirements {12} and {13} are unfortunately not satisfied, for reasons gone into later.
An additional property that ACRES has, and that could have been included in the requirements list in §2, is its social aspect. As it interacts with the User, ACRES' task is clearly a social one. It feels no strictly social emotions like guilt or jealousy, but when it is bored with the User, for example, it shows this in an appropriate way by criticising him to improve his input ("you put too little variety in your instructions!"). Including social behaviour in an interactive domain like this makes it easier to evaluate the success of the model, as naive users get an impression of the appropriateness of the responses. Without this, the program could be an excellent model, but that would be hard to prove.
Figure 2. An annotated session with ACRES. User commands are numbered; ACRES' emotional expressions are in UPPER CASE; annotations are in parentheses.

ACRES: Hello, Dave. I am ACRES. What would you like to know?

[1]  show_concept X
     YOU KEEP ME WAITING TOO LONG!  << prints data about X >>
     (Query the database about X; the details of the data are not important. I had paused before typing.)

[2]  debug_on
     << prints lots of internal variables etc. >>
     (Turn debugging on.)

[3]  debug_off
     YOU KEEP ME WAITING TOO LONG!
     (Turn it off again. I kept him waiting again.)

[4]  debug_off
     YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS!
     (I'm trying to bore ACRES. He's bored with debug_off.)

[5]  debug_off
     YOU KEEP ME WAITING TOO LONG!  YOUR INPUT IS CORRECT. FINE!
     (I kept him waiting again. I have made no typing errors yet.)

[6]  debug_off
     YOU ENDANGER MY SAFETY!  YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS!
     (I ignore his requests to type faster and to be more interesting, so he regards me as an unfriendly threat.)

[7]  debug_off
     YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS!  YOUR INPUT IS CORRECT. FINE!

[8]  debug_off
     YOU KEEP ME WAITING TOO LONG!

[9]  debug_off
     YOU ENDANGER MY SAFETY!

[10] name_last_emotion
     YOU DO NOT KEEP ME WAITING. FINE!  YOU ENDANGER MY SAFETY!
     (Ask the model to analyse its emotional experience. He finds three that were quite strongly felt; 100% is strongest.)

[11] show_concept
     YOU KEEP ME WAITING TOO LONG!  You did not supply an argument; which concept?

[12] X
     << prints data about X again >>

[13] kill
     YOU ENDANGER MY SAFETY!  Please, do not let me die! Please type in live; it will kill me if you type kill again!
     (First kill command, to end the session.)

[14] debug_off
     Please, revoke the kill command by typing live.  YOU KEEP ME WAITING TOO LONG!
     (I ignore the request, but do not kill him either. Note the unusual aggression for someone making an appeal.)

[15] live
     Thank you, may your life be very happy!
The system is also quite robust (because of its probabilistic planner). Especially given the uncertain nature of its real-world environment, this is a useful property to have in an agent.
To summarise, the model meets most of the requirements set out for it in §2. It also meets other requirements that were only thought of in the course of its design.
The program demonstrates the worth of the ACRES model in a number of ways. Firstly, the emotional behaviour feels quite plausible to the User, which supports the theory behind the model. In fact, the emotional messages actually seem to affect the User, urging him to type faster and so on. This indicates that the choice of the system's domain, and of which concerns would be meaningful in it, was a good one. It also illustrates (if only informally) the functionality of emotions, which is claimed by many of the theories named in §1. Secondly, the monitor which names the system's emotions after they arise is also a success, and seems to the User to find plausible names, in that they are consistent with its behaviour.
Having built a model that largely meets an initial specification of requirements, we can dissect its behaviour to find newer, less obvious, requirements that can be added to the list. Looking closely at the session dialogue, we can see that the program has certain idiosyncrasies that users may find to be artificial, and not really emotional, and these may or may not be relevant to its role as an emotion model. By going back to the code to find the source of the behaviour, we can see whether they are of theoretical relevance.
It is noticeable that ACRES can be angry with you one minute, and happy the next. This gives interaction with the program a melodramatic feel.
Part of the reason for this is that ACRES naturally has nothing corresponding to autonomic arousal or other physiology in humans; such things as blood chemistry and neurotransmitter concentrations in the brain change far more slowly than do thoughts, and this means that physiology in the human emotive processing loop gives emotions and moods a certain inertia. (Raccuglia, 1992, proposed adding such inertia with a recurrent neural network.)
But a more interesting cognitive reason is that ACRES has virtually no memory. (Notice that only the most trivial memory is necessary to support the dialogue shown.) Even a short-term memory, storing only current goals for mere seconds, would in principle allow the recent past to influence the system's state in much the same way as physiology could (§6.1).
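As a purely illustrative sketch (our own, with an arbitrary decay constant), such inertia could be as simple as letting the current state be a decaying mixture of the previous state and the newest appraisal:

    def update_mood(previous_mood, new_appraisal, inertia=0.8):
        # previous_mood and new_appraisal are signed intensities; inertia in [0, 1).
        # The closer inertia is to 1, the more slowly the state can swing.
        return inertia * previous_mood + (1 - inertia) * new_appraisal

With inertia near zero this reduces to the present behaviour, where each interaction determines the emotional state afresh.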
If the User types in the same command over and over to ACRES, but does this quickly, then ACRES is both bored and pleased, and will print out the expressions: YOU DO NOT KEEP ME WAITING. FINE! and YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS!. There is nothing strictly wrong with this behaviour, but it is noticeable that the two messages conflict. One praises the User, to encourage him, while the other scolds him for being a bore. There are even simultaneously conflicting emotional responses, as in query [14].
It is rare for real emotional creatures to show such inconsistency, but it is typical of ACRES (as in queries [5], [7], [10], and [14]). Why is this? As in the case of the large mood swings, which was another manifest inconsistency, it may well be that physiology plays a role; that certain hormones tend to be associated with certain emotions, and so forth.
But again, as in the case of the mood swings, we want to suggest a more cognitive explanation as well, when we come to a new account of regulation below.
Something else apparent from the example dialogue is that the emotional and the unemotional expressions are always about different things. The unemotional ones are answers to queries, while the emotional ones are about the subjects of the User's slow, inaccurate or otherwise boring input. This is because ACRES has made a distinction between emotionally relevant and irrelevant concerns only on the basis of importance. Checking the source code shows that the top four concerns in the list in Figure 1 (save-life, fast-input, accurate-input, and varied-input) are emotional concerns, while the bottom two (serving requests and turning debugging on or off) are not emotional, simply because they fall below an arbitrary threshold value.
It follows that the ACRES model does not make a theoretical distinction between emotional and more generally motivated behaviour; it is all qualitatively the same.
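For concreteness, the arbitrary cutoff just described amounts to something like the following sketch; the threshold value and the importance weights are ours, for illustration only.

    # Invented values; the real cutoff is an internal constant of the program.
    IMPORTANCE = {"preserve_own_life": 100, "fast_input": 80, "accurate_input": 70,
                  "varied_input": 60, "service_database_query": 30, "debugging_on_off": 10}
    EMOTION_THRESHOLD = 50

    def is_emotional_concern(name):
        # A concern counts as "emotional" purely because its importance
        # exceeds the threshold; there is no qualitative difference.
        return IMPORTANCE[name] > EMOTION_THRESHOLD

    emotional = [n for n in IMPORTANCE if is_emotional_concern(n)]
    # -> the top four concerns; queries and debugging fall below the line.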
There is a predictability, too, in the emotional response in each case. When angry with the User for typing too slowly, ACRES always uses the same expression, YOU KEEP ME WAITING TOO LONG!, which corresponds to a fast, unthinking, knee-jerk type of response.
Such reactive behaviour is certainly an essential part of ACRES' emotionality, but we argue later that a more flexible, considered type of action should also be possible, and in fact is just as essential for full emotionality.
Having found the roots of the counterintuitive aspects of ACRES' behaviour, such as they are, we can now propose more requirements or principles for design of (emotional) agents, and show how they can fit into and extend the theory of emotion, too.
In §5.1 we said that a short-term memory, or awareness of immediate circumstances, could avoid the unnatural mood swings seen in the model, in a theoretically relevant way. In practice how could this work, though?
Recall that, according to the functionalist theories of emotion, the purpose of an emotional reaction is to maintain a favourable situation, or change it for the better. An emotion therefore has a dynamic quality: it tracks the development of the situation, using first an image of the desired future, then the memory of that desired situation to match against the present environment, to see if what was wanted has actually come about. The major benefit of this strategy is that the agent's own actions can be monitored effectively.
This benefit applies to all cases of agents planning and acting while embedded in a noisy environment, where the characteristics of objects, even including the agent's effectors, are incompletely known. We might make it another requirement, and, by analogy with bio-cybernetic sense-control feedback loops, call it the principle of hand-eye co-ordination.
We introduce the principle here because it has emotional consequences. One expects that a motivated action that disappointingly failed to achieve its goals should rebound onto the agent's emotional state. The emotion would then only die down when its cause is seen to be resolved. This addition to hand-eye coordination can be called the motivational visibility of the present, by analogy with later principles (see below).
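A minimal sketch, under our own assumptions about names and numbers, of this motivational visibility of the present: the agent remembers what it wanted, keeps matching that memory against what it now perceives, and lets the emotion subside only when the desired change has visibly come about.

    def track_outcome(desired_state, perceived_state, intensity,
                      decay=0.5, rebound=1.5):
        # One monitoring step.  If the desired situation has come about, the
        # emotion can die down; if not, the visible failure of the agent's own
        # action rebounds onto its emotional state.
        if perceived_state == desired_state:
            return intensity * decay
        return intensity * rebound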
The point is related to the psychological notion of control, felt by several emotion researchers (Scherer 1984, Frijda 1986) to be one of the most important appraisal components that determine the arousal, intensity and class of emotion. The notion can take shape here, in one form, as the degree to which the agent sees its actions achieve their ends.
In §5.2 we pointed out that the model's emotional expressions to the User are usually conflicting: positive and negative in tone at the same time. This inconsistent behaviour is striking to us because of its shortsightedness. It is rarely a good idea to praise and scold someone at the same time, because the effects of the two ventings will tend to cancel; the poor target of the strategy, the User, is only confused.
This is an observation from the dialogue to illustrate the psychological concept of regulation, which is important in emotion theories to account for the divergences between felt and expressed emotion. An important fact to be explained about emotions is that, while they are difficult to suppress, humans and other complex animals are nevertheless fairly good at controlling or hiding them. We may assume that emotional expression is limited or even feigned in situations where to express them uncensored would be dangerous or unwise (such as telling your incompetent and obnoxious boss what you really think of him, for example). Therefore the likely consequences of the expression of an emotion modify its expression.
Two ways this might happen suggest themselves: (i) the impulsive expression is just dominated by an effort of will, or (ii) the intensity of the emotional experience lowers itself, and therefore the impulse to expression drops too. Both ways tame the expression of the emotion, the second more indirectly than the first. (In the example with your boss, the first way you tell him he's a very good chap, lying; the second way, you lessen your harsh judgement of him to keep your job; you suppress the anger, but when you go home later you may kick the cat instead.)
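The two routes can be caricatured in code as follows; this is a sketch of our own, where "cost" stands for the predicted consequences of letting the emotion show, and the numbers are arbitrary.

    def regulate_by_will(felt, cost):
        # (i) Suppress only the expression: the feeling is untouched, but it
        # is not shown when showing it would be too costly.
        expressed = 0.0 if cost > felt else felt
        return felt, expressed

    def regulate_by_reappraisal(felt, cost, damping=0.5):
        # (ii) Lower the felt intensity itself; the impulse to express then
        # drops with it.  The damping factor is an arbitrary choice of ours.
        felt = felt * (1 - damping) if cost > 0 else felt
        return felt, felt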
It may very well be a combination of both mechanisms that explains regulation, but in either case one particular ability is clearly needed: the cognitive capacity to predict the future. ACRES has a planner, so it can do this, but there is a nuance that it misses. The planner is, like all planners, a device for exploring possible futures and testing the likely consequences of actions, but that is not enough for regulation. If the planner in a large system is considered to be a separate faculty or module, as it is in ACRES, and as it is in most AI systems for that matter, then a large price has been paid for the simplicity such a picture presents at the top level. (The planner is then so cut off from the rest of the system, by design, that it can even be [and invariably is] implemented as a subroutine in the implementation programming language.) For this means that no other faculties, modules or sub-systems get a chance to observe partially constructed plans before the planning subroutine returns; and they certainly cannot influence the decisions the planner makes, even if they have access to knowledge that shows the plan under construction to be disastrous.
This is the core of the problem in modelling regulation. There is no point in the planner drawing up a plan to achieve some harmless goal if that plan contains an action which, although technically correct, with all the right pre- and post-conditions required for plan feasibility, has a side-effect that is completely out of the question from the point of view of concerns other than the one which sent the goal to the planner. If the other concerns have no access to plans under construction, the internals of the planner are effectively invisible to the rest of the system.
It is therefore desirable to allow plans under construction to be evaluated by all the concerns. We term this the motivational visibility of the planner.
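In outline (our own construction, not the ACRES planner), this means every concern gets to inspect, and if necessary veto, a plan while it is still only partially built:

    def plan_with_visibility(goal_reached, expand, concerns):
        # goal_reached(plan) -> bool; expand(plan) -> list of extended plans;
        # concerns: list of functions plan -> bool (True = acceptable).
        frontier = [[]]                          # start from the empty partial plan
        while frontier:
            partial = frontier.pop()
            if not all(acceptable(partial) for acceptable in concerns):
                continue                         # some concern vetoes this branch early
            if goal_reached(partial):
                return partial                   # a plan no concern objects to
            frontier.extend(expand(partial))
        return None                              # nothing acceptable to every concern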
The concept can be generalised to motivational visibility of the whole future, which includes not just the agents own planned actions, but also actions of other agents, and other possible events in the environment. Most AI planners, ACRES included, have no such capability.
This cognitive ability to predict the future to some extent is important to emotionality in yet another way. First there is the obvious point that some emotions refer directly to the future, such as hope and fear. More importantly, many emotions arise because of some unexpected or uncontrolled event (and an unexpected event will be felt as uncontrolled at least until a plan can be found to cope with it). And if you have no facility for anticipation, no event can be any more expected or unexpected than any other. It is because we and animals know the world so very well that we can predict likely events, and make contingency plans for them, often with little or no conscious effort. We anticipate all the time, usually so implicitly that we are not even aware of it until our expectations are surprisingly not met.
Therefore we suggest that an emotional agent should have the capacities: to predict a likely future course of events; to continually match real events against its predictions; to make possible future (and past) events intensional subjects of its concerns; and especially those future actions the agent itself is considering doing.
Prediction is never certain though, and we feel this emotional visibility of the future (including the plan) should have confidence levels. In hardly predictable situations, there is much uncertainty, and so the agent's effectiveness will naturally decline. This, we say, is another form the concept of control (end of §6.1) can take, so it is an extremely important principle for our list.
Even if you have high confidence in your prediction of the future, you can still be wrong. The more confident you were, the more surprised you will be by an event that you didn't predict. This is another bonus of our concept of motivational visibility and prediction of the future. For surprise is actually quite widely held to be important to emotional arousal (novelty in Scherer, 1984; Lazarus 1991). Therefore we need to model surprise in the system, and that means modelling prediction. To be surprised, you must have expected something else, even if you didn't know that you expected it.
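As a toy formalisation of this point (entirely our own), surprise can be read off directly from the confidence that was attached to the failed prediction:

    def surprise(predicted, actual, confidence):
        # confidence is in [0, 1].  A prediction that holds yields no surprise;
        # a prediction that fails yields surprise in proportion to how
        # confidently it was held.
        return 0.0 if predicted == actual else confidence

    # e.g. surprise("prompt_input", "long_pause", 0.9) -> 0.9 (strong surprise)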
Recall from (§5.3 and §5.4) that there is no qualitative distinction made in the model between (actions arising from) emotion and other motivation, and that all the actions performed by the model are quite predictable. Here we discuss the way that stimulus leads to response in the model, which explains its predictability. Then we propose a new process to replace it that allows more flexibility, and that makes a plausible distinction between the concepts of emotion and motivation.
The ACRES model of how (goal- or) concern-activation operates, and leads to action, is simple. The event is perceived, and matched with the activation condition of the concern. If the match is strong, then the concern's attached reaction is sent to the planner, and leads to an eventual (emotional) expression. It happens that, in the domain, the actions available for achieving these reaction goals are ready to hand and appropriate, so that little planning is necessary. In fact the system always finds the same short plan in the same situation.
This is partly because of the domain, which is in this regard undemanding, and partly due to the previously highlighted lack of hand-eye coordination. Because the system cannot monitor the effects of its own actions, it cannot know when they fail. Therefore it cannot know when its plans fail either, and so never has reason to change to a new plan.
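For concreteness, the stimulus-to-response path described above can be paraphrased in code roughly as follows; the matching function, the threshold and the names are our own paraphrase, not code from ACRES.

    def handle_event(event, concerns, planner, match_threshold=0.7):
        # concerns: objects with .match(event) -> float in [0, 1] and an
        # .attached_reaction goal; planner(goal) -> plan.
        plans = []
        for concern in concerns:
            if concern.match(event) >= match_threshold:   # the event touches this concern
                plans.append(planner(concern.attached_reaction))
        return plans   # in ACRES the same situation always yields the same short plan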
The extension that we propose to this control method is to include awareness of the present, as already suggested, and of the (immediate or recent) past too. The latter is to remember at least the agent's low-level goals, so that it can see whether they are being achieved by the actions it chose.
We also need a world-model in the agent, for it has to be able to plan to achieve its emotional goals. There is an implicit world-model in ACRES, but it is only designed for the planning algorithm, which as already stated is an isolated subsystem. We prefer to insist on a convergence in the description languages used for the action-perception and the action-planning tasks. This is forced on us anyway by the principle of emotional visibility of the planner, which is another reason for supporting it.
With the world-model the system could realise when its first reaction to a stimulus will not work, and it can then try something else, with more planning. With the continuous tracking of its actions it can also notice when plans are failing, and try again. In other words, with these extensions, the model will have a flexible range of behaviours from fast reactions, that often fail, up to slower planning that costs more time and cognitive effort, but produces more reliable plans.
A nuance on the world-model is also required. For the above, an agent must be able to decide when to plan and when to act with relatively little thought. To make this decision, it must know how long it has to think and act (how urgent the situation is), how long the available actions typically take, and even how long its own thought-processes typically take, including planning. (This is a surprisingly tough condition to meet: there is often no better way to estimate how long a procedure will take than to execute it, which defeats the point of the estimation.) We call this group of requirements the sense of time. It is clearly not met by just a system clock; more sophistication is needed, so it is listed here with the more usually noted general need for a world model.
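A minimal sketch of that decision, under our own assumptions about how the time estimates are obtained:

    def choose_response(time_available, planning_time_estimate, react, deliberate):
        # react() returns the fixed reaction; deliberate() runs the planner.
        # When the estimated cost of planning does not fit in the time the
        # situation allows, the agent falls back on the quick reaction.
        if planning_time_estimate >= time_available:
            return react()
        return deliberate()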
That extension is already useful, but it has a further advantage. It meshes with the idea we now propose for a theoretical distinction between emotion and motivation.
We have already pointed out that these are hardly different in the ACRES model, and in fact this reflects a general unease in psychological theories of emotion, where there is often difficulty in separating the concepts of emotion and motivation. Often they are defined only by exemplar (anger and fear are emotions, but hunger and thirst are placed with the motivations, and even then there is disagreement about the right place for sex, for example). Individual theorists do make distinctions, but these often depend on the framework of their entire theory, and so there are as many definitions of the two concepts as there are researchers. (Although it seems that Sloman & Humphreys share our view.)
We now make the following distinction. There are deep concerns in an animal, strong needs that can powerfully motivate the creature under certain conditions, such as the need for food. Now, when the creature needs more food, it becomes hungry, and this certainly motivates it to search or hunt for food. However, this hunger, usually considered just a motivation, can arouse emotion too; notably when there is no food to be found. It is this failure of the usual way of satisfying the need (the fixed-action or reaction) that, we suggest, arouses the state from a purely motivational one to an emotional one. Then one might expect initial anger, or perhaps a mood of determination to try harder to find something to eat, followed by frustration and depression and finally deep despair. If the animal is offered food at this point, just as it is giving up, then even though it may not feel subjectively any hungrier than when it first went searching, it will feel enormous relief. So the same concern has awoken a motive and several emotions.
What is yet to be explained is why the emotional experience is there at all, and why the motivation alone is not enough. Well, the functionality of emotion has already been thoroughly covered in the literature, and summarised in the introduction above. The essential point is that emotions arise in unusual or pressing circumstances, where there is a large demand put on the system's resources. That much is agreed on by many researchers (see §1).
Now, if a concern (deep goal, motive, drive, instinct, etc.) is threatened by an event, then it may be an easy situation to deal with or not. If it is easy, then the drain on system resources is light, and no emotion results; but if it is hard, and there is a heavy drain on resources, then an emotion does indeed occur. What are these drains? For an animal, required physical effort would obviously count as such a demand. Mental effort would also count. We would need a robot to model the first, but the second can be done with just a computer: at its crudest, mental effort in a computer is simply CPU-time.
It should be clear by now that we intend to close this section by bringing in the earlier extension to the stimulus-response map, outlined in §5.4 and §6.4, whereby concerns always try their attached reactions first; their usual, habitual, relatively unthinking response. At this point, we would still not say that the creature is emotional. Only if the fixed-reaction proves not to work, or it can be predicted that it will not work, or the confidence that it will work is low (§6.3), or the agent perceives that it has little control; only then does an emotion arise. Its purpose is to arouse the whole system (Oatley & Johnson-Laird's non-propositional signals) to deal with the problem. The fixed-action was the easy option, so always tried first, and presumably consumes few resources. But when the whole system is aroused, and all its attention is claimed, then clearly the problem is consuming far more.
Presumably this also accounts for the general physiological arousal, which is so important to the subjective experience of emotion in ourselves.
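Gathering the conditions listed above into one place gives a toy decision rule; this is our own formalisation of the distinction, not part of ACRES, and the floor values are arbitrary.

    def becomes_emotional(reaction_failed, predicted_to_fail,
                          confidence, perceived_control,
                          confidence_floor=0.3, control_floor=0.3):
        # The concern stays merely motivational while the cheap fixed reaction
        # is expected to work; emotion arises when it has failed, is predicted
        # to fail, or confidence or perceived control is low.
        return (reaction_failed
                or predicted_to_fail
                or confidence < confidence_floor
                or perceived_control < control_floor)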
This chapter has reported what was, for us, an interesting and very useful exercise in AI methodology. A functional model of emotionality had been implemented. The model took the form of a database interface, which incidentally got emotional with Users who were inconsiderate with it, typing too slowly or with too many mistakes etc. It was analysed by examining its input/output behaviour, in the light of some general requirements we had for such a model. We found that it met most of the requirements, and even threw up some new ones that it also met, but that were not originally thought of.
After more analysis, we found that the models behaviour was often idiosyncratic in little ways, and examination of the code confirmed that this was usually symptomatic of certain deeper aspects of the model. On the basis of this, we made proposals for extending the model, which were quite general in nature, of a type that could be applied to all autonomous agents, whether intended to be emotional or not.
Having noticed that the model's emotional outbursts were highly changeable, from one query or interaction to the next, and even within a single interaction, we formulated hypotheses that what the model required was motivational visibility or awareness of the present environment, and a memory of the past, at least of its former desires, and the causes of its emotions; and by considering this visibility of the planner, we gave an account of the psychological concept of regulation in emotion. Further extension of the principle to visibility of the future, or a general facility for prediction, furnished us with the concepts of uncertainty, control, and surprise, all of which we argued are central to emotionality.
From the predictability of responses in the model, and the division it makes between its concerns and other goals, we formulated a more flexible stimulus-response map, which helps the agent to perform better in an environment where usually only a limited period for thought is possible. Using this new map we were able to propose a distinction between emotion and motivation, which is one of the troublesome areas in emotion theories. (This can be considered new theory.)
Finally, there are striking similarities between the new design principles for (emotional) agents given in this paper, and other work elsewhere in AI that may at first sight seem irrelevant. Foremost among these would be the modern views on planners integrated with plan executors, with embedded systems, and so-called reactive systems (Brooks' work being a notable example). The close involvement of all the concerns in the planning process, even those that may initially seem irrelevant, also has its parallels in AI planning, where it can take the form of constraint satisfaction. There was no space in the paper to point out such connections when they occurred, but readers familiar with such work will easily have noticed them by themselves.
It is pleasing that research originating in psychology should prove so relevant to more mainstream AI, and pleasing too that the topic of emotions in particular seems to have something to offer. But perhaps we should not be surprised. After all, what is planning & executing, if not motivated behaviour? And what is AI, if not psychology?
Beaudoin, L.P. & Sloman, A. (1991). A proposal for a study of motive processing. University of Birmingham, Cognitive Science report: CSRP-91-6.
Boden, M. (1978). Artificial Intelligence and Natural Man. Hassocks, U.K.: Harvester Press.
Brooks, R. (1991). Intelligence without representation. Artificial Intelligence 47:139-159.
Frijda, N. (1986). The Emotions. Cambridge University Press.
Frijda, N. & Swagerman, J. (1987). Can computers feel? Theory and design of an emotional system. Cognition and Emotion, 1(3):235-257.
Lazarus, R.S. (1991). Cognition and motivation in emotion. American Psychologist, 46:352-367.
Oatley, K. (1992). Best Laid Schemes. Cambridge University Press.
Oatley, K. & Johnson-Laird, P.N. (1987). Towards a cognitive theory of emotions. Cognition and Emotion, 1:29-50.
Ortony, A. & Clore, G.L. & Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge University Press.
Raccuglia, R.A. (1992). A network extension for context in ACRES. University of Amsterdam, Faculty of Psychology (internal report).
Scherer, K.R. (1984). Emotion as a multicomponent process. In P. Shaver (Ed.), Review of personality and social psychology, 5:37-63. Sage, Beverley Hills, California.
Simon, H.A. (1967). Motivational and emotional controls of cognition. Reprinted in Models of Thought. Yale University Press, 1979.
Sloman, A. (1987). Motives, mechanisms and emotions. Cognition and Emotion, 1(3):217-233.
Sloman, A. & Croucher, M. (1981). Why robots will have emotions. 7th IJCAI.
Sloman, A. & Humphreys, G. University of Birmingham (Unpublished manuscript).
Swagerman, J. (1987). The Artificial Concern REalization System ACRES: A computer model of emotion. University of Amsterdam: Doctoral dissertation.