1. In the ongoing discussion of the "frame problem" (McCarthy 1963; McCarthy & Hayes 1969; Hayes 1992) there has been a steady proliferation of problems and problem-names (in a way that is itself reminiscent of a frame problem!):
2. Van Brakel (1992, 1993) has listed a "family of frame problems," arising from the question of "[A] [w]hich things (facts, etc.) change and which don't?", "[w]hat are the necessary and sufficient conditions for an event[?]," "How can [A] be represented?", and "How can/do we reason about [A]?" (van Brakel 1992: 1.1). The family then includes "the persistence problem, temporal projection problem, inertia problem, qualification problem, ramification problem, extended prediction problem, installation problem, planning problem, holism problem, relevance problem, and so on" (1.2). These in turn are "more or less closely related to "the general problem of stating `laws of motion' which adequately describe the world" [Ford & Hayes 1991 (F&H): p. x]; the prediction problem [Tenenberg's chapter in F&H: p. 232]; the induction problem [Fetzer in F&H: p. 55]; "the general problem of default reasoning" [Perlis: F&H p. 190]; "the larger, and uglier, counterfactual validity problem" [Stein: F&H p. 225]; the "Frame Problem" for natural language understanding, learning, and analogical reasoning [Nutter: F&H p. 177]; and other problems." (1.4)
3. Hayes & Ford (1993: 2.4) go on to raise the ante still further with "the inference problem... [and] the perception problem or the updating problem" (and they could also have tossed in the credit assignment problem and the variable binding problem). They see many of these problems as distinct. Van Brakel (1992: 2.2) instead suggests that "[t]he frame problem is a special case of the problem of complete description" [emphasis deleted] and Fetzer (1993a, b), that it is a special case of the "problem of induction."
4. So there seem to be both ecumenical and hegemonic sentiments in the air. For my part, I'd like to cast my vote for another unitary candidate (if only in the hope of keeping the number of problems tractable), one that might likewise subsume many of the rest as special cases. It too has already been cited in this discussion, but I don't think it was characterized quite accurately: "[H]ow can we ever attach `formal' symbols to the actual world? This is what Harnad (1990) calls the `symbol grounding problem'" (Hayes & Ford 1993: 4.2).
5. The symbol grounding problem is not just the problem of attaching formal symbols to the world, for an UNGROUNDED symbol system (like English or geometry) will serve that purpose admirably (as long as it has the right formal, syntactic properties). The way ungrounded symbol systems manage to serve such purposes, however, is by being so used by US. The symbols need not have any intrinsic meaning of their own; they need only be systematically interpretable by us as meaning what they mean, and then our own minds and actions can mediate the connection between the symbols and what they can be interpreted as being about.
6. This is perfectly fine as long as our goal is only to build systems that are useful to us, for our minds can then always bridge the gap; but if these systems are meant to be models of US -- i.e., of what's going on in our heads, unmediated by what's going on in anyone else's head -- then their symbols had better be GROUNDED directly in the real-world objects, events, and states of affairs that they are otherwise merely systematically interpretable (by us) as being about.
7. An ungrounded symbol system is essentially like a book (whether the book is static, on paper pages, or dynamic, as in, say, a computerized dictionary or encyclopedia). It's obvious that a sentence in a book does not mean something in the sense that a thought in my head does. The sentences in a book (and all their systematic inter-relations -- with other sentences in the book, with the language as a whole, and with the truths and fictions about the real world) are merely strings of arbitrary formal tokens that are systematically INTERPRETABLE as being about what they are about -- interpretable by thinkers like you and me, whose thoughts, on pain of infinite regress, cannot themselves be merely strings of arbitrary formal tokens that are systematically interpretable... etc. That is the symbol grounding problem: The connection between the symbols in a symbol system and what they are interpretable as being ABOUT must be grounded in something other than just the mediation of outside interpreters if they are to be candidates for what is going on in our heads when we think.
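By way of a deliberately trivial illustration (the tokens and the rule below are hypothetical, chosen only for brevity), here is what a purely formal, syntactic system amounts to: rules operating rulefully on the arbitrary shapes of tokens, with all of the meaning supplied from outside:

    # A toy ungrounded symbol system: its one rule applies to the
    # arbitrary SHAPES of tokens; nothing in the system fixes what
    # the tokens mean.
    rules = {("P", "Q"): "R"}

    def derive(a, b):
        # Purely syntactic "inference": match shapes, emit a shape.
        return rules.get((a, b))

    # Systematically interpretable by US as, say, "raining & cold ->
    # snowing," or as any other domain with the same structure; the
    # system itself is silent about what "P," "Q" and "R" are about.
    print(derive("P", "Q"))   # -> "R"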
8. My own candidate solution happens to be to try to ground a system's internal symbols in its robotic capacity to discriminate, manipulate, categorize, name, describe, and discourse coherently about the objects, events and states of affairs that its symbols are systematically interpretable as being about (at a human scale, indistinguishable from the way we do it). In short, symbolic capacities are to be grounded in robotic capacities. An ungrounded symbol system has only one set of constraints: purely formal, syntactic ones, operating rulefully on the arbitrary shapes of the symbol tokens. A grounded symbol system would have a second set of constraints, bottom-up ones, causally influencing its internal symbols and symbol combinations, constraints from the internal, nonsymbolic machinery underlying its robotic capacities, especially categorization (Harnad 1987, 1992; Harnad et al. 1991), which is what would allow the system to pick out what its symbols are about without the mediation of external interpretation.
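As a schematic sketch only (this is not a report of an implementation; the nearest-prototype categorizer and all the names below are stand-ins for whatever nonsymbolic machinery actually does the work), the second, bottom-up set of constraints might look like this:

    import numpy as np

    class GroundedLexicon:
        """Elementary symbol tokens causally tied to nonsymbolic
        category detectors (here, a stand-in nearest-prototype
        classifier over sensor feature vectors)."""

        def __init__(self):
            self.prototypes = {}   # token -> prototype feature vector

        def learn_category(self, token, samples):
            # Bottom-up constraint: the token's use is shaped by
            # sensor data, not just by relations to other tokens.
            self.prototypes[token] = np.mean(samples, axis=0)

        def name(self, sensor_vector):
            # The system itself picks out what the symbol is about,
            # with no external interpreter mediating.
            return min(self.prototypes,
                       key=lambda t: np.linalg.norm(
                           self.prototypes[t] - sensor_vector))

A symbolic layer on top would still combine such tokens under formal, syntactic rules; the difference is that its elementary terms could no longer mean just anything an outside interpreter cared to project onto them.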
9. Now what has all this to do with the frame problem? Formal systems do very well in the world of formal, Platonic objects: An axiomatic system can successfully "second-guess" all the truths of arithmetic (I hope no one will cite Goedelian limits on provability as instances of the frame problem!). Natural language seems to do almost equally well with the world of real objects, events and states (especially since mathematics and physics are subsets of natural language). In both cases, however, it is clear that the symbol systems do not "speak for themselves" (except perhaps for the mathematical formalist who claims that the only object he is interested in is the uninterpreted formal system itself): They are USED by us according to an interpretation that we HAVE IN MIND, which itself is connected to what it is about through our bodies (in particular, through our sensorimotor systems). There is no well-formed sentence in natural language (including "colorless green ideas sleep furiously") that we cannot "gloss," as long as all its terms are grounded in thoughts that are about what the words are about.
10. What is a typical instance of the frame problem then? First, it invariably involves an ungrounded symbol system onto which we have hitherto successfully projected a systematic interpretation: we may have attributed to it, for example, an understanding of a situation, because it encodes sentences that are interpretable as our own knowledge about the situation and it draws inferences, makes predictions and performs operations on the situation that again square systematically with our own. The frame problem arises when something goes radically wrong: when the system does something that does not square with our interpretations -- and not just in a minor way that can be remedied by adding another piece of knowledge, just as we would remedy any gap in our own knowledge, but a major incoherence, something that destabilizes our entire systematic interpretation. One often-repeated example is that the system, which seems to "know" so much about what's going on in a room, may, to our surprise, behave as if it "believed" that everything in the room ceases to exist when one leaves the room, a contingency with which it simply had never been challenged while we were confirming its conformity with our systematic interpretations.
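A toy version of that room example (hypothetical code, drawn from no cited system) shows where the incoherence comes from: an updater that rebuilds the world state from an action's explicit effects alone "forgets" everything else, which is just what frame axioms -- statements of what does NOT change -- were introduced to prevent:

    # The facts the system "knows" about the room.
    state = {"in(me, room1)", "on(book, table)", "lamp(on)"}

    def leave_room_naive(state):
        # Rebuilds the state from the action's explicit effects alone:
        # the book and the lamp "cease to exist" when I leave the room.
        return {"in(me, hallway)"}

    def leave_room_framed(state):
        # A blanket persistence rule carries forward every fact the
        # action does not explicitly change.
        effects = {"in(me, hallway)"}
        removals = {"in(me, room1)"}
        return (state - removals) | effects

    print(leave_room_naive(state))   # {'in(me, hallway)'}
    print(leave_room_framed(state))  # the book and the lamp persist

The catch, of course, is that stating in advance what persists under every action, for every fact, is precisely what refuses to scale.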
11. The problem is described as calling for a means of framing what is and is not altered by a change, but it is clear that the "change" need not be one caused by an "action," like leaving the room; in principle it can come from any new data. At any point, a symbol system has only dealt with a small amount of data (relative to human-scale performance). That's why such systems are often called "toy" systems. Toy performance, relative to human-scale performance, is highly UNDERdetermined (just as a specific billiard shot is underdetermined relative to all possible two-body interactions: many theoretical interpretations of that one shot are possible, but far fewer are possible for the set of all physically possible two-body interactions). Yet in projecting a systematic (usually natural-language) interpretation onto such a toy, one is at the same time OVERinterpreting it (typically overinterpreting it mentalistically, in terms of what it "knows," "thinks," "means"). And, in my view, a "frame" problem arises every time we run up against evidence that we have exceeded the limits of that underdetermined toy; evidence that we are overinterpreting it -- and have been all along.
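The billiard analogy can be made concrete with a deliberately artificial case (the two "theories" below are hypothetical one-liners): both agree on every datum the toy has so far been tested on, yet they diverge on the very next input -- the moment at which the overinterpretation is unmasked:

    toy_data = [(1, 1), (2, 4), (3, 9)]   # every case tested so far

    theory_a = lambda x: x ** 2                     # one interpretation
    theory_b = lambda x: x**2 + (x-1)*(x-2)*(x-3)   # another, equally consistent

    assert all(theory_a(x) == y == theory_b(x) for x, y in toy_data)
    print(theory_a(4), theory_b(4))   # 16 vs. 22: the interpretation breaks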
12. The optimistic solution to "scaling up" is that more and more of the same -- more and more ungrounded sentences, pushing the frame's limits wider and wider -- will eventually shrink the remaining "frame problems" to only those that we, the interpreters, are also prone to. So we will either not notice them or cease to regard them as evidence that there is something wrong with this kind of model in the first place.
13. I am more pessimistic. I think the frame problem keeps rearing its head because there is something intrinsically wrong with an ungrounded symbolic approach to modeling the mind (if not to building useful tools for systems with minds). I do not think knowledge can be "framed" with symbols alone, be they ever so encyclopedic. I think nothing less than the real world of objects, events and states of affairs that the symbols aspire to be about is needed, not to "frame" the symbols, but to ground them -- in the robotic capacities that we life-size human beings so clearly have.
Fetzer, J. H. (1993b) Philosophy Unframed. PSYCOLOQUY 4(33) frame-problem.10.
Ford, K.M. & Hayes, P.J. (1991) Reasoning Agents in a Dynamic World: The Frame Problem. Greenwich: JAI Press.
Harnad, S. (1987) The induction and representation of categories. In: Harnad, S. (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.
Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clark & R. Lutz (Eds.) Connectionism in Context. Springer-Verlag.
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (D.W. Powers & L. Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991.
Hayes, P.J. (1992) Summary of "Reasoning Agents in a Dynamic World: The Frame Problem" (Ford & Hayes 1991, Eds.) PSYCOLOQUY 3(59) frame-problem.1.
Hayes, P.J. and Ford, K.M. (1993) Effective Descriptions Need Not Be Complete. PSYCOLOQUY 4(21) frame-problem.5.
McCarthy, J. (1963) Situations, Actions and Causal Laws. Stanford Artificial Intelligence Project, Memo 2.
McCarthy, J. & Hayes, P.J. (1969) Some philosophical problems from the standpoint of Artificial Intelligence. In: B. Meltzer & D. Michie (Eds.) Machine Intelligence 4. Elsevier.
van Brakel, J. (1992) The Complete Description of the Frame Problem. PSYCOLOQUY 3(60) frame-problem.2.
van Brakel, J. (1993) Unjustified Coherence. PSYCOLOQUY 4(23) frame-problem.7.