From: Axford Mike (mike@mfaxford.co.uk)
Date: Thu Mar 01 2001 - 19:17:41 GMT
http://cogprints.soton.ac.uk/documents/disk0/00/00/05/45/index.html
> STEIN:
>The calculation model of computation goes hand-in-hand with the idea of
>black box (or procedural) abstraction. This is the equation of a
>computation with the functional result that it computes over its input.
>Black-box abstraction is a powerful technique that permits reasoning
>about systems at a fairly high level, e.g., combining functional pieces
>without considering the details of their implementations. Without
>black-box abstraction, it is difficult to imagine that much of the
>history of modern software development would have been possible.
Axford:
This seems to be a good way of looking at a computational system. It
makes building larger systems much easier, as we do not need to know
exactly how everything works. It also makes simplifying a large,
unknown system such as a person much easier: we can break the system
up into several smaller subsystems and are then left with several
smaller, more manageable problems.
> STEIN:
>Certainly, the computational metaphor enabled computer science to focus
>on the organisation of sequences of steps into larger functional units
>without worrying about transient voltage levels or multiple
>simultaneous transitions within the hardware. This way of thinking
>about computation also let us ignore the occasional power fault, the
>mechanical misfire, delays in operator feedback, or other human
>activities. By hiding the details of hardware's actual behaviour behind
>artifices such as the digital abstraction, tremendous advances in
>computational science were made possible.
>If we had to directly manage the variations in voltage across each
>individual component in our computer's motherboard, we would be hard
>pressed to write even small programs. The digital abstraction, looking
>at those voltages only when in stable configurations and even then
>regarding them as ones and zeros, allows us to treat the computer as a
>discrete logical artefact.
Axford:
This certainly makes life easier. Trying to program in analogue
electronics would be near impossible: the exact voltage across a
component varies over time depending on the environment. For
instance, the temperature of a resistor affects its resistance, which
in turn affects the current flowing through it and the voltage across
it. It is much easier to work with digital signals; errors can still
occur, but they are very unlikely, because it would take a huge
change in the environment to shift the voltages and currents enough
to turn one digital signal into the other.
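To make the point concrete, here is a rough sketch in Python (my own
illustration, not from Stein's paper; the voltage levels and noise
figures are invented) of how the digital abstraction collapses a noisy
analogue quantity into a stable symbol:

    import random

    V_HIGH = 5.0         # nominal logic-high voltage (assumed figure)
    THRESHOLD = 2.5      # decision point between the two logic levels

    def analogue_voltage(nominal, noise=0.2):
        """Return the nominal voltage disturbed by a little environmental noise."""
        return nominal + random.uniform(-noise, noise)

    def to_logic_level(voltage):
        """Collapse a continuous voltage into the discrete symbol 0 or 1."""
        return 1 if voltage >= THRESHOLD else 0

    # A fraction of a volt of drift never flips the bit; only a swing of
    # more than 2.5 V from the nominal level would do that.
    print([to_logic_level(analogue_voltage(V_HIGH)) for _ in range(10)])

Every sample prints as 1 even though the underlying voltage is never
exactly 5.0 V, which is the whole point of the abstraction.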
> STEIN:
>A concrete example of the limitations of the calculation metaphor was
>provided by a senior developer at a major software company. Although
>his company is able to hire some of the best computer science graduates
>produced, he complained of difficulty finding students who can write
>programs in which many things are happening simultaneously. I
>originally assumed that he meant that their new hires had difficulty
>with some of the finer points of synchronisation and concurrency
>control. He corrected this impression, explaining that his problem was
>"journeymen programmers" who didn't know how to think concurrently. Our
>students are learning to decompose and solve problems in a way that is
>problematic even for today's software market.
Axford:
I think that most of us will struggle to think concurrently, as it is
something we as humans find difficult. The well-known game of patting
your head whilst rubbing your stomach shows this quite well: it is very
hard to achieve and takes a lot of concentration. Subconsciously we
can do several things at once, but we find it very hard to consciously
do two completely separate things at the same time.
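As a programming parallel, here is a minimal Python sketch (my own,
with made-up task names) of two separate activities running
concurrently rather than one after the other, which is the kind of
structure Stein's developer says new hires find hard to think about:

    import threading
    import time

    def pat_head(times):
        for i in range(times):
            print("pat head", i)
            time.sleep(0.10)

    def rub_stomach(times):
        for i in range(times):
            print("rub stomach", i)
            time.sleep(0.15)

    t1 = threading.Thread(target=pat_head, args=(5,))
    t2 = threading.Thread(target=rub_stomach, args=(5,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    # The two loops interleave unpredictably; neither waits for the other.

There is no single "what happens next?" story here, only two activities
whose steps interleave.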
> STEIN:
>If the hardware-software line is fading, the line between the computer
>and its environment is following rapidly. In ritualised or regimented
>transactions, we are increasingly replacing people with computers.
>Computers answer our telephones, supply our cash, pay our bills, sell
>us merchandise. Computers control our cars and our appliances. They
>co-operate and collaborate with one another and with the world around
>us. The traditional metaphor, with its "what happens next?" mentality,
>leaves little room for the users or environmental partners of a
>computation. A new theory of computing must accommodate this fluidity
>between computer and user and physical environment.
Axford:
True, but these are all very simple examples. The computation involved
is still very basic; counting out money is not a particularly hard
task, and a five-year-old child can do it.
> STEIN:
>Today's computations are embedded in physical and virtual environments.
>They interact with people, hardware, networks, software. These
>computations do not necessarily have ends, let alone results to
>evaluate at those ends. They are called agents, servers, processors,
>entities. As an evocative example, consider a robot. For a robot,
>stopping is failure. A robot is not evaluated by the final result it
>produces; instead, it is judged by its ongoing behaviour. Robots are
>interactive, ongoing, partners in their environments.
Axford:
Is this purely computation. In the description of computation we have
been using computation is symbol manipulation. Is that all such as
system is doing or is there more to it. A robot moving around a room is
using transducers (sensors and motors) so that it can move and avoid
bumping into things is a motor a symbolic device, is the movement part
of the system or is it an effect of the system. The electrical input to
the motor can be treated as a symbol it's either on or off, possible
having states between on and off for speed control. If this is purely
symbolic does this give cause to think that Intelligence can be done in
a purely symbolic form, a robot like this is showing some aspects of
intelligence, it knows not to bump into things.
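Here is a toy sketch (again my own guess at what a purely symbolic
treatment might look like, with invented names and thresholds) of such
a robot, where both the sensor reading and the motor input are handled
only as discrete symbols:

    def sense(distance_cm):
        """Collapse a continuous range reading into a symbol."""
        return "OBSTACLE" if distance_cm < 30 else "CLEAR"

    def decide(symbol):
        """Map a sensory symbol onto a motor symbol."""
        return "TURN" if symbol == "OBSTACLE" else "FORWARD"

    def actuate(command):
        """The motor input is itself symbolic: each wheel is simply on or off."""
        return (1, 0) if command == "TURN" else (1, 1)

    for reading in [120, 80, 25, 10, 90]:
        print(reading, "->", actuate(decide(sense(reading))))

Inside the loop nothing but symbols are manipulated; the movement
itself happens outside, as an effect of the system.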
> STEIN:
>For example, this approach makes it easier to contextualize
>traditionally hard-to-fit-in topics such as user interfaces. If
>computation is about what to do next, what role could a user possibly
>play? But if computation is about designing the co-ordinated activity
>of a community, a user is simply another member of the community within
>which the software system is embedded. Rethinking the computational
>metaphor turns the discipline on its side, giving us new ways to
>understand a wide range of phenomena.
Axford:
This appears to go against what we have traditionally been taught as
computation as a set of rules for going from one state to another. Does
our view of computation need to change or does this idea involve
computation and something else. I think that our view of computation may
need to change to involve interaction with forces outside the system
like a human user. No real life system works entirely within itself
there is always some form of outside interaction.
> STEIN:
>The kinds of questions to which this example lends itself typify the
>issues of modern software design. How reliable does communication
>between the entities need to be? (In this case, not every signal need
>reach the motor-monitor; lossy communication is generally adequate.)
>Whose responsibility is transmission of the signal: push or pull? (In
>this example, I have allocated that task to the sensor-monitor, a
>signal "push".) What kinds of latencies can be tolerated? (This depends
>on the mechanical properties of the robot within its environment.)
>Under what circumstances can this robot reasonably be expected to
>perform correctly?
Axford:
How does a lossy signal relate to computation and symbol manipulation,
How do we do anything with a symbol that might be there but might not,
Is this. For this to work we may need some form of feedback system, can
we do this in a purely symbolic form? I think it may be possible but
will pose several difficult problems, Many systems like this that use
feedback will take time to respond and will often overshoot the point,
In control theory it is impossible to make a system that can respond to
changes immediately and get be accurate in it's movement's there is
always a trade-off between accuracy and speed. This sort of trade-off
will also occur in any intelligent system that responds to it's
environment.
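A deliberately simplified sketch of that trade-off (the update rule
and the numbers are invented for illustration): at each step the
controller corrects some fraction, the gain, of the remaining error.
A small gain creeps up to the target without overshooting; a large
gain gets there faster but overshoots and rings before settling.

    def simulate(gain, target=1.0, steps=12):
        position = 0.0
        trace = []
        for _ in range(steps):
            error = target - position
            position += gain * error      # feedback correction
            trace.append(round(position, 3))
        return trace

    print("low gain (0.2): ", simulate(0.2))   # slow, never overshoots
    print("high gain (1.8):", simulate(1.8))   # fast, overshoots then rings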
> STEIN:
>A robot is not this kind of beast. The left hand cannot wait for the
>right to conclude its computation; like a group of schoolchildren or a
>massive corporation, co-ordinated activity is its only viable option.
>In order to successfully program a robot, one must learn to think in
>terms of co-ordination frameworks, protocols, interfaces. This is the
>stuff of software engineering. Indeed, a brief experience programming a
>robot is a software lifecycle in an afternoon.
>One of the most interesting things about physical robots is that the
>world provides a sufficiently dynamic environment as to make precise
>behaviour almost non-repeatable. Lighting conditions change; initial
>positions vary; wheel slippage is unpredictable. These things change as
>the robot is executing. A robot does not wait for the world to complete
>its computation before acting; instead, the robot operates concurrently
>and interactively with the world in which it is embedded. As a result,
>running a real robot in the unconstrained real world invariably
>provides new challenges and new test conditions in a way almost
>entirely lacking in current computer science classrooms.
Axford:
This again is not traditional computation. As individuals we struggle to
perform two tasks simultaneously. As a group however we can do complex
tasks like this, maybe this should be a guideline for building
intelligent systems. We shouldn't attempt to build a single system but
have several similar systems that can work together as a group and can
give each of the other systems a suggestion which can then be used or
ignored similar to how a group of people work together in achieving a
common goal. Whilst individuals are intelligent on our own our real
intelligence comes out when we work together as a group in achieving a
common goal or set of goals.
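A small sketch (entirely my own construction, not anything from the
paper) of that "suggestions which can be used or ignored" idea, with
several simple advisers proposing actions and a crude arbiter picking
the most popular one:

    from collections import Counter

    def avoid_obstacles(state):
        return "TURN" if state["distance"] < 30 else "FORWARD"

    def save_battery(state):
        return "STOP" if state["battery"] < 10 else "FORWARD"

    def explore(state):
        return "FORWARD"

    ADVISERS = [avoid_obstacles, save_battery, explore]

    def decide(state):
        suggestions = [adviser(state) for adviser in ADVISERS]
        action, _ = Counter(suggestions).most_common(1)[0]
        return suggestions, action

    print(decide({"distance": 20, "battery": 80}))
    print(decide({"distance": 90, "battery": 5}))

No adviser is in charge; any one suggestion can be outvoted by the
rest of the group.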
> STEIN:
>One might imagine that this new system is built out of two distinct
>components sequenced in an entirely conventional way. First, the floor
>plan processing module would study the map and create a representation.
>Subsequently, Mataric's robot would use that representation to navigate
>to the desired location. This would certainly be the
>traditional (GOFAI) approach. The cognitive robotics story is not that
>simple.
>Instead, I exploited the robot's existing interactive properties.
>Rather than using an independent "map-processing" component, the robot
>interacts with the map as a virtual sensory environment, "imagining"
>that it is inside that environment. There is no separate
>map-to-internal-representation functional module. Instead, Mataric's
>existing robot-cum-community is coupled to (i.e., embedded in a virtual
>environment consisting of) a very simple piece of code that keeps track
>of x, y, and heading co-ordinates within the floor plan. This
>interactive map-entity processes "move" requests (generated by
>Mataric's original code) by updating its internal co-ordinates and
>returning simulated sensory readings as though the robot were actually
>standing at the corresponding point in physical space. This is the
>entire extent of the code that I added to Mataric's.
Axford:
This is getting similar to what humans do. We learn things like maps and
then use a representation of this map when navigating around a place.
For example when we first arrived at the university we didn't know where
things were but we used simple maps and did some exploring and know now
where things are. If robots can do the same thing is this a step towards
intelligence. If so does this mean we are getting a step closer to
Artificial Intelligence.
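My guess at the rough shape of the "interactive map-entity" Stein
describes (the grid floor plan, the method names and the sensor model
here are all my own assumptions): it keeps x, y and heading, accepts
"move" requests, and answers with simulated sensory readings as though
the robot were standing at that point in the floor plan.

    FLOOR_PLAN = [
        "#########",
        "#       #",
        "#  ###  #",
        "#       #",
        "#########",
    ]
    HEADINGS = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}

    class MapEntity:
        def __init__(self, x, y, heading):
            self.x, self.y, self.heading = x, y, heading

        def move(self):
            """Handle a 'move' request: advance one cell if the way is clear."""
            dx, dy = HEADINGS[self.heading]
            if FLOOR_PLAN[self.y + dy][self.x + dx] != "#":
                self.x, self.y = self.x + dx, self.y + dy
            return self.sense()

        def sense(self):
            """Simulated reading: is there a wall directly ahead?"""
            dx, dy = HEADINGS[self.heading]
            return FLOOR_PLAN[self.y + dy][self.x + dx] == "#"

    robot = MapEntity(x=1, y=1, heading="E")
    print([robot.move() for _ in range(8)])   # True once the imagined wall is ahead

The robot's existing code would not know it is "imagining"; it just
sends move requests and gets sensor readings back, exactly as it would
in the physical room.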
> STEIN:
>The recent emphasis on social cognition only adds fuel to this fire. If
>thinking in a single brain is communally interactive, how much more so
>the distributed "intelligence" of a community! Hutchins (1996) goes so
>far as to suggest that cognition (in his case, of a naval navigation team
>guiding a warship) is necessarily distributed not just within a single
>brain but across a community of people, instruments, and culture.
>Computation as traditionally construed, the calculational
>metaphor, provides little leverage for these new theories of thinking.
>Shifting the computational metaphor opens up the possibility of
>reuniting computation with cognition. Like the electronic computer, a
>human brain is a community of interacting entities. This represents a
>fundamental change in our understanding of how we think.
Axford:
Maybe this goes even further, It's not just small communities but also
whole countries, the world and maybe even from the start of the human
race until Judgement Day. Does this mean that we will need to have lots
of interconnected systems to achieve real intelligence? Is intelligence
more a group thing rather than an individual thing? If we put a person
in a box for their entire life with no contact with the world outside
the box do they have intelligence? I'm inclined to think not, and that
it's the people around us that make us intelligent.
Mike Axford