Artificial Intelligence and the Web


Creating artificial intelligence, or an artificial mind, has been a preoccupation of computer science (and science fiction) for decades, ever since the concept was introduced by scientists such as Alan Turing and John McCarthy. The original idea was built on the assumption that the human brain functions like a machine – a body made up of small working components which individually are not conscious but together make a mind. Cartesian philosophy also provided tentative encouragement, since for Descartes being a person meant simply to be “a thing that thinks”, one “which doubts, perceives, affirms, denies, wills, does not will, that imagines also, and which feels” – an array of qualifiers which largely translates to the kind of stimulus-response behaviour that machines are perfectly capable of.

However, the belief that this could translate simply into machine code has proven rather naive, for various reasons. Humans are capable not just of storing massive amounts of information, but of retrieving and using it “with remarkable efficiency” (Brookshear, 2012: 485). Although storing data is not a huge challenge for machines, understanding it as information in a wider context (as ‘knowledge’) is. This brings us to the Web, and to a more nuanced understanding of intelligence. The Web, and even more so the Semantic Web, gives computer scientists the opportunity to work with a vast store of connected, contextualised information. Systems built on this network do not attempt to mimic the human brain; instead they produce results comparable to human abilities, achieved in a different way (a constructionist rather than a mimetic understanding of intelligence). The real and achievable aim here is not to emulate what a human would do in a given situation, but rather to solve a problem as well as possible.
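
The difference between stored data and contextualised knowledge is easiest to see in the Semantic Web’s own building blocks. The sketch below is my own illustration rather than anything from the post itself: it assumes the Python rdflib library and a made-up example.org vocabulary, and simply shows facts recorded as linked subject-predicate-object triples which can then be queried in context with SPARQL.

```python
# Minimal sketch: representing facts as linked triples and querying them.
# The rdflib library is assumed; the EX namespace and all resource names
# (AlanTuring, TuringTest, etc.) are hypothetical, chosen for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary

g = Graph()
# Each fact is a subject-predicate-object triple that links into a wider graph.
g.add((EX.AlanTuring, RDF.type, EX.ComputerScientist))
g.add((EX.AlanTuring, EX.proposed, EX.TuringTest))
g.add((EX.TuringTest, EX.concerns, Literal("machine intelligence")))

# A contextual question, not a raw data lookup:
# which ideas were proposed by computer scientists?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person ?idea WHERE {
        ?person a ex:ComputerScientist ;
                ex:proposed ?idea .
    }
""")

for person, idea in results:
    print(person, "proposed", idea)
```

The point of the sketch is only that the query exploits the connections between facts (who is a computer scientist, what they proposed) rather than matching stored strings, which is the sense in which such a store holds contextualised information rather than bare data.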
