Trust and Provenance
The notions of trust and provenance are critical to the effective operation of many open, distributed systems. Both seek to help verify that an item of data, a service or an agent is what it claims to be. This is typically achieved either by a third party providing supporting evidence for the claim or through an agent's own direct experience of the item.
In more detail, the concept of provenance is used in the context of art to denote the trusted, documented history of a work of art. Given its provenance, an object can attain an authority that allows curators, owners and collectors to understand and appreciate its value relative to other works of art. Objects without such a trusted, proven history may be treated with scepticism by those who study and view them. The same concept of provenance may also be applied to data and information generated by computer systems. Today's software architectures, in particular for distributed and open systems such as the Web or the Grid, suffer from limitations such as the lack of mechanisms for tracing results and of infrastructures for building up trusted networks. Provenance enables users to trace how a particular result has been derived by identifying the various services or transformations that produced a particular output.
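The idea of tracing a result back through the services or transformations that produced it can be sketched in code. The following is a minimal, illustrative sketch, not any particular provenance system's API: all names here (ProvenanceLog, record, lineage) are assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceLog:
    # Maps each output item to the service that produced it and the
    # inputs that service consumed. A real provenance store would also
    # record timestamps, agents, and supporting evidence.
    records: dict = field(default_factory=dict)

    def record(self, output, service, inputs):
        """Assert that `service` derived `output` from `inputs`."""
        self.records[output] = (service, list(inputs))

    def lineage(self, item):
        """Trace back through the transformations that derived `item`."""
        if item not in self.records:
            return [item]  # a raw input with no recorded derivation
        service, inputs = self.records[item]
        trace = [f"{item} <- {service}({', '.join(inputs)})"]
        for inp in inputs:
            trace.extend(self.lineage(inp))
        return trace

log = ProvenanceLog()
log.record("cleaned_data", "cleaning_service", ["raw_data"])
log.record("result", "analysis_service", ["cleaned_data"])
```

Calling `log.lineage("result")` walks the chain of recorded derivations back to the raw inputs, which is precisely the kind of traceability the paragraph above describes.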
Related to this is the area of trust and reputation. The former seeks to build up a model of an agent or a service provider based upon an observer's direct experiences of interacting with it, while the latter relies on opinions provided by third parties. In open systems in which a large number of agents belonging to different stakeholders need to interact to achieve their objectives, trust and reputation models provide a means to reduce the inherent uncertainty involved. Thus, on the one hand, agents may use their trust model to (i) choose the most trusted interaction partners and (ii) adapt their interactions according to the trustworthiness of their opponents, while, on the other hand, interaction protocols (i.e. the sequences of actions agents must follow) may be designed to ensure that agents perform in a trustworthy manner and help in achieving the goals of the system designer.
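The distinction between trust (direct experience) and reputation (third-party opinions), and their use in choosing interaction partners, can be illustrated with a small sketch. The weighting scheme and all names (trust_score, most_trusted, weight_direct) are assumptions made for illustration; real trust and reputation models are considerably more sophisticated.

```python
def trust_score(direct_outcomes, third_party_ratings, weight_direct=0.7):
    """Blend an agent's own experience with reported reputation.

    direct_outcomes: past interaction results in [0, 1] (1 = success)
    third_party_ratings: ratings in [0, 1] reported by other agents
    """
    # Fall back to a neutral prior of 0.5 when no evidence is available.
    direct = (sum(direct_outcomes) / len(direct_outcomes)
              if direct_outcomes else 0.5)
    reputation = (sum(third_party_ratings) / len(third_party_ratings)
                  if third_party_ratings else 0.5)
    # Weight direct experience more heavily than second-hand opinion.
    return weight_direct * direct + (1 - weight_direct) * reputation

def most_trusted(scored_candidates):
    """Pick the partner with the highest combined trust score."""
    return max(scored_candidates, key=lambda name: scored_candidates[name])

scores = {
    "provider_a": trust_score([1, 1, 0], [0.5, 0.9]),
    "provider_b": trust_score([], [0.4, 0.3]),  # reputation only
}
```

Here `most_trusted(scores)` realises point (i) above, while varying behaviour based on the score (e.g. demanding stricter contract terms from low-scoring partners) would realise point (ii).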