
This is now an inactive research group; its members have moved on. You can find them at their new research groups.

Trust and Provenance


The notions of trust and provenance are critical to the effective operation of many open, distributed systems. Both seek to provide a means of verifying that an item of data, a service or an agent is what it claims to be. This is typically achieved by a third party providing supporting evidence for the claim, or by an agent's own direct experience of the item.

In more detail, the concept of provenance is used in the context of art to denote the trusted, documented history of a work of art. Given its provenance, an object can attain an authority that allows curators, owners and collectors to understand and appreciate its value relative to other works. Objects without such a trusted, proven history may be treated with scepticism by those who study and view them. The same concept of provenance can be applied to data and information generated by computer systems. Today's software architectures, in particular those for distributed and open systems such as the Web or the Grid, suffer from limitations such as the lack of mechanisms to trace results and of infrastructure for building up trusted networks. Provenance enables users to trace how a particular result was derived by identifying the various services or transformations that produced it.
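
As a toy illustration of this idea (and not the group's actual provenance representation), the derivation of a result can be captured as records linking each data item to the service that produced it and the inputs it consumed, which can then be traced backwards:

# A minimal sketch, assuming a toy in-memory representation of provenance:
# each data item maps to the service that produced it and its inputs.
# (Illustrative only; not the group's actual provenance model.)

provenance = {
    "average_temperature": {"service": "aggregator", "inputs": ["sensor_a", "sensor_b"]},
    "sensor_a": {"service": "calibration", "inputs": ["raw_a"]},
    "sensor_b": {"service": "calibration", "inputs": ["raw_b"]},
}

def trace(item, depth=0):
    """Print the chain of services and inputs that led to `item`."""
    record = provenance.get(item)
    if record is None:
        print("  " * depth + f"{item} (primary data, no recorded derivation)")
        return
    print("  " * depth + f"{item} <- {record['service']}({', '.join(record['inputs'])})")
    for source in record["inputs"]:
        trace(source, depth + 1)

trace("average_temperature")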

Related to this is the area of trust and reputation. The former seeks to build up a model of an agent or a service provider based upon an observer's direct experiences of interacting with it, while the latter relies on opinions provided by third parties. In open systems, in which a large number of agents belonging to different stakeholders need to interact to achieve their objectives, trust and reputation models provide a means to reduce the inherent uncertainty involved. Thus, on the one hand, agents may use their trust model to (i) choose the most trusted interaction partners and (ii) adapt their interactions according to the trustworthiness of their counterparts, while, on the other hand, interaction protocols (i.e. the sequences of actions agents have to follow) may be designed to ensure that agents behave in a trustworthy manner and help achieve the goals of the system designer.
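
As a rough sketch of the first of these points, an agent might blend its own interaction history with third-party opinions into a single score and select the most trusted provider; the scores and the weighting below are illustrative assumptions rather than a particular model developed by the group:

# A minimal sketch: combine direct experience with third-party reputation
# to rank interaction partners. Scores in [0, 1] and the 70/30 weighting
# are illustrative assumptions, not a specific model used by the group.

def trust_score(direct_experiences, witness_opinions, w_direct=0.7):
    """Weighted blend of own observations and opinions reported by others."""
    direct = sum(direct_experiences) / len(direct_experiences) if direct_experiences else 0.5
    witness = sum(witness_opinions) / len(witness_opinions) if witness_opinions else 0.5
    return w_direct * direct + (1 - w_direct) * witness

providers = {
    # provider: (outcomes of own past interactions, opinions gathered from others)
    "provider_a": ([1, 1, 0, 1], [0.9, 0.8]),
    "provider_b": ([0, 1, 0],    [0.95, 0.9, 0.85]),
}

best = max(providers, key=lambda p: trust_score(*providers[p]))
print(best, {p: round(trust_score(*v), 2) for p, v in providers.items()})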

Research Areas

Provenance Models
This programme of research aims at defining provenance, its semantics, and its representation in computer systems. This requires the definition of data models and associated semantics.
Provenance Architecture
This activity focuses on the realisation of the aforementioned provenance model in a computer system: this includes architecture specification, protocol definition, implementation and testing. Issues of security and scalability are particularly relevant to making a provenance system robust and trustworthy in industrial and other applications. Furthermore, a provenance architecture needs to be accompanied by a methodology that helps programmers transform their applications into so-called "provenance-aware" applications. The IAM group is a leader in this activity and has produced an architecture for provenance systems. It is focusing on its realisation in the context of Web services, and on its deployment in concrete applications.
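
What a "provenance-aware" application can look like is sketched below; the store, the wrapper and the record fields are hypothetical stand-ins for an actual provenance architecture and its recording protocol:

# A minimal sketch of a "provenance-aware" service call: documentation of
# each invocation is submitted to a provenance store before the result is
# returned. The store, wrapper and record fields are hypothetical stand-ins
# for a real provenance architecture and its recording protocol.
import functools
import time

PROVENANCE_STORE = []  # stand-in for a remote, append-only provenance store

def provenance_aware(actor):
    """Wrap a function so every invocation is documented in the store."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            PROVENANCE_STORE.append({
                "actor": actor,
                "process": func.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "recorded_at": time.time(),
            })
            return result
        return wrapper
    return decorator

@provenance_aware(actor="anonymisation-service")
def anonymise(record):
    return {k: v for k, v in record.items() if k != "patient_name"}

anonymise({"patient_name": "J. Smith", "blood_pressure": "120/80"})
print(PROVENANCE_STORE)
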
Provenance Reasoning
Once documentation of execution has been recorded, reasoning can be applied to the representation of provenance in order to verify that the process that led to some data satisfies certain properties. Semantic Web technologies are used to that end.
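
For example, if recorded documentation is exported to RDF, a property such as "every released result was produced by the anonymisation service" can be checked with a SPARQL query; the rdflib usage and the ex: vocabulary below are illustrative assumptions, not the group's actual ontology or toolchain:

# A minimal sketch of reasoning over recorded provenance with Semantic Web
# tooling (rdflib is assumed to be installed); the ex: vocabulary is
# illustrative, not the group's actual provenance ontology.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/provenance#")

g = Graph()
g.add((EX.result1, EX.wasDerivedFrom, EX.raw1))
g.add((EX.result1, EX.generatedBy, EX.anonymiser))
g.add((EX.result2, EX.wasDerivedFrom, EX.raw2))  # note: no generatedBy triple

# Find results whose provenance does not show the anonymisation step.
query = """
    PREFIX ex: <http://example.org/provenance#>
    SELECT ?r WHERE {
        ?r ex:wasDerivedFrom ?src .
        FILTER NOT EXISTS { ?r ex:generatedBy ex:anonymiser }
    }
"""
violations = [str(row.r) for row in g.query(query)]
print("all results anonymised" if not violations else violations)
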
Provenance Applications
Applications of provenance are numerous, and the IAM group investigates its use in the medical domain (tracking patient data), bioinformatics (e.g., the pharmaceutical industry), physics (the ATLAS experiment at the Large Hadron Collider), chemistry (electronic logbooks) and aerospace engineering.
Trust and Reputation Models
The group is actively involved in a number of research initiatives to develop new trust models, reputation mechanisms, and interaction protocols that ensure the system and the agents can cope with uncertainty. These models and protocols draw on decision-theoretic and game-theoretic techniques. In particular, the use of Bayesian networks and fuzzy reasoning techniques to model the reliability of agents, as well as the use of mechanism design to develop trusted interaction protocols, is at the forefront of our research.
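
To give a flavour of the Bayesian side of this work, the sketch below estimates a provider's reliability from counts of successful and failed interactions using a beta distribution with a uniform prior; it is a generic illustration, not one of the group's specific models:

# A minimal sketch of a Bayesian (beta-distribution) reliability estimate:
# starting from a uniform prior, each successful or failed interaction
# updates the expected probability that the provider behaves well.
# Generic illustration only, not the group's specific trust models.

class BetaReputation:
    def __init__(self):
        self.successes = 0
        self.failures = 0

    def observe(self, success):
        if success:
            self.successes += 1
        else:
            self.failures += 1

    def expected_reliability(self):
        # Mean of Beta(successes + 1, failures + 1), i.e. a uniform prior.
        return (self.successes + 1) / (self.successes + self.failures + 2)

rep = BetaReputation()
for outcome in [True, True, False, True]:
    rep.observe(outcome)
print(round(rep.expected_reliability(), 3))  # 0.667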

Projects