DSSE
This is now an inactive research group; its members have moved on to new research groups.
Vacancies


PhD Topics 

We have the following open PhD projects. We also welcome other PhD proposals in our areas of research. Naturally, we welcome applications from within as well as from outside the UK.

If you are interested in any of the projects, please send Michael an e-mail. He will forward your request to the potential supervisor of the PhD project or answer your questions directly.

For application details, see the ECS Postgraduate Admissions pages.

UML-B – Formal modelling with UML

The emergence of the UML as a de-facto standard for object-oriented modelling has been mirrored by the success of Abrial’s B method as a practically useful formal modelling technique. The two notations have much to offer each other. The UML provides an accessible visualisation of models, facilitating communication of ideas, but lacks a precise formal semantics. B, on the other hand, has the precision to support animation and rigorous verification, but many software engineers find the notation difficult to learn, visualise and communicate. We have developed a profile of the UML called UML-B that provides a UML-based formal modelling notation supported by an action and constraint language derived from the B method. Tool support for validation and verification of UML-B models is provided by translation to B using our U2B tool. This enables existing B analysis tools to be used. An important challenge is the integration of our tool with other analysis tools for B. Another important challenge is extending the range of patterns and idioms supported by UML-B and U2B to help increase productivity when using a UML-B approach.
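
As a purely illustrative sketch (not the actual U2B tool), the following Python fragment shows the flavour of such a translation: it renders a simplified description of a UML class as the skeleton of a B machine. The class description format, the Account example and the crude zero-initialisation are all invented for illustration.

    # Toy sketch of a U2B-style translation; NOT the actual U2B tool.
    def uml_class_to_b(name, attributes, operations):
        """Render a minimal B machine from a simplified UML class.

        attributes: dict of attribute name -> B type (e.g. "NAT")
        operations: dict of operation signature -> B substitution text
        """
        lines = ["MACHINE " + name]
        lines.append("VARIABLES " + ", ".join(attributes))
        lines.append("INVARIANT " +
                     " & ".join(a + " : " + t for a, t in attributes.items()))
        lines.append("INITIALISATION " +
                     " || ".join(a + " := 0" for a in attributes))  # crude default
        lines.append("OPERATIONS")
        lines.append(" ;\n".join("  " + op + " = " + body
                                 for op, body in operations.items()))
        lines.append("END")
        return "\n".join(lines)

    print(uml_class_to_b(
        "Account",
        {"balance": "NAT"},
        {"deposit(amount)":
         "PRE amount : NAT1 THEN balance := balance + amount END"}))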

Refinement of Distributed Services

The goal of this line of work is the development of tools and techniques for the formal development of distributed services. The approach uses models at different levels of abstraction related via refinement. The abstract models are close to the end-to-end requirements on the service, while the concrete models represent designs and implementations. A typical formal development project may involve several levels of abstraction. Refinement ensures that the design or implementation is correct with respect to the abstract model. The models are written in the B notation. An important challenge being addressed is the development of tools to support the construction of models and the verification of refinements. Another important challenge is the development of design patterns for modelling and refinement of distributed services. The use of design patterns enables greater reuse of modelling and refinement knowledge and proofs between projects. The tools and techniques are being applied to a range of services including web services and distributed information systems in collaboration with industry and other academic partners.
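
For reference, one standard formulation of the B refinement proof obligation (after Abrial) makes this precise: given abstract invariant I, a gluing invariant J relating abstract and concrete state, an abstract operation S and its concrete counterpart T,

    I \wedge J \wedge \mathrm{trm}(S) \;\Rightarrow\; \mathrm{trm}(T) \wedge [T]\,\neg\,[S]\,\neg\, J

Informally: wherever the abstract operation is guaranteed to terminate, the concrete one must terminate too, and every outcome of T must be matched by some outcome of S that re-establishes J.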

Coalgebraic Modelling and Refinement

Within the coalgebraic approach to modelling systems, the concept of bisimulation provides a generic and canonical notion of system equivalence, while the weaker notion of simulation can be tailored to capture various notions of system refinement. This work aims to study the expressive power of coalgebraic simulations in relation to notions of refinement of interest in formal modelling and verification, and to derive coalgebraic/coinductive techniques for reasoning about refinement.
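
Concretely, for labelled transition systems these notions read as follows (a standard definition, stated here for orientation): a relation R is a simulation if

    s \mathrel{R} t \;\wedge\; s \xrightarrow{a} s' \quad\Longrightarrow\quad \exists t'.\; t \xrightarrow{a} t' \;\wedge\; s' \mathrel{R} t'

and a bisimulation if both R and its converse R^{-1} are simulations. The coalgebraic view generalises this pattern uniformly from transition systems to other system types.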

Visual formal models

In the context of building models of systems, many assume that formal implies difficult and obscure, but this need not be the case. This work is concerned with finding visual representations which, in combination with suitable tool support, can allow "non formal method savvy" users to build and analyse models which have a formal underpinning.

Formal methods and refinement for generic and product-line requirements engineering

Software reuse through component-based development addresses the need for increased productivity. More recently, software product line engineering has addressed the problem of concurrent development and support of families, or product lines, of software products where significant commonality exists. We have developed a Requirements Manager database tool for the management and validation (constraint-checking) of product instance data, interacting closely with UML-B. This theme aims, together with the UML-B theme above, to develop specification and formal verification capabilities across the product line as well as for each specific product instance.
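
A back-of-the-envelope Python sketch of the kind of constraint check such a tool might run over product-instance data (the feature names and rule format are invented for illustration):

    def check_instance(selected, requires, excludes):
        """Validate one product instance against product-line constraints.

        selected: set of chosen features
        requires: (f, g) pairs meaning "f requires g"
        excludes: (f, g) pairs meaning "f excludes g"
        """
        errors = []
        for f, g in requires:
            if f in selected and g not in selected:
                errors.append(f + " requires " + g)
        for f, g in excludes:
            if f in selected and g in selected:
                errors.append(f + " excludes " + g)
        return errors

    print(check_instance({"gps", "basic_display"},
                         requires=[("gps", "high_res_display")],
                         excludes=[("basic_display", "high_res_display")]))
    # -> ['gps requires high_res_display']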

Model Checking for B

In collaboration with researchers at the University of Düsseldorf, we have developed the ProB model checker for the B Method. This powerful tool can automatically find inconsistencies in B specifications and designs. If unchecked, such inconsistencies would lead to design and implementation errors in systems under development. An important challenge being addressed is improving the performance of the tool through the use of optimisation techniques such as symmetry and partial order reduction. Other important work is the integration of the ProB tool with other analysis tools for B.
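
To give the flavour of symmetry reduction (in a vastly simplified setting; ProB itself is far more sophisticated), the toy Python checker below explores states that are tuples of interchangeable processes' local states, and sorts each tuple so that each symmetry class is visited only once:

    from collections import deque

    def reachable(initial, successors, canon=lambda s: tuple(sorted(s))):
        """Explicit-state reachability with one canonical form per symmetry class."""
        seen = {canon(initial)}
        frontier = deque([initial])
        while frontier:
            state = frontier.popleft()
            for nxt in successors(state):
                c = canon(nxt)
                if c not in seen:
                    seen.add(c)
                    frontier.append(nxt)
        return seen

    # Two symmetric processes, each stepping 0 -> 1 -> 2 independently.
    def succ(state):
        for i, local in enumerate(state):
            if local < 2:
                yield state[:i] + (local + 1,) + state[i + 1:]

    print(len(reachable((0, 0), succ)))  # 6 canonical states instead of 9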

A Modular Approach to Model-Based Verification

Existing methodologies and tools for model-based verification each focus on a particular semantic model of computational systems, and do not exploit relationships between different such models. In particular, no support is currently available, either on a theoretical level or in terms of tools, for reusing existing verification methodologies across dependencies between different semantic models, or for combining verification methodologies that exist for simpler semantic models in order to derive similar methodologies for more complex models. Our research attempts to fill this gap by developing the theoretical underpinnings of a modular approach to model-based verification. This requires a unified account of models and logics currently used in model checking, the provision of modular techniques for deriving new logics and associated verification methodologies, and of abstraction-based techniques that allow the reuse of existing methodologies and tools.

Formal Software Safety Certification

Software certification demonstrates the reliability, safety, or security of software systems in such a way that it can be checked by an independent authority with minimal trust in the techniques and tools used in the certification process itself. It is based on explicit certificates, which contain all the information necessary for an independent assessment of the demonstrated properties. Certificates thus combine different techniques and forms of evidence (e.g., fault trees, safety cases, and formal proofs) and link them to the details of the underlying software. We are particularly interested in formal approaches to software safety certification, similar to proof-carrying code. Here, our focus is on the certification of automatically generated code. We investigate how knowledge that is available at code generation time or can be derived from properties of the generated code can be used to improve the certification process.
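
As a toy illustration of the proof-carrying-code idea (everything here is invented for the example): the producer ships a small state machine together with a certificate, a claimed inductive invariant, and a small trusted checker re-verifies that claim instead of trusting the producer's tools:

    def check_certificate(init, step, safe, invariant):
        """Return True iff `invariant` really certifies safety."""
        inv = set(invariant)
        if not set(init) <= inv:                  # holds initially
            return False
        if not inv <= set(safe):                  # implies safety
            return False
        return all(t in inv for s in inv for t in step(s))   # inductive

    # A counter that wraps at 4; the safety property: it never reaches 7.
    step = lambda s: [(s + 1) % 4] if s < 4 else [min(s + 1, 7)]
    cert = {0, 1, 2, 3}                           # shipped alongside the "code"
    print(check_certificate([0], step, range(7), cert))       # True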

Coalgebraic Models and Logics

This line of work studies semantic models, calculi and specification logics for various classes of dynamical systems, including concurrent, probabilistic, and mobile systems, from a coalgebraic perspective. The generality of coalgebraic models and their close connection to modal logic make such an approach highly promising. In particular, we hope this study will provide coalgebraic models that capture spatiality and mobility aspects of global computing systems, together with canonical specification logics for these models. The resulting models could subsequently be used to provide operational semantics for various existing calculi for mobility.
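
In this setting a system is simply a coalgebra, i.e. a map

    c : X \to F(X)

for a functor F capturing the system type: for instance F(X) = \mathcal{P}(A \times X) yields labelled transition systems, while F(X) = \mathcal{D}(X) (distributions over X) yields Markov chains. Bisimulation and its logics are then derived uniformly from F rather than re-invented for each class of systems.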

Structural Logics

The goal of this line of investigation is to describe and reason about structured, distributed, dynamic resources. A case in point is the recent flourishing of `spatial' and `separation' logics, which use modal operators to inspect the structure of their object model, in perfect analogy with the way `temporal' logics use modalities to express time-dependent and behavioural properties. A spatial logic may for instance provide a way to express that a particular property holds at a specific site in a distributed system, or that it holds everywhere in the system. The single most important feature of such logics is their ability to `separate' their objects into independent components, which affords great expressiveness and powerful proof techniques. Our research will focus on developing logical frameworks based on this feature for applications to the analysis of resource usage and distribution in concurrent computing systems (both natural and artificial), and for underpinning business processes (again, both natural and artificial).
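
The separating conjunction underlying this ability is standard and worth stating: a resource (say a heap) h satisfies P * Q exactly when it splits into two disjoint parts satisfying P and Q respectively,

    h \models P * Q \iff \exists h_1, h_2.\; h = h_1 \uplus h_2 \;\wedge\; h_1 \models P \;\wedge\; h_2 \models Q

where \uplus denotes union of domain-disjoint resources; it is this disjointness that makes local, compositional reasoning sound.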

Observational and Operational Equivalences for Computing Systems

Ever since the lambda calculus, the natural notion of equivalence for computing systems has been observational: behavioural differences between systems are detected experimentally, by analysing them in arbitrary computational contexts. On the other hand, some of the most important operational techniques, developed through the past thirty years of research in concurrency, involve labelled transition systems and the accompanying notions of operational preorders and equivalences for reasoning about processes. Bisimulation is chief among these. The aim of this research is to bridge these two views by providing a systematic way to associate an appropriate bisimulation to each computational framework whose operations can be described in terms of contexts and reactions. Besides the practical interest vested in bisimulation-based techniques, this research has foundational merits too. One of them is the aim of theories of general scope, `meta-theories', that can often be transliterated to specific models. Among the future directions we plan to focus on theories for stochastic systems and systems biology.
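
The observational side of this bridge is the classical notion of contextual equivalence: two terms are equivalent when no program context can tell them apart,

    M \cong N \iff \forall C.\; C[M] \Downarrow \;\Leftrightarrow\; C[N] \Downarrow

where C ranges over program contexts and \Downarrow denotes the chosen basic observation (typically termination). Bisimulation then serves as a sound, and ideally complete, proof technique for \cong.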

Foundations of Autonomous Systems

Autonomy is fundamental to many recent visions of future computer systems, including the ambient intelligence vision of computers working proactively to support their human users and the autonomic computing vision of self-managing computers. Autonomous systems exhibit context-dependent behaviour to fulfil specific goals and are characterised by their degree of independence in making decisions and adapting to unforeseen environmental conditions. Our long-term objective is to lay the foundations for the design of future large-scale autonomous systems that verifiably satisfy specified goals. A particularly fascinating perspective is the use of autonomy as a design paradigm for computing systems that must exhibit a high degree of adaptability during their lifetimes, e.g., to address the challenges of security, reliability, scale, and resource management. Shorter-term objectives include the development of calculi and models for context-awareness and resource negotiation for mobile systems.

Reliable Model-Based Code Generation

Automatic code generation from high-level specifications (also called program synthesis) cuts across the top of the classic V-model of software development and has significant potential to improve the software development process: benefits include higher productivity, reduced turn-around times, increased portability, and elimination of manual coding errors. It is currently enjoying renewed interest due to the industrial model-driven architecture initiative. However, building or modifying reliable code generators for non-trivial application domains is a knowledge-intensive and time-consuming task. With our research, we try to simplify this expensive task. We investigate methods to formalise the underlying generative domain knowledge, to specify transformations and to prove them safe or even correct, to adapt the generators to different computational environments, and to control the code generation process.
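
A minimal sketch of the template-based end of this spectrum, with an invented model format and template (industrial generators are of course far richer):

    def generate_record(name, fields):
        """Emit a Python class with one attribute per model field."""
        args = ", ".join(fields)
        assigns = "\n".join("        self.{0} = {0}".format(f) for f in fields)
        return ("class {0}:\n"
                "    def __init__(self, {1}):\n"
                "{2}\n").format(name, args, assigns)

    source = generate_record("Sensor", ["ident", "range_m", "rate_hz"])
    print(source)
    exec(source)                        # "deploy" the generated code
    s = Sensor("s1", 30, 10)
    print(s.ident, s.range_m, s.rate_hz)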

Scaling up formal development

This research theme addresses two key components of success in a scalable software development method: integration with the requirements engineering process, and compositionality. It is essential that a B-based method for correct software construction integrate seamlessly with the prior (and sometimes, overlapping) gathering of system requirements. We are undertaking methodological investigations into the integration of established requirements engineering methods, such as van Lamsweerde’s KAOS, and Jackson’s Problem Frames, with B and UML-B development. For demanding requirements situations we extend the modelling expressivity of the refinement method using the recent retrenchment approach. For scalability and compositionality, we are investigating the development of existing compositional mechanisms in B with feature-oriented requirements specification.

Global Computing

The internet has meant that distributed computing has evolved from within-the-enterprise to between-enterprises, and very general distributed solutions are now required to work between enterprises across the globe. In particular, Web Services offer the prospect of developing globally distributed systems in which participants join and leave the system in an ad-hoc way. The Grid is an early example of this form of flexible distributed system, where participants can make use of computers and databases in remote locations to carry out extensive computations for which they do not have the right resources locally. Research in this area includes interoperability between heterogeneous systems, security, architectures and dependability. Much of this research is carried out in conjunction with the Open Middleware Infrastructure Institute (OMII), a major project in the School developing its own distributed computing infrastructure (so-called middleware).

Grid middleware for Science and Engineering

There has been much investment on both sides of the Atlantic in developing Grid middleware in support of Science and Engineering. This promises to offer every practitioner a flexible environment, in the office or on the go, in which innovation can proceed in an uninterrupted flow using data, information and knowledge from multiple sources, processed by unlimited on-demand computational resources. We are very proud to be collaborating in the Microsoft Institute for High Performance Computing, based at Southampton. With the Microsoft Institute, and in collaboration with staff at IT Innovation, we are developing new frameworks and technologies for secure, effective HPC on mainstream computing platforms. Recent work within DSSE has included the use of Semantic Web technologies, including RDF Named Graphs, to build secure and reliable provenance repositories which can be queried using Description Logics. We are working on ways to improve the reliability of composition of distributed Grid services, for example by associating temporal logic and other semantic information with the service descriptions. Throughout this work, our focus is on developing tools and technologies that will be used by real scientists with a minimum of IT training; we try not to flaunt the theoretical underpinnings too publicly.
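
A plain-Python stand-in for the named-graph idea (deliberately avoiding any particular RDF library; all names are illustrative): each result is recorded as triples under a graph name, and provenance statements are then made about that name:

    from collections import defaultdict

    store = defaultdict(set)            # graph name -> set of triples

    def assert_in(graph, s, p, o):
        store[graph].add((s, p, o))

    assert_in("run:42", "result:7", "value", "3.14")
    assert_in("meta", "run:42", "generatedBy", "service:simulator")
    assert_in("meta", "run:42", "signedBy", "key:alice")

    # Which graphs carry results vouched for by alice's key?
    trusted = {s for (s, p, o) in store["meta"]
               if p == "signedBy" and o == "key:alice"}
    for g in trusted:
        print(g, "->", store[g])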

Foundations of Ubiquitous Computing

The scale and complexity of ubiquitous computing systems make them a formidable object of scientific investigation and experimentation. Because of their intrinsically heterogeneous nature, understanding and analysing their structure and dynamic adaptation will be a challenge: as they become more and more prominent in our lives, ubiquitous systems must provide assurance in terms of dependability, predictability, and much besides. The overall objective of this research is to develop rigorous models to underpin such assurance and analysis, which will form the foundation for the design and engineering of ubiquitous systems and the experience of their users. From our viewpoint, the fundamental questions to address include: how to organise the elements of a system to allow their interaction, context-awareness, and self-reconfiguration; how to understand the way information flows inside a system; and how to gain confidence in system performance. Our immediate activity will focus on traits endemic in ubiquitous computing, such as stochastic behaviour and uncertainty, space and mobility, and trust and security.

M-grid

This work is concerned with the development and exploitation of M-grid, a lightweight computational grid that uses Java applet technologies in place of the usual heavyweight Grid middleware, permitting it to be set up and used with an absolute minimum of formality. The simplicity of M-grid potentially makes it attractive for initial evaluation of work which may later require the full power of a traditional Grid solution, and for teaching.
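
The protocol at the heart of such a system is a simple master/worker pattern; the sketch below uses Python threads as stand-ins for remote applets (the task and result contents are invented):

    import threading, queue

    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            n = tasks.get()
            if n is None:               # poison pill: shut down
                return
            results.put((n, n * n))     # stand-in for real computation
            tasks.task_done()

    for n in range(10):
        tasks.put(n)
    threads = [threading.Thread(target=worker) for _ in range(3)]
    for t in threads:
        t.start()
    tasks.join()                        # wait for all answers
    for _ in threads:
        tasks.put(None)
    print(sorted(results.get() for _ in range(10)))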

Reasoning about distributed database systems

In distributed database systems, data is replicated on different sites in order to improve system performance and availability. Ensuring various degrees of consistency between replicas requires the use of various mechanisms for controlling and propagating changes. The design of these mechanisms is usually complex and difficult to get right. In this line of work we look at the use of a formal refinement-based approach to the design of distributed database systems. The emphasis is on developing an abstract understanding of distributed database transactions and then reasoning about the relationship between the abstract view and the complex mechanisms found in distributed database designs.

Managing Data (In)consistencies

With the continued proliferation of computer systems, it is now common for data relating to the same artefacts of the real world to be held in multiple locations; when the data from two or more of these sources are compared, discrepancies and inconsistencies appear. These may arise from errors which need to be corrected, but often they merely reflect differing outlooks, assumptions or priorities of the systems concerned. Recognising that it is not feasible to eliminate such inconsistencies, this research seeks techniques which permit systems to manage these problems and make acceptable progress notwithstanding having to operate in an environment where they cannot avoid problematic data.
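
A tiny Python illustration of this stance (the data is invented): rather than halting on disagreement, compare two sources describing the same artefacts and classify the discrepancies for downstream handling:

    def reconcile(a, b):
        report = {"agree": {}, "conflict": {}, "only_a": {}, "only_b": {}}
        for k in a.keys() | b.keys():
            if k not in b:
                report["only_a"][k] = a[k]
            elif k not in a:
                report["only_b"][k] = b[k]
            elif a[k] == b[k]:
                report["agree"][k] = a[k]
            else:
                report["conflict"][k] = (a[k], b[k])
        return report

    crm     = {"acme": "London", "globex": "Leeds"}
    billing = {"acme": "London", "globex": "Luton", "initech": "York"}
    print(reconcile(crm, billing)["conflict"])   # {'globex': ('Leeds', 'Luton')}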

Trust and Reputation

Trust is a fundamental concept in human behaviour, and has enabled collaboration between humans and organisations for millennia. Our goal here is to transfer such forms of collaboration to modern computing scenarios. Trust between autonomous agents will be an important ingredient in open-ended systems, where traditional security mechanisms have obvious limitations. Certain agents will be responsible for allowing others to cross boundaries, or for allocating resources to them, and such safety-critical decisions will be made based on trust policies and trust management systems. A discipline of trust between agents will only be effective if it is rigorously defined. Logics and languages have been proposed for expressing such disciplines, in terms of notions such as belief and authority, together with reputation systems that record, aggregate and distribute information about past interactions with the agents in question. A fundamental element here is the development of effective model checking techniques for such logics, so as to empower agents to query their trust policies efficiently. We are developing formal calculi to study the dynamics of trust and the evolution of webs of trust. Short-term applications of this work include online auction systems (à la eBay) and automated negotiation of third-party resource usage between computing agents.
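
One simple, well-known aggregation scheme of this kind is the beta reputation model (in the style of Jøsang's beta reputation system): trust in an agent is taken to be the expected probability of a good next interaction, given p positive and n negative past outcomes:

    def beta_reputation(p, n):
        """Expected value of Beta(p + 1, n + 1): (p + 1) / (p + n + 2)."""
        return (p + 1) / (p + n + 2)

    print(beta_reputation(0, 0))   # 0.5   -- no evidence, neutral trust
    print(beta_reputation(8, 1))   # ~0.82 -- mostly good outcomes
    print(beta_reputation(1, 8))   # ~0.18 -- mostly bad outcomes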

Software Certificate Management Systems

A software certificate management system provides the infrastructure to create, maintain, and analyse software certificates. It combines functionalities of a database (e.g., storing and retrieving certificates) and a make-tool (e.g., incremental re-certification). It can at any time provide current information about the certification status of each component in the system, check whether certificates have been audited, compute which certificates remain valid after a system modification, or even automatically start an incremental re-certification. It can also maintain links between system artefacts (e.g., design documents, engineering data sets, or programs) and different varieties of certificates, check the validity of certificates, provide access to explicit audit trails, enable browsing of certification histories, and enforce system-wide certification and release policies. Our research focuses on the fundamentals of software certificate management systems (which are still an emerging concept), and their integration with automatic code generators.
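
The make-tool flavour can be sketched in a few lines of Python (artefact names and contents are invented): a certificate records a digest of the artefacts it covers, so a change to any of them flags exactly the affected certificates for re-certification:

    import hashlib

    def digest(artefacts, deps):
        h = hashlib.sha256()
        for name in sorted(deps):
            h.update(name.encode() + b"\0" + artefacts[name].encode() + b"\0")
        return h.hexdigest()

    artefacts = {"design.doc": "v1", "module.c": "int f(void){return 1;}"}
    cert_deps = {"safety-cert-1": ("module.c", "design.doc")}
    issued = {c: digest(artefacts, deps) for c, deps in cert_deps.items()}

    artefacts["module.c"] = "int f(void){return 2;}"   # a system modification

    for cert, deps in cert_deps.items():
        ok = digest(artefacts, deps) == issued[cert]
        print(cert, "valid" if ok else "needs re-certification")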


Learning Systems Engineering and Serious Games

We study systems that support all the stages of learning and teaching, resulting in blended methodologies that not only help with existing practice but can give rise to novel practices. We are investigating Architectures for Learning Systems and for Assessment, Design for Learning, Competency modelling, m-learning, and Serious Games.

Empirical Software Engineering

We investigate the use of empirical software engineering techniques in organisations, specifically looking at procedures and processes around the use of Agile, iterative and evolutionary techniques. As well as investigating more traditional methods of software engineering, we also look at concepts such as co-design and co-deployment.

ICT4D

Information and communications technologies for development: engineering systems to operate in less economically developed countries, bridging the digital divide in order to help communities change. This is a partnership between communities, government, industry and academia, with partners in the UK and overseas. The work is carried out jointly with Web Science, particularly when investigating digital literacy and the impact of the mobile web.

Cloud, Grid and Distributed Computing

Cloud computing is an emerging business model that delivers computing services over the Internet in an elastic, self-service, self-managed, cost-effective manner with guaranteed Quality of Service (QoS). Grid computing applies large-scale computation and data resources to solve large-scale problems.

Industrial and Commercial Systems

We undertake applied research into systems for industry, commerce, and medicine. This includes knowledge management and the Semantic Web, enabling people to find relevant and similar information effectively, and virtual research environments for clinicians to collect, collate, analyse and publish trial data within a trial governance process.


© School of Electronics and Computer Science of the University of Southampton