Angus Roberts | Manchester Metadata Group |
Robert Stevens | Manchester Metadata Group |
Mark Greenwood | Manchester Process Group |
Chris Wroe | Manchester Metadata Group |
Carole Goble (absent for some) | Manchester Metadata Group |
Jeremy Rogers | Manchester Metadata Group |
Phil Lord | Manchester Metadata Group |
Barry Tao | Southampton Knowledge Group |
Dean Hennessey | Epistemics |
Tim Clarke | Epistemics |
Paul Smart | Epistemics |
What unites and what divides us?
Some of the differences are:
We have perhaps underplayed the relationship between our ontologies and applications / implementations; we are not really application independent, and are tied to the required applications to some degree
Constraints on our projects are often needed to ensure that our clients finance them, so we produce standard models and documentation in a very methodical way. We are very deliverable-centred, producing something for the client to understand
It's like software: I produce standard models and documentation when writing software.
There seems to be a parallel between what you do and software engineering methodologies, whereas our ontologies are built with a much less formal methodology.
This is because ontology building is an immature discipline, so the methodology is not yet so structured and well defined
And much of what we build is research toys. Our deliverables are journal papers
Many of our projects are really feasibility studies. As soon as we move over to a more production setting, as with the drug ontology, then deliverables and documentation do increase. Also, loose ontology building is not specific to this field.
Yes - documentation and provenance are often poor
The methodology's idiosyncrasies are not important for Geodise. What's important is a clear way of eliciting and communicating knowledge
There is an issue of how we can co-opt our respective technologies, in particular applying models from Knowledge Acquisition to building ontologies
There seems to be no reason why our knowledge models could not be delivered in DAML+Oil.
Does the same person carry out acquisition and modelling?
Yes - Epistemics is not large enough for this degree of specialisation, but there is some division of labour
But how would you square your asserted hierarchies with our descriptive approach for delivery in DAML+Oil?
There are lots of ways of representing knowledge - the key is how you map that on to a modelling formalism - it might take more work, but it could be done
Doesn't the expressiveness of the formalism affect your Knowledge Acquisition?
No, Knowledge Acquisition techniques and knowledge acquired are independent of the final formalism and its expressiveness
How do you store acquired knowledge prior to modelling?
Audio tapes and videos; and if using contrived Knowledge Acquisition techniques such as those in PC-Pack (i.e. psychology techniques), then the results are stored in PC-Pack's own format. We use all of this for further analysis and modelling. Based on this analysis and the requirements, we elicit more knowledge, i.e. it is an iterative feedback process
Doesn't this mean that your Knowledge Acquisition is limited?
Surely the formalism here is natural language, so it is expressive
Is there ever any direct generation of knowledge models from PC-Pack Knowledge Acquisition?
Yes - we are working on this, and looking at putting semi-formal PC-Pack Knowledge Acquisition into CML, and perhaps implementing that directly.
There seems to be no reason why you shouldn't transform Knowledge Acquisition to DAML+Oil from PC-Pack, to give you the foundation of a model
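As a rough illustration of that transformation - a minimal Python sketch, assuming (hypothetically) that a PC-Pack concept ladder can be exported as simple parent-child pairs; the ladder content and all names are illustrative, not PC-Pack's actual format:

    # A minimal sketch: turning an elicited concept ladder into DAML+Oil
    # class axioms. Ladder data and names are hypothetical.
    ladder = {
        "Tablet": "DrugForm",
        "Capsule": "DrugForm",
        "DrugForm": None,  # root of this fragment
    }

    def to_daml(ladder):
        """Emit one daml:Class per concept, asserting the elicited
        parent as an rdfs:subClassOf link."""
        lines = []
        for concept, parent in ladder.items():
            lines.append('<daml:Class rdf:ID="%s">' % concept)
            if parent is not None:
                lines.append('  <rdfs:subClassOf rdf:resource="#%s"/>' % parent)
            lines.append('</daml:Class>')
        return "\n".join(lines)

    print(to_daml(ladder))

The output is only an asserted hierarchy, but that is exactly the foundation of a model onto which descriptions could then be layered.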
Doesn't this mean you are restricting how you acquire by the final formalism?
No - as long as you don't have the conversion process in mind
What do you think of the reasoner? Did you get to the stage of inferring a subsumption relationship in the exercises?
Yes - but they were all obvious errors I had made
But were you thinking "that's not what I meant, this is a mistake in my description", and then going back to fix it?
Yes
I can see how you might use reasoning in the techniques Epistemics use, except perhaps our modelling is a bit broader - we would model what you see as simple typed attributes as concepts in their own right - such as colour
Yes, often colour would be a simple attribute, but that would depend on the final reasoning that you wanted in the final implementation
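To make the contrast concrete - a Description Logic sketch with hypothetical names, showing colour as a plain attribute value versus colour as a concept in its own right:

    \[ \textit{Car} \sqsubseteq \exists\,\textit{colour}.\{\text{``red''}\} \qquad \text{(simple typed attribute)} \]
    \[ \textit{Red} \sqsubseteq \textit{Colour}, \quad \textit{Car} \sqsubseteq \exists\,\textit{hasColour}.\textit{Red} \qquad \text{(colour as a concept)} \]

Only the second style gives the reasoner colour descriptions to classify, which is why the choice depends on the reasoning wanted in the final implementation.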
What more is there in DAML+Oil over and above what we have seen already?
You've seen most of it in the workshop
The reasoner associated with DAML+Oil is a good idea. It made us think how we might do meta-analysis of Knowledge Acquisition itself, and build support for Knowledge Acquisition into a tool. The basis of this was watching Clive build a small model, and the system making inferences about relationships between the concepts he had put in. As a knowledge engineer, you miss things in complex domains - the reasoner helps you spot them
I'm interested in Knowledge Acquisition in CommonKADS and its use with DAML+Oil. We use the reasoner to help build and model ontologies. It would be interesting to see how different it would be over time with an Epistemics approach. The reasoner becomes very important for helping you find missing things. You end up with a different style of modelling, collecting classes and properties into the ontology, building structures and then getting the hierarchy for free from the reasoner
I think that maintaining multi-axial hierarchies is hard otherwise
How big are your models? 600 concepts alone would be tricky
150 for an ontology of drug forms and routes is tricky. It was used to agree on common vocabularies, across multiple vendors, to map between them all. It took two years to try to model this manually, but it took two weeks in a Description Logic, and now they are using that as the basis for standardising drug forms and routes.
So it's not just useful for big ontologies - also for small ontologies
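As a small illustration of getting the hierarchy for free - Description Logic axioms with a hypothetical drug, in the style of the forms-and-routes model:

    \[ \textit{OralDrug} \equiv \textit{Drug} \sqcap \exists\,\textit{hasRoute}.\textit{Oral} \]
    \[ \textit{DrugX} \sqsubseteq \textit{Drug} \sqcap \exists\,\textit{hasRoute}.\textit{Oral} \]
    \[ \Rightarrow \quad \textit{DrugX} \sqsubseteq \textit{OralDrug} \]

Only the descriptions are asserted; the classifier infers that DrugX falls under OralDrug, so the multi-axial hierarchy never has to be maintained by hand.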
We are all essentially involved in the same activities, but there is methodological and terminological variation
The Guus Schreiber CommonKADS tutorial slides don't use the word ontology, but I think he could have used it as a word for knowledge models. Why not use the word?
There are constructs for using ontologies; it is a bit of a research topic in the Netherlands at the moment, using simple textual mappings between domain schemas and ontology terms. I will ask them how they are progressing with that
We used a specific upper level ontology in the tutorial. Is it one you could see yourself using normally?
We've never worked in a domain where it has been important to have an upper level ontology. They are quite circumscribed problem domains - we have to focus our resources on the specific functionality to be delivered, and there is no real time to model that kind of detail.
With this style of modelling, an upper level helps you organise and avoid getting in a tangle
It helps us integrate two ontologies if they have similar upper levels
In the CommonKADS course we plan to run, what would you like to see?
Particularly Knowledge Acquisition - how to get knowledge from domain experts, as we have always employed them. We would like to look at separating the tasks
And using ontologies to describe tasks (this is relevant to web services), so how you do task decomposition would be interesting
How you do templates might be useful. You talk about tasks and task methods - how you model and decide which of these is which, and how you distinguish between them would be useful
So what is a task and what is a sub task, and how you make those modelling decisions
It is similar to other decompositions in software engineering methodologies, where it is informed by the application task at hand
We should gather some of our problems and ask you how you would do them
We usually work with domain ontologies, but now we are looking at modelling tasks - we need to decide what is appropriate to model in an ontology, and what is relevant to model elsewhere
Modelling styles and templates
Your knowledge modelling and that kind of thing, how it compares to ours
I will give you a demonstration of a knowledge model
(PS gives a demonstration of a CML model in its delivery format)
Here is one delivery format for a model - as HTML
The web site breaks the model down into various constructs:
Here we are looking at a meta model of CommonKADS for training and for developing tools to move from a formal spec to code
Here is the domain schema
Rather than a flat set of rules, we try to organise them into rule sets with something in common.
We try to group rules together in a maintainable way
Is this based on types? On contained knowledge? What is the grouping mechanism?
It is not contained knowledge - rather the common role they play in the reasoning process
Is there any rule reuse?
The properties of a rule type constrain the knowledge types that could participate in that rule
There are rule types that represent constraints on what properties a concept can have, and implication rule types that represent logical relationships between concept attribute values
The rule is represented in CML; documentation is generated from the CML
Arguments of a rule type are domain schema elements. The general rule type is: if something has a property, then something else is true
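As a rough sketch of that pattern - hypothetical names throughout, in Python rather than Epistemics' actual CML syntax:

    # A minimal sketch of the implication rule type described above:
    # "if something has a property, then something else is true".
    from dataclasses import dataclass

    @dataclass
    class AttributeTest:
        concept: str     # the domain schema element the test refers to
        attribute: str
        value: str

    @dataclass
    class ImplicationRule:
        antecedent: AttributeTest
        consequent: AttributeTest

    def apply(rule, instance):
        """If the antecedent holds of the instance, assert the consequent."""
        a = rule.antecedent
        if instance.get("concept") == a.concept and instance.get(a.attribute) == a.value:
            instance[rule.consequent.attribute] = rule.consequent.value
        return instance

    # Hypothetical rule: if a drug's route is intravenous, its form is solution.
    rule = ImplicationRule(AttributeTest("Drug", "route", "intravenous"),
                           AttributeTest("Drug", "form", "solution"))
    print(apply(rule, {"concept": "Drug", "route": "intravenous"}))
    # -> {'concept': 'Drug', 'route': 'intravenous', 'form': 'solution'}

Note that the rule's arguments are just domain schema elements, so the properties of the rule type constrain which knowledge types can participate, as described above.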
The validity of a model lies in whether it can realise the functionality required by the client
Do you have computational support for checking errors and mistakes?
We have software that checks for syntactic errors, nothing for the structure of the model. We are currently developing knowledge editing applications which constrain input to valid models
So checking is implicit in the application
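Concretely, that kind of implicit checking might look like the following minimal sketch, with a hypothetical domain schema - the editor only accepts attribute values the schema permits:

    # A minimal sketch of constraining input to a valid model.
    # Schema content and names are hypothetical.
    schema = {"Drug": {"route": {"oral", "intravenous", "topical"}}}

    def accept(concept, attribute, value):
        allowed = schema.get(concept, {}).get(attribute)
        if allowed is None or value not in allowed:
            raise ValueError("invalid %s.%s value: %r" % (concept, attribute, value))
        return value

    accept("Drug", "route", "oral")        # accepted
    # accept("Drug", "route", "sideways")  # would raise ValueError

Checking the structure of the model itself, beyond what the editor enforces, is still missing - which is where reasoning comes in.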
The idea behind my analysis work for the CommonKADS meta model is to help with automatic Knowledge Acquisition, and to provide this kind of support to help in creating and editing models - which is why I'm impressed with OilEd's use of reasoning
What if two experts tell you inconsistent things? Clearly you need to reconcile this, and computational support would help here. This will be a big problem in biology / myGrid
Do you typically ask lots of expert sources?
Usually 2 or 3 - inconsistencies are presented to them all at a plenary to decide the best way to proceed
But how do you find these inconsistencies?
It is differences in opinions on rules. Usually these things are matters of opinion, not a difference in the knowledge model. We use structured walk-throughs and partial prototypes for checking differences in the knowledge model
So you ask the domain expert?
Presumably you don't model uncertainty? Your rules seem very clear cut.
There is separate Knowledge Acquisition devoted to finding the certainty of information from a source; this is then fed into explicit rules
So it is weighted decision rules?
Yes - how does this compare to you?
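For instance - a minimal sketch of a weighted decision rule, assuming (hypothetically) that the certainty elicited from each source becomes a weight on the evidence it contributes; names and numbers are illustrative only:

    # Each piece of evidence carries the certainty elicited from its source:
    # statement -> (supports the decision?, elicited certainty)
    evidence = {
        "expert_A_says_route_is_oral": (True, 0.9),
        "expert_B_says_route_is_oral": (True, 0.6),
        "label_suggests_injection":    (False, 0.4),
    }

    # Weighted vote: certainty counts for the decision or against it.
    score = sum(c if supports else -c for supports, c in evidence.values())
    decision = "route is oral" if score > 0 else "route is not oral"
    print(round(score, 2), decision)  # 1.1 route is oral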
We don't do decision support.
(discussion of decisions and their granularity, and measures of ambiguity)
But it is beginning to creep in with the selection of web services
We do some stuff in gene expression - naive Bayesian nets, but we have never coupled this to ontologies
It would be useful to look at diagnostic clustering in medicine
What you are doing is Knowledge Acquisition that can help ratify Knowledge Acquisition; for us, what is interesting is how we might link in our reasoning to give instant feedback on this
Yes - it would help with the heavy cognitive load in Knowledge Acquisition, and with doing real-time modelling during Knowledge Acquisition - intelligent support tools would be extremely useful
Real time modelling would be slow
But if it were a two stage process, you could take back the model to show to the expert
But you don't need to - PC-Pack could have it built in
There seems to be lots of potential for cross-fertilisation between us and you
Also in myGrid
Particularly a full discussion of modelling "task"