Abstract
Evaluation activities are highly useful, since they can improve and enhance a research domain. This is also true in the newly emerging field of practice of Virtual Museums. Setting up a sound process of analysis and evaluation can have an important impact on the creation phase of a virtual museum in any of the following areas: virtual heritage and digital assets, interface design, interaction and immersive technology, and visualisation tools. How can we effectively build a virtual museum in order to reach certain goals, such as knowledge exchange, cognitive improvement, or cultural heritage communication?
Up to now, we do not have many extensive studies and statistics, apart from visitor studies, either general or specific to the digital domain, such as those focused on web site or interface design analysis. Consequently, it is very difficult to build a reliable and effective grid of indicators with which to analyse, study and communicate such results and thus achieve a real improvement of the research.
Hence, how can we evaluate the success of a virtual museum? What criteria and parameters can we use as a reference? What kind of method, if one exists, should we adopt?
These are the reasons why a European project focused on virtual museums, v-must.net (www.v-must.net), has an entire work package dedicated to quality evaluation through a wide interactive laboratory experiment.
Although a previous attempt was carried out during the exhibition “Building Virtual Rome” in Rome in 2005 (Forte, Pescarin, Pujol 2006), the results of that study did not reach sufficient detail, owing to the lack of a strategy for dealing with the complexity of evaluating and comparing different digital applications. A second attempt was therefore carried out in November 2011, within the exhibition of Virtual Archaeology, Archeovirtual 2011 (www.archeovirtual.it), organized in Paestum, Italy.
In this paper we will describe the following issues:
- the object of the evaluation (virtual museum) and its characteristics, based on the work currently in progress in v-must.net;
- the goals of the evaluation and expected results, such as:
- definition of principles to be used in further evaluation of virtual museums (i.e. the London Charter);
- testing the virtual museum categories defined by the v-must.net project;
- understanding whether there is a gap between visitor expectations and visitor experience;
- comparing similar installations;
- analysing developers' aims and comparing them with actual visitor feedback.
- the adopted strategies and the three evaluation methods (observation, short interview, written survey) selected for the specific case study of Archeovirtual, an exhibition of virtual archaeology projects with several different types of virtual museums;
- the survey at Archeovirtual 2011;
- the preliminary results.
References:
Forte M., Pescarin S., Pujol Tost L., “VR applications, new devices and museums: visitors' feedback and learning. A preliminary report”, in The 7th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2006), Short Presentations, M. Ioannides, D. Arnold, F. Niccolucci, K. Mania (eds.), 2006.