The case for a common framework of reference for the validation of assessments of written English on English language degree programmes in Europe
Author: Carole Sedgwick
Abstract
The Bologna Process implements an agreement by European governments to create, by 2010, a European Higher Education Area with two main degree cycles, undergraduate and graduate, and a common system of credits and quality assurance. In this climate of review, revision and collaboration, this paper describes a survey of existing practice with regard to expectations of attainment on degree programmes in Europe. It also outlines a proposal for a collaborative project to develop a framework for the self-validation of skill assessment on a language programme. The concept of validity informed the design of a questionnaire to collect qualitative and quantitative data on the final assessment of students' written English on English language degree programmes across Europe. Analysis of responses from 30 universities in 12 European countries revealed wide variation between different countries, within the same country and, in some cases, within the same degree programme. As a result of this survey, European partners have been identified to collaborate on the development of a framework for the self-validation of assessments of written English, which can inform the assessment of other skill areas and other languages.
This paper was originally presented at the Navigating the new landscape for languages conference (www.llas.ac.uk/navlang), 30 June - 1 July 2004.
1. Introduction
The Bologna Process implements an agreement by European governments to create, by 2010, a European Higher Education Area with two main degree cycles, undergraduate and graduate, and a common system of credits and quality assurance. In this climate of review, revision and collaboration, a survey was conducted between June and September 2004 to investigate expectations of attainment in written English on English language degree programmes in Europe.
The concept of validity informed the design of a questionnaire to collect quantitative and qualitative data on the final assessment of students' written English on English language degree programmes, in order to investigate the meaning of the final score as a measure of proficiency in written English. Cyril Weir's framework for test validation (forthcoming 2004) was used to structure the collection of validity evidence. The questionnaire was designed to gather five types of evidence:
- Theory-related evidence - What theory of writing underpins the assessment?
- Content-related evidence - What are students expected to do in writing? What task types, text types, topic areas are selected for assessment?
- Reliability evidence - What measures are taken to ensure the reliability of the assessment, e.g., double-marking, internal/external moderation, statistical measures?
- Criterion-related evidence - Have the results of the assessment been compared with an assessment for a similar purpose, e.g., an IELTS or Cambridge main suite test score, teacher or peer assessment?
- Consequential evidence - Have any impact studies been conducted to assess the effect of the assessment method on teaching and learning, performance in other modules, and students' future employment opportunities?
2. Responses to the questionnaire
Analysis of responses from 32 universities in 13 European countries revealed wide variation in assessment practices across the sample. The table below summarises the key ideas extracted from the qualitative data, which indicate variation in content sampling and underpinning theory.
| Respondents | Rationale/aims |
|---|---|
| 11 | Organising and structuring argumentative texts |
| 5 | Learning how to write |
| 6 | Accuracy in writing |
| 7 | Writing for a range of purposes and audiences |
| 4 | Unsure of the aims/rationale |
| 2 | Lack of resources and large student numbers dictated the choice of assessment methods |
There was a wide variation in the choice of mode of assessment. Most courses were assessed by exam only, some by exam and coursework, and a minority by coursework only. The time allocated for the exam varied from 1 to 7 hours, and the length of coursework tasks from 900 to 6,000 words.
There was also a wide variation in response format. Academic topics were the most favoured compulsory topics. However, personal experience and current affairs were the most popular optional topics. Creative writing was the least likely to be selected, although it was compulsory at four of the universities in the sample.
Equally, there was no clear pattern with regard to the measures taken to ensure the reliability of rating procedures, even though the scoring method was predominantly subjective. Reliability was particularly problematic at six universities in the sample, where individual tutors in the department designed, implemented and scored their own assessments. A respondent from one of these programmes expressed concern that she could be setting harder tasks and marking more severely than her colleagues.
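To give a concrete sense of what a "statistical measure" of reliability might look like in this context, one widely used index of inter-rater agreement is Cohen's kappa; it is offered here purely as an illustration and is not drawn from the survey data:

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed proportion of scripts on which two raters agree and \(p_e\) is the proportion of agreement expected by chance. Values close to 1 indicate strong agreement between raters, while values near 0 indicate agreement no better than chance; departments where tutors mark only their own students' work cannot compute such an index at all.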
With regard to criterion-related validity, although three writing programmes attempted to benchmark their assessments to the CEF levels, only one university had formalised this in a project with Cambridge ESOL (Duguid, 2001).
No impact studies were reported, although universities that adopted a process approach focusing on formative assessment reported a positive effect on teaching and learning. However, the preliminary results in the table below show wide variation in the weight that the writing assessment carried within the language and degree programmes. Language constituted from 5% to 27% of the marks on the degree programme, and writing from 17% to 100% of the language assessment, depending on the priorities of each English language programme.
| University | Language as % of total degree assessment | Writing as % of language assessment |
|---|---|---|
| Belgium Fl | 5 | 17 |
| Belgium Fr | 25 | 25 |
| Germany 1 | 23 | 100 |
| Germany 2 | No response | 25 |
| Italy 1 | 12 | 20 |
| Italy 2 | 15 | 33 |
| Netherlands 1 | 14 | 19 |
| Netherlands 2 | 11 | 50 |
| Poland | 27 | 33 |
| Portugal 1 | 13 | 70 |
| Portugal 2 | 19 | 50 |
| Spain 1 | 15 | 30 |
| Spain 2 | 5 | 45 |
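If the two weightings are assumed to combine multiplicatively (an assumption made here for illustration only; the survey did not collect overall degree weightings directly), the share of the final degree mark carried by the writing assessment can be estimated as:

\[ \text{writing share of degree} = \text{language share of degree} \times \text{writing share of language assessment} \]

On this assumption, the writing assessment would count for as little as roughly 0.9% of the degree at Belgium Fl (0.05 × 0.17) and as much as 23% at Germany 1 (0.23 × 1.00), underlining how differently a final writing score can weigh in the eventual degree outcome.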
There was also wide variation in the extent to which other courses on the degree programme were assessed through the medium of English. Fifteen respondents reported that all courses were assessed in English, but others reported that only English language courses, or courses in, for example, English Literature or Cultural Studies, were assessed in English, depending on the course tutor's level of proficiency and/or confidence in English.
3. Conclusions
In conclusion, although this small exploratory study cannot claim to be representative of assessment practices on English language degree programmes in Europe, it nevertheless indicates wide variation in the evidence produced to justify the score on the final assessment of writing and, as a consequence, in the meaning that can be attributed to the assessment outcomes between countries, within countries and, in some cases, within the same degree programme.
As a result of the survey, European partners have been identified to collaborate on the development of a framework for the self-validation of assessments of written English on English language degree programmes, which can also inform the assessment of other skill areas and other languages. The project will involve the development of a framework for the validation of a language skill from the a priori to the a posteriori stages of assessment design. The aim is to help practitioners and students by creating common criteria for the validation of the assessment of English language writing skills at degree level that are rigorous, transparent, fair and workable. The project would contribute to the Bologna Process by providing a model, which does not currently exist, for quality assurance in the assessment of language skills in Europe.
Bibliography
Duguid, A. (2001) Anatomy of a Context: English Language Teaching in Italy. London: Granville Publishing.
Weir, C. (forthcoming 2004) Language Testing and Validity Evidence. London: Palgrave.
Related links
Description of the Bologna Process
www.coe.int/T/E/Cultural_Co-operation/education/Higher_education/Activities/Bologna_Process