Guide to languages, linguistics and area studies in the National Student Survey

Author: John Canning

Abstract

The National Student Survey is a census of final-year undergraduate students in the UK. The survey has been conducted since 2005; the 2009 survey asked students 22 questions about their learning experience at university. Each institution's results are broken down by discipline and made publicly available on the Unistats website.

This article was added to our website on 05/03/10, at which time all links were checked. However, we cannot guarantee that the links are still valid.

Introduction

The National Student Survey is a census of final-year undergraduate students in the UK. The survey has been conducted since 2005; the 2009 survey asked students 22 questions about their learning experience at university, with students agreeing or disagreeing with each statement on a five-point Likert scale*. These questions are divided into six categories, plus one further question concerning overall satisfaction (Question 22). This overall satisfaction question is usually the one which compilers of league tables use. There are further optional questions which institutions may choose to ask, but the responses to these are not made public, and there are additional questions for students on practice-based courses in health sciences. The survey also gives students an opportunity to make free-text comments concerning their course; again, these are not made public.

Results from the survey are available on an institutional and discipline basis. The percentage of students giving positive responses to the questions is higher in humanities subjects than in most other disciplines (e.g. see Porter 2010). It is therefore likely that overall institutional scores will depend to a certain degree on the mix of subjects that the institution teaches.

Whilst data from the NSS is inevitably used to produce league tables, it is primarily presented as an improvement and public accountability tool. Although the data is publicly available by institution and discipline and can be used to produce league tables, the Subject Centre does not believe that it is beneficial to publish the data in this form.

The NSS uses the discipline as the key unit of analysis. The following disciplines in the NSS relate to subjects covered, or partly covered, by LLAS:

  • American and Australasian studies
  • Asian studies
  • Celtic studies
  • Comparative literary studies
  • Linguistics
  • Modern Middle Eastern and African studies
  • French studies
  • German and Scandinavian studies
  • Iberian studies
  • Italian studies
  • Others in Eastern, Asian and African languages and area studies
  • Others in European languages and area studies

Uses of the NSS

  • The survey can be used by institutions and departments to identify areas of strength and weakness.
  • It enables institutions and departments to make easy comparisons with their comparators and competitors.
  • The results offer a starting point from which to discuss ways of improving the student learning experience. However, they ought not to be used uncritically as the basis of major changes in the curriculum or curriculum support.
  • The NSS can be used to help bring the student experience “into line with the ways in which we design and teach our courses” (Prosser 2005). For example, the areas with the most negative student experiences are those concerned with feedback, but “This may not be because they are not getting sufficient feedback or because they do not appreciate the feedback they are getting. So the issues may be to help students better understand the feedback they are getting” (Prosser 2005).

Considerations

The following points need to be considered when using the data:

  • In some subject areas there were too few responses to warrant an institution’s inclusion in the public results: to be made public, a minimum response rate of 50% and at least 23 respondents are required. Subject areas with relatively few students will therefore be excluded, which is likely to have an impact on Less Widely Taught languages and area studies (a minimal sketch of this threshold rule follows this list).
  • It is not possible to identify the responses of students studying some Less Widely Taught languages specifically. For example, students studying Chinese and Japanese are grouped together under Asian studies, so it is not possible for institutions which teach both to differentiate between the two groups of students.
  • The disciplinary categories are based on JACS coding, so do not necessarily match the names given to actual programmes of study (e.g. there is no specific category for European Studies). Institutions can also ‘override’ the JACS code and choose the category in which responses are recorded.
  • Relatedly, the ratings are based on disciplinary areas and do not provide any insight into student satisfaction with subject-specific content or individual components of a programme (e.g. individual modules).
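As an aside, the publication threshold described in the first point above can be expressed as a simple check. The sketch below (in Python) is illustrative only: the function and variable names are our own, and it assumes the rule is exactly as stated, namely a response rate of at least 50% and at least 23 respondents.

    def is_publishable(respondents, eligible_students):
        """Illustrative check of the publication threshold described above:
        results are released only with a response rate of at least 50%
        AND at least 23 respondents. Names and exact rule are assumptions."""
        if eligible_students == 0:
            return False
        response_rate = respondents / eligible_students
        return response_rate >= 0.5 and respondents >= 23

    # A cohort of 18 students can never appear in the public results,
    # even with a 100% response rate, because it cannot reach 23 respondents.
    print(is_publishable(18, 18))   # False
    print(is_publishable(30, 50))   # True  (60% response rate, 30 respondents)
    print(is_publishable(23, 50))   # False (only a 46% response rate)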

Methodological considerations

  • The survey only contains positive statements, which, in the view of some commentators, may lead to more positive responses than would be the case if negative statements were used as well (Yorke 2009).
  • The survey is administered by telephone, online and in paper form. Students who responded by telephone have been shown to give more positive ratings than those who filled in the survey online or in paper copy (Yorke 2009).
  • It has been argued that the students filling in the survey have little incentive to be negative about their institution, as the reputation of the institution (and therefore their degree) may be threatened (Yorke 2009).
  • The scores received are amenable to being placed in league tables. However, this is more a criticism of the way the data is used by others than of the survey itself.
  • In the case of joint honours students, their responses are divided equally between the two subjects: for example, the responses of a student studying English and French will be allocated 50% to English and 50% to French. For subjects in which there are a lot of joint honours students, there will be a lot of ‘noise’ from the other subject area (this allocation is sketched after this list).
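The 50/50 allocation in the last point can be illustrated with a short sketch. This is not the official NSS calculation, just a minimal illustration of the principle; the function name and data are invented.

    from collections import defaultdict

    def allocate_responses(students):
        """Divide each student's response weight equally between the
        subjects of their programme (one subject for single honours,
        two for joint honours), as described above."""
        totals = defaultdict(float)
        for subjects in students:
            weight = 1.0 / len(subjects)   # 0.5 each for joint honours
            for subject in subjects:
                totals[subject] += weight
        return dict(totals)

    # One single honours French student plus one English-and-French joint
    # honours student: French is credited with 1.5 responses in total.
    print(allocate_responses([["French"], ["English", "French"]]))
    # {'French': 1.5, 'English': 0.5}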

Data from the NSS ought to be triangulated with data from other sources, e.g. module evaluation questionnaires and internal surveys of the student learning experience.

What LLAS can do

We are happy to tell you the results for individual subject(s) at your own institution, and to tell you which quartile you are in and your rank position for each discipline and each question. We are only able to give you data which can be derived from data in the public domain.

Other services

LLAS has recently begun to offer a programme of consultancy for LLAS departments. Discussion of issues arising from the NSS may be a part of this.  

Using the tables

Download NSS tables (Word doc, 1Mb)

The results tables are presented in two sections. The first section presents each question and the proportion of students who agreed or strongly agreed with the statement for each LLAS discipline. The second section presents each discipline area and provides the same data for each of the 22 questions. As well as the mean, the median score and quartiles are also given in cases where ten or more institutions reported under that discipline.

Glossary

  • Mean (average): the most widely used average, calculated simply by adding all the percentage scores together and dividing by the number of institutions.
  • Median (average): when all the scores are ranked in order, the middle one is called the median. If a discipline has 11 institutions, the median institution is the one ranked sixth.
  • Lower quartile and upper quartile: the quartiles mark the 25% boundaries. For example, if your institution scores 85% for overall student satisfaction and the lower quartile is 86%, then your institution is in the lower (bottom) 25% of scores. In short, the quartiles tell you whether you are in the top 25%, the middle 50% or the bottom 25%.
  • Inter-quartile range: the difference between the lower and upper quartiles. This is a useful statistic for showing how much difference there is between institutions generally. For example, if the inter-quartile range is 5%, this would indicate that there is, generally speaking, not a lot of difference in score between the higher scoring and the lower scoring institutions, whereas a range of 15% would suggest that scores are a lot more spread out.

The quartiles and inter-quartile range have only been calculated for subject areas where ten or more institutions reported.
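To make the glossary concrete, here is a minimal worked example in Python using only the standard library. The eleven scores are invented for illustration; note also that statistics.quantiles uses one particular quartile convention (the ‘exclusive’ method by default), and the method used to produce the actual tables may differ slightly.

    import statistics

    # Invented overall satisfaction scores (%) for eleven institutions.
    scores = [78, 81, 83, 85, 86, 88, 89, 90, 91, 93, 95]

    mean = statistics.mean(scores)                  # sum of scores / number of institutions
    median = statistics.median(scores)              # the 6th of 11 ranked scores: 88
    q1, q2, q3 = statistics.quantiles(scores, n=4)  # lower quartile, median, upper quartile
    iqr = q3 - q1                                   # inter-quartile range

    print(f"mean={mean:.1f} median={median} Q1={q1} Q3={q3} IQR={iqr}")
    # mean=87.2 median=88 Q1=83.0 Q3=91.0 IQR=8.0

    # Placing an institution's score relative to the quartiles:
    your_score = 85
    if your_score < q1:
        print("bottom 25%")
    elif your_score > q3:
        print("top 25%")
    else:
        print("middle 50%")   # 85 sits between Q1 (83) and Q3 (91) in this invented data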

Footnote

*Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree.

Bibliography

Brown, R. et al. (2007). Recent Developments in Information about Programme Quality in the UK. Quality in Higher Education 13 (2), 173-186. Article which considers the NSS alongside other data which is collected on teaching quality.

Flint, A. et al. (2009). Preparing for success: one institution's aspirational and student focused response to the National Student Survey. Teaching in Higher Education 14 (6), 607-618. Outlines Sheffield Hallam’s institutional response to the NSS.

Marsh, H.W. and Cheng, J.H.S. (2008). Dimensionality, multilevel structure, and differentiation at the level of university and discipline: Preliminary results. Available from the Higher Education Academy. Detailed statistical analysis of the NSS.

Porter, A. (2010). NUS Student Experience Report and language students. Liaison 4, 38-39.

Prosser, M. (2005). Why we shouldn’t use student surveys of teaching as satisfaction ratings. Available from the Higher Education Academy. Short piece from the former Director of Research and Evaluation at the Higher Education Academy.

Yorke, M. (2009). ‘Student experience’ surveys: some methodological considerations and an empirical investigation. Assessment & Evaluation in Higher Education 34 (6), 721-739.