(I can’t get no) satisfaction: Is the National Student Survey just about keeping students satisfied?

In our ‘Critical Perspectives’ series of blog posts, we have asked staff from around the university to write critically about a topic of interest. Our second instalment is by Heather Fotheringham from the Learning and Teaching Academy.

What can student surveys tell us about teaching quality?

There is a persistent belief that the results of student surveys do not tell us anything useful about the quality of teaching. The view is that, at worst, students complete surveys with a cynical attitude and, at best, students simply do not have the expertise to judge which features of the learning experience indicate high quality. In this post, I would like to discuss this belief, particularly in relation to the National Student Survey (NSS). The NSS is often described as a ‘satisfaction’ survey and is often blamed for contributing to the increasing treatment of students as consumers or customers (a bad thing), which in turn leads to a culture where we simply give students what they want (easier courses, higher grades, fewer assessments?) rather than what they need.

I want to explain how the NSS has its basis in a wealth of research linking student perceptions to other indicators of teaching quality. While I will conclude that the NSS may capture nothing more than student perceptions, I will argue that these do tell us something useful about the student experience, if not the quality of our teaching.

The birth of the NSS

The NSS has its roots in studies examining the link between students’ perceptions of their learning environment and the approach that they take to their studies (Ramsden & Entwistle, 1981; Entwistle & Ramsden, 1983). The researchers analysed students’ responses to two surveys: the first asked students about various aspects of their learning experience, such as how they perceived their teaching, workload and assessments; the second asked about their study habits and the ways they approached study tasks. The researchers found correlations between responses to the two surveys: students are more likely to adopt so-called ‘deep’ learning approaches (such as attempting to structure and understand the syllabus) when the teaching is perceived to be clearly structured and helpful and when goals and standards are clear, whereas ‘narrow’ or ‘minimalist’ learning approaches (such as rote learning for exams) are adopted when the workload is perceived to be high and the assessment is perceived to be inappropriate.

Students are more likely to adopt so-called ‘deep’ learning approaches when the teaching is perceived to be clearly structured and helpful. ‘Narrow’ or ‘minimalist’ learning approaches are adopted when workload is perceived to be high and assessment inappropriate.

Student perceptions as an indicator of quality

The link between perceptions of learning environment and approaches to study discovered by Ramsden and Entwistle led to the thought that student perceptions could act as a proxy indicator for the quality of teaching. While surveys gauging approaches to study are fairly lengthy (such as the Approach to Study Inventory (Entwistle & Ramsden, 1983) or Study Process Questionnaire (Biggs, 1987)), a survey concerning student experience or perceptions could be much shorter, and therefore easier to administer. It was thought that responses to a student perceptions survey could indicate the extent to which students are adopting ‘deep’ or ‘surface’ learning approaches, and thereby achieving quality learning outcomes (Gibbs et al., 1989).

When Australian universities were seeking a national survey, the results of which would be used as a performance indicator for teaching quality, a refined version of Ramsden and Entwistle’s survey was chosen: the Course Experience Questionnaire (CEQ). Further studies on the CEQ also confirmed the correlation between student perceptions and approach to study (Entwistle & Tait, 1990).

The addition of satisfaction

A national trial of the CEQ was conducted in 1989, part of which involved testing the survey for reliability (does the survey deliver similar results in similar circumstances?) and validity (is the survey measuring what it purports to measure, namely students’ perceptions of their learning experience?). One measure of a survey’s validity is the strength of the relationship between the survey responses and some appropriate external criterion. Student satisfaction was chosen because it was thought that satisfied students would have favourable perceptions of their learning environment (Ramsden, 1991: 135-136).

‘Overall satisfaction’ in the NSS is fixated upon by institutions and by those compiling university league tables. However, it did not feature in the original CEQ and was only included as a check on the survey’s validity.

This is the first of two important points in the evolution of the NSS that are frequently overlooked: currently the ‘overall satisfaction’ question in the NSS is fixated upon by institutions and by those compiling university league tables as a shorthand measure of teaching quality. However, it did not feature in the original CEQ as it was not a question about students’ perceptions of their learning environment. The question was intended as a measure of students’ (perceived) satisfaction that would help to confirm the validity of the CEQ, and nothing more.

From CEQ to NSS

When HEFCE were seeking a national survey for England and Wales, an amended version of the CEQ was trialled. As with the CEQ, the survey was tested for validity before being rolled out nationally. At this stage, three items were added to the end of the survey as validating external criteria. These were: ‘Overall I was satisfied with the quality of the course’, ‘Overall I feel the course was a good investment’ and ‘I would recommend this course to a friend’. This is the second important point to note in the evolution of the NSS, as it is at this point that students’ approaches to study were removed as a validating criterion. Furthermore, the developers of the NSS described these three additional items as validating “the questionnaire as a measure of perceived academic quality” (Richardson et al., 2007: 559).

At this stage of testing, items that correlated most highly with approaches to study were removed from the NSS… The NSS was indeed a measure of perceived academic quality, but whether it was anything further was unclear.

This signals a subtle but important shift away from the CEQ as a measure of students’ perceptions of the learning environment, towards the NSS as a measure of perceived academic quality. The shift is important because the CEQ was accepted as an indirect measure of teaching quality via the correlation between CEQ responses and approaches to learning. It didn’t matter that the CEQ asked students about their perceptions of teaching, because these perceptions correlated with other things such as approach to study and attainment. Even if it turned out that students were poor judges of teaching quality, it’s the perceptions that count.

The situation is different for the NSS, as it asks about students’ perceptions of their learning and validates these against their perceptions of quality. For the NSS to tell us anything useful about teaching, it does matter whether or not students are good judges. Furthermore, several items within the original CEQ that correlated most highly with approaches to study were removed from the NSS at the testing stage because they showed little correlation with the three (new) external criteria (these were the items concerning Workload, and Clear Goals and Standards). Any attempt to ‘inherit’ links to teaching quality from the CEQ should therefore be scrutinised closely. The NSS is indeed a measure of perceived academic quality, but whether it is anything further is unclear.

Satisfaction vs perceptions?

So, we have reached the point where the NSS can be regarded as a survey of students’ perceptions, but we ought not to confuse this with the more pessimistic view that I characterised in the introduction to this piece: that the NSS is ‘merely’ a satisfaction survey. We are in a situation in the UK where the item concerning overall satisfaction is regarded as the key indicator of teaching quality (at least by the media, and perhaps by senior management in universities because of this media fixation), but this was certainly not the intention of those who developed the NSS. They were clear that it was a measure of perceived academic quality and should be seen within the context of a university’s entire public data set, which would also include statistics about achievement, employment and so on. They also highlighted the fact that “respondents’ scores did not lend themselves to being represented in the form of a league table” (Richardson et al., 2007: 568) and suggested that overall satisfaction scores should be suppressed in order to prevent the ranking of institutions, but that this suggestion “was not adopted” (ibid.).

So what can the NSS tell us?

Although we must conclude that the NSS is a survey of students’ perceptions, it is important to remember that responses to items about teaching (within both the NSS and the CEQ) did correlate with deep learning approaches, which aim at understanding rather than reproduction of the material. It is also these items concerning teaching in the current NSS that correlate most highly with student satisfaction (see Surridge, 2008, and HEFCE, 2014).

In addition, we ought not to be so pessimistic about the ability of students to judge certain aspects of their learning experience. Ramsden points out that students can be accurate judges of whether or not “the instruction they receive is helping them to learn” (1991: 131), countering the position that students’ perceptions cannot reveal anything useful about the quality of the teaching. Whilst students’ ratings of their own satisfaction may not be particularly informative or relevant, as educators we ought to be interested in whether teaching and curriculum are perceived to be interesting, stimulating and challenging (as per the first three items in the NSS), and concerned when they are not. Furthermore, we need not take negative perceptions of teaching as indicators of poor quality. Often students’ expectations of the content and delivery methods of their course can colour their perceptions of the teaching itself. Perhaps one way to respond to NSS results is not necessarily to alter our teaching, but to alter our students’ attitudes towards it.

References

Biggs, J. (1987) The Study Process Questionnaire (SPQ) users’ manual. Hawthorne, Victoria: ACER

Entwistle, N. & Ramsden, P. (1983) Understanding Student Learning. London: Croom Helm

Entwistle, N. & Tait, H. (1990) ‘Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments’ Higher Education 19 (2), 169-194

Gibbs, G., Habeshaw, S. & Habeshaw, T. (1989) 53 interesting ways to appraise your teaching. Bristol: Technical and Educational Services

HEFCE (2014) National Student Survey results and trends analysis 2005-2013. HEFCE. Available at: <https://www.hefce.ac.uk/pubs/year/2014/201413/>

McInnis, C., Griffin, P., James, R. & Coates, H. (2000) Development of the Course Experience Questionnaire. Canberra: Australian Government Publishing Service

Ramsden, P. (1991) ‘A performance indicator of teaching quality in higher education: The Course Experience Questionnaire’ Studies in Higher Education 16 (2), 129-150

Ramsden, P. & Entwistle, N. (1981) ‘Effects of academic departments on students’ approaches to studying’ British Journal of Educational Psychology 51, 368-383

Richardson, J. (2005) ‘Instruments for obtaining student feedback: a review of the literature’ Assessment & Evaluation in Higher Education 30 (4), 387-415

Richardson, J., Slater, J., & Wilson, J. (2007) ‘The National Student Survey: development, findings and implications’ Studies in Higher Education 32 (5), 557-580

Richardson, J. & Woodley, A. (2001) ‘Perceptions of academic quality among students with a hearing loss in distance education’ Journal of Educational Psychology 93, 563-570

Surridge, P. (2008) The National Student Survey 2005-2007: Findings and Trends. HEFCE. Available at: <https://www.hefce.ac.uk/pubs/rereports/year/2008/nss05-07findingsandtrends/>

Wilson, K.L., Lizzio, A. & Ramsden, P. (1997) ‘The development, validation and application of the Course Experience Questionnaire’ Studies in Higher Education 22, 33-53
