Student Evaluations of Teaching as 'Fact-Totems': The Case of the UK National Student Survey
by Duna Sabri
King's College London
Sociological Research Online, 18 (4) 15
<http://www.socresonline.org.uk/18/4/15.html>
DOI: 10.5153/sro.3136
Received: 19 Mar 2013 Accepted: 26 Jun 2013 Published: 30 Nov 2013
Abstract
Taking the UK National Student Survey (NSS) as a case study of student evaluations of teaching (SET) which are now used widely in higher education, I argue that the production and consumption of such survey data have a symbolic value that exceeds, and is often independent of, any technical understanding of their statistical meaning. The NSS, in particular, has acquired significance that far outweighs its validity or intended use. This is evident in national policy where it has become the primary measure of 'the student experience', ostensibly articulating current students' views, and giving prospective students – as consumers – information to help them choose between courses. Higher education institutions now allocate resources to improving 'the student experience', as defined by NSS results. Their desire to improve NSS results has come to redefine higher education work and relationships between students and academics, academics and managers, and students and institutions. Moreover, NSS results and universities' relative positions in NSS scores have become 'fact-totems', a site of intense social attention within universities, provoking anticipatory anxiety, and becoming embedded in universities' identity narratives. Alongside an analysis of the policy structures that perpetuate the NSS at national and institutional levels, I draw on two studies conducted within one UK university to examine at a micro-level the meanings and practices that can be generated in the production and consumption of the NSS for students, academics and managers in higher education.
Keywords: Sociology of Statistics, Higher Education Policy, Student Choice, University League Tables
Introduction
1.1 Student evaluations of teaching (SET) are used universally in higher education (HE) in developed countries. The extent to which SET instruments are standardised across HE systems varies. Australia, the UK and the Netherlands have national student surveys with near universal participation at institutional level. The US has the National Survey of Student Engagement (NSSE), which attracts approximately a quarter of the country's HE institutions. Germany has had a national survey since 1983, which runs every two or three years with around 25 of the country's universities taking part. Elsewhere in continental Europe institutions seem to be developing their own SET instruments. It remains to be seen whether an instrument will be developed or adapted for use across the European Higher Education Area.

1.2 The UK has had its National Student Survey (NSS) since 2005 and, on the recommendation of a review (Ramsden et al. 2010) commissioned by the Higher Education Funding Council for England (HEFCE), is to keep it unchanged until 2015. HEFCE is undertaking a further review during 2013–14. To date the NSS has come to structure HE work and public discourse on 'the student experience'. Yet research on SET instruments in general, and the NSS in particular, has focused on their statistical reliability and validity as research instruments (e.g. Richardson et al. 2007; Marsh 2007; Spooren et al. 2012). There have also been studies that have examined the relationship between SET ratings and students' learning (Galbraith et al. 2011; Richardson 2012) and their potential for improving teaching through consultations with lecturers (Penny & Coe 2004). There has been no empirical research or critical analysis of the social, cultural and political aspects of the production and consumption of survey results. In other words, SET results have been treated as reflections of reality, and research has sought to determine their accuracy. The argument of this paper is that SET are social objects in their own right, perpetually structuring processes of production and consumption, with the potential to transform the way prospective and current students think about HE and to reconfigure the nature of HE work. After setting out the conceptual basis of this paper, I analyse the policy context of the NSS, showing how it generates meaning, and I suggest that the NSS has become a 'fact-totem' (de Santos 2009). I then set out findings from one institution that demonstrate how the NSS as fact-totem can be sustained at a micro-level in HE. Finally, I conclude that the NSS has come to represent 'the student experience', potentially to the detriment of students' educational experiences.
The NSS as 'fact-totem'
2.1 De Santos (2009: 467) argues that 'statistics are not simply better or worse mirrors of social reality'. The power and substance of their meaning are transformed when they are catapulted into different contexts. He defines the 'fact-totem' as 'one of the forms taken by public numbers when they become both the site of intense social attention and they are linked to basic identity narratives' (p. 471). De Santos draws attention to the consumption – as opposed to the production – of statistics and to the need to study their reception in particular times and spaces (p. 470). He is concerned not with the different 'uses' of statistics but with 'how statistics are semiotic signs and cultural objects subject to change in varying contexts, genres, narratives, and media' (p. 471).

2.2 As NSS results become integrated with other policy instruments – for example as performance targets for individual academics or league table positions for institutions – they acquire the power to confer or withhold professional esteem and bargaining power in the context of unequal power relations between managers and academics. The designation of NSS results not only as meaning-giving and structuring but, more specifically, as a fact-totem allows us to elaborate their features as a public statistic, with particular potentialities for reproduction and transformation. This point is elaborated in the following two sections.
2.3 My application of the term 'fact-totem' to NSS results differs in one respect from that of de Santos. Whereas he is concerned primarily with the consumption of statistics, I am considering processes of production as well as consumption. Whilst it is beyond the scope of this article to consider production comprehensively – for example I do not analyse in-depth the design or the policy development processes that led to the adoption of the NSS questionnaire – I do address students' agency in responding to the survey and indeed managers' and academics' agentic orientations in the cyclical production of NSS results.
The political life of the NSS
3.1 The NSS straddles the discourses of accountability and quality assurance; quality enhancement; and employment market value and student choice. It functions as a fact-totem within these discourses in that it is the outward symbol of a substantive yet unseen truth, an imagined reality of 'the student experience'. As the forthcoming analysis will demonstrate, the convergence of these several discourses on a single SET instrument intensifies its social and political significance.

3.2 As a policy instrument applied across the HE sector, NSS results are invested with meaning in relation to both quality assurance and quality enhancement. HEFCE delegates its statutory responsibility to assess quality in English higher education (Further and Higher Education Act 1992) to the Quality Assurance Agency for Higher Education (QAA). The NSS has been an important part of the Quality Assurance Framework that the QAA operates. This positioning of the NSS renders it a taken-for-granted and universal measure of 'the student experience' with a highly delimited construct of both 'student' and 'experience' (Sabri 2011). The NSS is promoted as a means of 'enhancement' by the Higher Education Academy (HEA) through an Institutional Working Group, an annual conference on enhancement through higher education surveys, and the collection of institutional case studies of enhancement (HEA 2010).
3.3 At an institutional level the discourses of quality assurance (Morley 2003) and enhancement (Middlehurst 1997) have long been internalised. 'Enhancement', a concept that has received much less critical attention than quality assurance, is worth attending to briefly: quality 'enhancement' has come into usage as the complement to quality assurance. The intention behind it seems to be to avoid an inference of deficit: the object of enhancement is already in good shape but can always benefit from further enhancement, no matter how good it already is. This meaning-giving power is significant because it focuses the public gaze upon those aspects of HE work that are measured by the NSS and renders other aspects less important. Interestingly, the 'enhancement' function was not thought to be a 'primary' purpose of the NSS at its inception (HEFCE 2002: 02/15 p. 3), but it has nevertheless come to be taken for granted, as is evident in many universities' accounts of their responses to the NSS, often couched in the format of 'You said…, we did…'.
3.4 Moreover, the consumption of NSS results as a league table has added momentum: improvement in NSS scores is elevated into a shared institutional project, resulting in changes in the nature of work and relationships in HE. As Espeland and Sauder (2007) have shown in relation to law schools, the production of league tables results in the redistribution of resources and the redefinition of work. In HE there is a proliferation of jobs with the title 'Director for (or of) the Student Experience' and, in some cases, 'Deputy Director of the Student Experience' (e.g. Middlesex University, Sussex University, Glasgow Caledonian University, University of Strathclyde, Bangor University, University of Manchester). Resources are therefore devoted within institutions to attending to 'the student experience', and directing 'the student experience' comes to be defined by the annual cycle of administering, receiving, and responding to NSS data. We return to the reconfiguration of HE work and relationships in the following section.
3.5 It is notable that while universities are only just beginning to see the NSS as having the function of informing student choice, this purpose was clear in the thinking of the working group that set it up at national policy level (HEFCE 2004: 04/22 p. 3). This aspiration has permeated institutional thinking through the introduction of Key Information Sets (KIS), which include NSS results as the centrepiece of institutions' information about themselves for prospective students, student advisers and the general public, intended to inform the choices applicants make between HE institutions. Key Information Sets require that NSS results are used to provide information about students' satisfaction with: the standard of teaching, their course, the support and guidance they received, feedback on assessment, library facilities and IT facilities (HEFCE 2011: 12). Institutions have had these new arrangements for public information in place since the start of 2012/13.
3.6 The decision to use this information as part of the KIS rests on research conducted as part of a HEFCE-commissioned review of students' public information needs (Davies et al. 2010). As argued in Sabri (2011), the review's conclusion that many students 'do not look for information even when they think it would be very useful' may have been a function of respondents formulating these priorities only in the course of their interaction with the researchers, whose approach to determining students' priorities consisted of presenting them with a matrix of options drawn from interviews with university managers. Nevertheless, the report's findings are used to support successive policy documents that cumulatively perpetuate the symbolic power of the NSS: first it is cited in support of Browne's (2010) assertion that student choice should be the engine of change in HE; then, in Provision of Information about Higher Education (HEFCE 2011), the role of NSS results in informing student choice is cemented on the basis of this somewhat flawed research. In addition, no account seems to be taken of ample evidence that students are resistant to acting as rational consumers (Reay et al. 2005), and in many cases do not make choices between multiple institutions and do not consult the NSS (Harding 2012).
3.7 Consequently, a somewhat simplistic, arguably erroneous, conception of the workings of 'student choice' underlies the premise that it is a driver of change in HE – a conception that is not new but that has been enshrined in the Browne review and in political discourse since. This review of HE funding in England argued that the stable results of the NSS, year on year, indicate that HE institutions will not improve 'the student experience' simply by receiving additional fee income: rather, it is argued, they need competition to give them an incentive to improve students' experience, as measured by the NSS (Browne 2010: 23). That the stability of NSS results should be marshalled in support of such sweeping changes in the structure and funding of higher education is itself a testament to the status of the NSS as fact-totem.
3.8 A further claim, often made, that sustains the NSS as fact-totem is that its validity is impervious to variations in disciplinary context. The HEFCE-commissioned review of the NSS (Ramsden et al. 2010: 36) is sceptical that there is a systematic disciplinary effect in the survey questions. The evidence for this scepticism is that a quite different pattern of subject area differences occurs in Australia, where a similar instrument, the Course Experience Questionnaire, is in use. This argument assumes an essentialist notion of disciplines rather than conceiving of them as socially constructed entities (Becher & Trowler 2001). For example, it is conceivable that a discipline's social positioning, which in turn structures the composition of its student intake and its relative allocation of resources within a university, varies in different national contexts.
3.9 Research into the NSS in Art and Design, a subject group that tends to have low NSS scores (Vaughan & Yorke 2010: 31), acknowledges the need to pursue inquiries into how students interpret NSS questions, how art and design subject areas compare to others, and the pedagogy of art and design. Nevertheless, Vaughan and Yorke are unequivocal that 'Claims that Art and Design is a special case, and not amenable to the kinds of questioning that are applied to other subject areas simply will not wash', and that 'whilst the NSS has its weaknesses …it is not going to go away, and hence is something that the HE Arts and Design sector has to come to terms with' (p. 32). This discourse epitomises the way in which the NSS is wielded as fact-totem: it is taken to reflect a truth about institutions in relation to which they are required to be accountable. Vaughan and Yorke also attribute undue weight to the NSS by implying it is an indicator of economic good: the relatively low NSS scores in art and design, alongside these disciplines' significant contribution to the economy, is, for them, a 'paradox'. There is no evidence to support the premise that the NSS is an indicator of economic good. Indeed, the link between qualifications and access to employment in the creative and cultural sector is tenuous (Guile 2010). This example illustrates how the social meanings of NSS results can be independent of their validity.
3.10 Alongside the recursive annual cycle of the production and consumption of NSS results, there has been sporadic public opposition to the NSS, particularly to the production of league tables based on NSS results and the use of this information to 'mis-sell' institutions. A group of academics writing in the Times Higher Education (Holmwood et al. 2011), calling for a Code of Practice against universities' misuse of NSS data in constructing rank orders, drew a response from HEFCE saying that it would ask its policy unit to consider the matter. The upshot was HEFCE's publication of NSS benchmarks, which now control for certain student characteristics and subject mix (HEFCE 2011: 11/18). The aim of the benchmarks is to provide a more reliable basis for comparison between institutions, but their formulation is also a tacit acknowledgement of the limitations of such comparisons. Nevertheless, higher education institutions' own press statements seem to persist with the use of raw data comparisons to construct their identity narratives.
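The general logic of such benchmarking can be sketched as a form of indirect standardisation (a simplified illustration only; HEFCE's published method involves further categories and weightings, so the notation here is assumed for exposition rather than being HEFCE's specification). An institution's benchmark is the score it would obtain if each group of its students scored at the sector average for that group:

\[ B_i = \sum_k p_{ik}\,\bar{s}_k \]

where \(p_{ik}\) is the proportion of institution \(i\)'s respondents falling in category \(k\) (for example a subject grouping or a student characteristic such as age group) and \(\bar{s}_k\) is the sector-wide mean score for category \(k\). The institution's raw score is then read against \(B_i\) rather than against other institutions' raw scores, so that differences in student intake and subject mix are discounted in the comparison.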
3.11 Another example of public opposition to the NSS was a letter to the Times Higher Education (30 September 2010) from a group of 12 academics at the University of Brighton calling for a boycott of the NSS by 'self-respecting academics and students'. For them the NSS is a 'statistically risible exercise in neo-liberal populism'. Their courses in Philosophy and in Philosophy and History had come top in the NSS in 2010. This is significant because it undermines the view, implicit in policy discourse around the NSS and particularly in the Browne review, of academics as self-interested 'knaves' (Le Grand 2003) who need to be subjected to market conditions in order to attend to the needs of their students.
3.12 In the current policy discourse there is a lack of precise evidence for the utility of NSS results for quality enhancement and public information. The NSS is indeed here to stay, in its current form at least until 2015, and possibly beyond (Ramsden et al. 2010: recommendations 6 and 17). However, that is all the more reason to respond to it critically. Raising the level of debate about NSS results involves considering the extent to which they embody a 'student voice'; reflecting on institutional agency in relation to NSS data and the pattern of meanings generated around them; and situating both the NSS and institutional responses within their social, political and historical contexts. This requires empirical research that goes beyond treating the NSS as a research instrument and toward investigating its structuring power as a social object.
Methodology
4.1 The data that I draw upon come from two studies conducted in a large UK university. No claims are made for generalising from these to UK higher education. The findings are indicative of the potential fruitfulness of treating SET in general, and the NSS in particular, as social objects upon which critical sociological method and theory can be brought to bear. The data-sets were collected over three years. The relationship between them is not longitudinal but rather one of progressive focusing. An initial exploratory discourse analysis of NSS free text responses threw up questions about how students made sense of the NSS questionnaire and how they positioned themselves in relation to it, which led to the focus groups. As the findings from both the open comments and the focus groups were discussed within the institution, research questions began to emerge that addressed the processes of consumption of NSS results by students, tutors and managers. Thus a second study was conceived that explored how NSS results were interpreted, discussed, and utilised in day-to-day interactions between staff, and among staff and students.

4.2 The free text comments from the 2008–9 NSS results were generated in response to three questions at the end of the NSS questionnaire asking students to highlight positive and negative aspects of their course, and to suggest how their university could improve the students' final year. I analysed their textual content and context, attending, for example, to: who students seemed to be addressing in their comments; how they orientate themselves to the issues that they comment upon; the causal relationships they perceive; and their expectations and assumptions. The four focus groups with students completing the survey in 2010 ranged in size from four to 20 students, with a total of 40 students taking part. They came from four courses in cognate disciplines in the arts but ranged widely in terms of their history of NSS scores and student intake. The focus group questions addressed: students' perceptions of the survey questions' terminology and relevance to their context and shared meanings within their course; students' understanding of the political significance of the NSS; students' motivation in responding to it; and their perceptions of the timing of its administration. In addition, both the free text responses and the focus group data were interrogated for issues that lie outside the purview of the closed questions. This analysis develops our understanding of what the NSS quantitative results do not tell us about students' experience and also what they 'screen out' of students' feedback through the use of particular systems and categories.
4.3 In the second study I convened four staff-student working groups from four courses during 2011 to analyse NSS data from the previous two years (2008/9 and 2009/10). These four groups comprised six academics and fifteen students in total. I also interviewed four senior managers with responsibility for those four courses. The four courses constituted a purposive sample representing a range of disciplines (social sciences, arts and humanities) and a range of experiences of gaining relatively high, low and fluctuating NSS scores. The purpose of this study was to explore the patterns of communication and relationships of accountability that surround the NSS, and to build a preliminary understanding of the interplay between students, tutors and managers in relation to it. The first study could be said to relate to the production of NSS data while the second relates primarily to their consumption, though of course these processes are barely extricable and mutually dependent.
Findings: The production and consumption of the NSS
5.1 I use findings from both studies to demonstrate the ways in which NSS results have become a fact-totem: by attracting intense social attention and forming identity narratives within one university. I then describe the relative triviality of these meanings in the assumptive worlds of students, indicating the spatial and temporal boundaries of the NSS results' symbolic meaning. Finally, I explore the ways in which the elevation of the NSS as a fact-totem has structured conceptualisations of 'the student experience'.

Intense social attention: why the NSS can 'matter' so much
5.2 Sauder and Espeland (2009: 74) suggest that workers in law schools internalised league table rankings by constructing them 'as sources of anxiety, as objects to resist, and as pressure that becomes, for some, peculiarly seductive'. These constructions were very much in evidence in relation to the NSS, though the scope for resistance was heavily circumscribed.
5.3 As one lecturer put it, with the anticipation of NSS results 'the sword of Damocles is always hanging over our heads'. This lecturer's course had regularly attained what were regarded as very good NSS results, and yet the annual cycle of their production and consumption elicited feelings of dread and anxiety. For some, reading NSS results was a mystifying experience: they described how their NSS statistics often contradicted other feedback from students that they collected themselves. There was a sense of being judged on the basis of NSS data when it was not clear to them what made these data more valid than other sources of feedback. In their view the NSS had a highly charged and inflated importance that sometimes drowned out other sources of feedback. It also occluded issues that were raised in those other sources, such as institutional policy regarding timetabling, contact hours and other resources. In contexts such as departmental and institutional meetings, such issues were marginalised if they did not have an easily identifiable link to NSS scores.
5.4 On the other hand, lecturers were reluctant to discuss the NSS with students for fear of being perceived as attempting to manipulate responses. Discussing the results with subsequent cohorts of students – in the context of the staff-student working groups – was also uncomfortable for some. Lecturers were often managing a tension between their professional values as educators and the emphasis in the NSS on 'satisfaction', with its implied construction of students as customers. While lecturers, in the main, felt supported by their line managers, such support was conditional upon their compliance in a shared project to improve NSS scores, and they anticipated increased pressure to do so as the effects of the new fees regime became apparent.
5.5 All four managers' narratives about the NSS referred to its emotive impact on lecturers:
People take it personally. It's not just cold hard data but an emotional badge of worth from the student and a potential stick to be beaten with by the institution. [Manager 1]

In the immediate aftermath of the publication of results, one manager saw his role as having nothing to do with 'the actual results', which 'comes later', but rather as dealing with the 'terrible weight' and emotion that comes with receiving the NSS results. In the view of this manager, feelings of powerlessness and futility among lecturers whose courses scored badly were compounded by the fact that they could not go back and address the students who produced the NSS results:
The NSS feels old, out of date and abstract. Our own [module] evaluation can be very detailed, timely and relevant. …[the NSS] can't be dealt with and deleted. It's kept in your email as an artefact that can't be engaged with. [Manager 1]

This sense of the NSS results as persistently present and incapable of deletion was echoed in frequent references, among both lecturers and managers, to the appearance and reappearance of NSS results as an agenda item in meeting after meeting throughout the academic year. Thus the NSS pervades the consciousness of individuals and the discursive spaces within and around formal meetings held within departments and at all levels in the institution.
5.6 The 'peculiar seduction' of the NSS as fact-totem was evident in the managers' narratives, which, despite awareness of its limitations, situated the NSS as part of a continuous cycle of improvement and took for granted the validity of NSS league tables that compared institutions with each other, and courses with each other. The NSS served as a 'lever' to induce change in low-performing (in NSS terms) courses. The allure of the NSS – for some managers – lay in its capacity to facilitate social comparison, both externally between institutions and internally between courses. Nonetheless, one manager and several lecturers believed that NSS results 'created work with little change'. An environment was created in which 'lots of people are trying to come up with ideas quickly' [Manager 3]. The consequence was that work is generated and 'we are all shattered from doing it and no-one is any happier' [Manager 3].
5.7 Some managers felt that the NSS results – because they generalised to the whole course and lacked qualitative qualification – offered no basis for response: 'there's no way of knowing what lies behind those percentages' [Manager 2]. The implication of this view is that the notion that the institution 'responded' to student feedback in the NSS was a sham. The perception of dissonance between the NSS results as fact-totem and an imagined underlying 'actual' student experience was expressed by other managers, who believed that NSS results could be improved through better communication with students and through the management of students' perceptions and expectations. Some lecturers and managers argued that students' misunderstandings – for example about the allocation of resources, the relevance of a particular part of their course, or the use of some terminology within the NSS – were often at the root of negative feedback. So the NSS results' mediation of 'the student experience' is, in this instance, constructed as inauthentic or illegitimate.
Central and peripheral
5.8 The NSS questionnaire is administered between January and April of students' final year. For many students their final year work is their central and most overwhelming concern, whereas the NSS is peripheral and weakly held in their consciousness. Many of the students who attended the four staff-student working groups (held during the period of the NSS's administration) were hard-pushed to recall what the acronym – NSS – stood for. A few were resentful of the way in which the questionnaire was administered: being repeatedly 'chased' by email, letter and finally by telephone. Some reported receiving what they regarded as intrusive late evening calls from Ipsos MORI staff inviting them to complete the questionnaire over the telephone. Others recounted hearing of such experiences from friends. The veracity of these claims, or the proportion of students with such experiences, is impossible to ascertain from this study, but such stories are now part of students' narratives about what it means to participate in the NSS.
5.9 On hearing how little prior knowledge her students had of the NSS, one lecturer commented:
We know a lot about the NSS. It really is a measure of us, as [academics].

In marked contrast to the world of the students, for the lecturers the NSS is central to their day-to-day working lives, especially for those with overall responsibility for particular courses, who feel personally accountable. As another lecturer put it, 'It haunts us'. Students' encounter with the NSS was thus a fleeting, one-off interruption in their final year: their consciousness of it is faint and it is a tenuous signifier of their experiences as students. Their lecturers, by contrast, operate in an environment in which the NSS is regarded as the sole objective measure not only of 'the student experience' but also of their own value as teachers and academics. Its presence is persistent throughout the academic year and recurs in an annual cycle that infiltrates and circumscribes the routines of their professional practice. The contrast between the ways in which students and academics experience the NSS is stark and adds to the sense of disproportion in its repetitive and relentless consumption within institutions.
Structuring 'the student experience'
5.10 Within the focus groups it was evident that some students saw their participation in the NSS as a contribution to a nationwide 'student voice' directed at government and arguing for better funding of higher education. There was also some awareness of the marketisation of higher education and the increasing visibility of the operation of universities' business functions, for example through (re)branding:
I think the [university] as a whole is being stripped of its character… With all the money it's difficult to see where it is going. Money should be spent on students not marketing the [university] as a corporate enterprise. [Focus group 1]

Some were also suspicious – perhaps wisely – that their responses would become 'ammunition' in national and institutional contexts for purposes that they did not support. For example, there was a fairly common suspicion that poor scores would be used by university management against lecturers:
S2: The problems are to do with largely a university structure… Hardly any of the problems are to do with the tutors. If our tutors could do exactly what they wanted to do, it could be a much better course.

S8: You don't know how much change can be made on the back of that because you don't know who is sitting higher and what is ammunition for the people who hold the statistics, not the teachers themselves.

S5: On the survey all it is is about the course itself. If we get a really negative mark off the back of this survey, I don't want that to reflect badly on our tutors, but I know the university's going to use that as an excuse to go on our teachers-

[Focus group 4]

The significance of these comments – both in relation to marketisation and the politics of the NSS – is that they indicate that some students' frame of reference extends beyond their courses. They attribute the underlying causes of their experiences not to course-level factors but to institutional policy situated within a national context. It is worth noting in relation to this point that the focus groups took place in 2011, before the current restructuring of student financing. The NSS occludes aspects of students' experiences by defining the course as the bounded unit of student experience, in line with the instrument on which it is based, the Course Experience Questionnaire (Ramsden 1991). Consequently, institutional structures, such as resource allocation and government funding, that interact with course experience can become peripheral.
5.11 Even within its own frame of reference – the course experience – the NSS does not address some significant issues which emerged prominently in students' open comments, focus groups and staff-student working groups: relationships with peers, and curriculum design and content. The issues raised in relation to peers pertained to competition, networking, the centrality of peers' support, the interaction between, and among, home and international students, and perceptions of differential resources being allocated to different departments. Perhaps it is unsurprising that these issues are not addressed in the ratings section of the NSS, because questions about students' engagement with each other would jar with the notion of 'satisfaction' with a provided 'experience'.
5.12 With respect to the curriculum there was talk not of the 'processes' of learning and teaching but of what material was included in and excluded from their courses, and how it related to students' initial motivations for choosing a course and their aspirations for the future. For example, one student talked about how much she appreciated that her course had addressed a 'unique set of concerns and processes' that were 'idiosyncratic to the institution'. Others wished they had gained a greater understanding of the rationale for curricular decisions (for example regarding the balance between technical/practical and theoretical components, or the degree to which they were able to specialise in their chosen areas of interest). The only question in the NSS that refers to curricular matters is an item within a scale on Teaching: 'The course is intellectually stimulating'. Students from three of the focus groups felt that this was narrow in two senses: first, the use of 'intellectually' excluded other aspects of their courses such as creative and professional development; second, they took issue with the assumption that it was the responsibility of the course to 'stimulate' them, and talked about the need for students to 'challenge themselves'. This suggests that the framing of the NSS questions allows little scope for students to 'rate' what is important to them: namely, the development of their identity as scholars, professionals or practitioners in particular fields. By casting students as passive receivers the NSS questionnaire rides roughshod over what can be a turbulent and intellectually, emotionally and creatively challenging time. These aspects of students' participation in HE are rendered meaningless, and work in relation to them is left unsanctioned, through the pervasive structure of the NSS.
5.13 The NSS questionnaire asks students to give their 'current' views of their course as a whole (emphasis in original). This presents students with a set of problems related to generalisation and overview. As one student put it in a focus group discussion:
…the questions are too sweeping. They are like, 'how do you feel?' It's like, 'in the second year, I felt great. Now in the third year, I think the third year's a massive problem, so you see what I mean? So you'd write down 'not good' but in fact you don't really think that. You have to conform to the question that you're asked, but it isn't giving a true representation of how you feel.' [Focus group 3]

Having to express a differentiated experience in aggregate form was frustrating. There was also a view (in both working groups and focus groups) that responses tended to be coloured by students' most recent experiences: 'You're probably answering in the now'. Memorable events from the start of their course had receded into insignificance by the time they came to respond to the NSS. There were similar problems in generalising about all the lecturers that students had encountered. One course group recounted how their negative view of one lecturer had coloured their responses to the questionnaire as a whole, particularly because they felt their complaints had not been adequately heard; they saw it as 'the university's fault that such a person is teaching' even though 'you can't say overall'. There were also indications that the emotional context at the time of responding had overwhelming significance, as the following quotations from two students attest:
So for me it was that time I was filling it in, I had just handed in my dissertation and I was really stressed, seriously… [Focus group 1]

I did take in my first and second year into consideration but because I was so stressed in doing my [project] I think it's mainly weighted by that. [Focus group 2]

Interviews with university managers revealed that there had been proposals to divert resources to the third year and to manipulate the timing of announcing assessment results in the third year in order to avoid negatively influencing students' state of mind. This behaviour is consistent with Espeland and Sauder's (2007) finding on the proliferation of gaming strategies in law schools as a result of league tables.
5.14 NSS questions have become articulated with what it means to be a student. As one student put it, 'It [responding to the NSS] gave me an idea of what I should be getting'. It is perhaps the most obvious effect of the administration of any questionnaire that it sustains particular systems, categories, and meaning structures. The values and assumptions that underlie the questionnaire enter into the student's frame of reference for her experience of higher education. The meaning that is being reproduced here pertains not only to 'what' the student should be getting but also to the very idea that the student's experience of HE can be encapsulated as 'should be getting'. This language of the customer, passively receiving a higher education 'provision', is to be contrasted with more complex conceptions of 'experience' as embedded in interactions and situated in continuity – or discontinuity – between multiple experiences (Dewey 1938/1997: 33–50). In their free text responses students often did position themselves as consumers. For example:
When I started the course fee was £3000, second year was £3075, third £3175. No-one has explained why and where our fees are going as the quality of the course did not improve at all but got worse…even basic equipment is missing… [NSS open comment]

Such value-for-money comments were always associated with complaints about provision – either material or, more typically, in relation to contact time with academic staff. Positive comments, however, were never couched in relation to value for money: no student ever said, 'My course is really good value, what a bargain!' The function of the discourse of the consumer in this context is that it provides a legitimate ground for complaint. Students can easily articulate the investment they have made in terms of fees; arguably, it is harder to be articulate about other kinds of investment, such as the identity work that is often part of entering and persisting in higher education, and the effort of adapting personal circumstances in order to participate in HE. Indeed there is little space (or time, if responding over the telephone) to go into such issues. Students construct themselves as consumers because the NSS questions emphasise 'satisfaction', but perhaps also because there is a dearth of other institutional discursive spaces that allow them to argue that things should be other than they are. It remains to be seen over the coming years how much more prevalent this discourse will become. It is important to note, however, that a marketised system need not necessarily go hand-in-hand with a survey instrument that constructs students as consumers.
Conclusion
6.1 The foregoing findings are drawn from a single institution, and no particular claims are made for its representativeness of a wider range of institutions. Nevertheless, it is possible to discern from the public discourse that surrounds the NSS, within universities and in the press, that the issues raised in this single-institution study are not unique. At the same time it is possible to speculate that the fact-totemic significance of the NSS within institutions will vary in relation to the significance of other disciplining structures, such as the degree of insulation from, or vulnerability to, league table positioning, and reliance on non-teaching funding streams (e.g. from research, industry, endowment and other private sources). Furthermore, an institution's position in the NSS relative to others itself influences the parameters of its discursive space: institutions that perform well in the NSS perceive they have nothing to gain from questioning it, while those that perform less well are already constructed as having suspect reasons for doing so.

6.2 The HE sector as a whole is interacting with a policy environment that demands 'objectivity' rather than expert judgement (Porter 1995: 89) and a compliant response that screens out the interests of its academics (Sabri 2010). Thus the NSS is constructed as fixed and incontrovertible: it 'will not go away' (Vaughan & Yorke 2010). Consequently, institutional managers feel obliged (probably more so in some institutions than in others) to use the NSS in mediating relationships with academic staff. Since demonstrating value for money seems paramount, there is little incentive to exercise one's political imagination to question its validity, within institutions as well as publicly, even if there are grounds to do so.
6.3 That the NSS has triumphed as a description of the phenomenon we have come to know as 'the student experience' is obvious to all who work in HE. The mechanisms through which this has happened are not unusual in the social and political life of statistics:
Some descriptions triumph over others through rather technical and practical matters arising from experimentation and invention; [and]… once they emerge descriptions will survive if it is possible to do things with them and use them to produce effects. [Osborne & Rose 1999: 373]

What is most interesting about the NSS is the sheer range of contexts into which it is catapulted as a fact-totem and the range of purposes that it is expected to fulfil: quality assurance, quality enhancement and public information to support student choice. At a national policy level and within institutions, its fact-totemic significance is evident in the ways in which ratings become intrinsic to the construction of 'problems' and to the formulation of strategies and targets. Managers use survey results to 'identify potential problems in the student experience, and to act on them quickly' (Ramsden et al. 2010: 3). For those who teach, in particular, but also for some who manage, publication of the NSS results can occasion a series of performative episodes that close down the possibilities for critical engagement with student feedback, as the NSS is cast as an indisputable expression of 'the student experience' and critique is interpreted as 'blaming the student'.
6.4 What also seems peculiar to the NSS is the cyclical relationship between the processes of its production and consumption. Lecturers are required to encourage students to complete the survey, then to take part in a 'constructive' discussion of the results, before being subjected to personal and professional judgement on the basis of those results. Professional identity narratives are co-opted into the production of the NSS when lecturers perform their exhortations to students to fill in the questionnaire. As the results are produced, the NSS becomes a form of public measurement of lecturers' competence to 'provide' a 'good student experience'.
6.5 At the same time, the processes of production and consumption are prised apart in the agency of students. Students are barely aware of the significance of the NSS as they 'produce' its results when it is administered during their third (and usually final) year of study. It is arguable that the survey's imposition of generalisation, requiring comments to reflect 'the whole course experience', is an intrinsic limitation of any survey instrument. However, this limitation is exacerbated by the timing of its administration, before students have even completed their course. The UK is unique in administering its SET instrument during students' study rather than after its conclusion, when reflection upon the whole experience might be more feasible. For prospective students, it is still too early to judge what – indeed whether – the NSS will signify within the KIS.
6.6 The 'things' institutions 'can do' with NSS data include analysing quantitative ratings and students' open comments, and keeping records at multiple levels (university, school, department, course, etc.) of results, responses, actions, and comparisons with past years, with other universities and with comparable courses. These records, and the innumerable rankings associated with them, add layers of meaning and emotion to the way that staff members experience the NSS. Sometimes highly charged discussions take place about culpability or credit, conflicting interpretations, and perceptions of validity. All of this can take place independently of any technical understanding of the statistics (de Santos 2009).
6.7 Despite appearing to operate at cross-purposes in relation to the NSS, students, lecturers and managers arguably have common interests insofar as they all see a dissonance between what they value about higher education and the substantive focus of the NSS questions. Yet there does not seem to be an appetite for questioning the NSS as 'fact' or denting its status as 'totem'. Rather than acknowledging their individual and collective agency in perpetuating its social and political meaning, there is a sense of powerlessness. The prevailing perception is that the political environment dictates compliance and that the only option available is to work to improve NSS scores. However, it should be said that none of the managers or tutors perceived themselves to be exclusively concerned with NSS results; rather, they believed that the results were related to, or indicative of, wider issues that did merit their attention. At the same time as speaking eloquently of the limitations of NSS data and the detrimental impact of their consumption, managers, and to a lesser extent lecturers, continued to have faith that if these underlying problems were tackled, they would see the results of their efforts in better NSS scores. This is reminiscent of the 'peculiar seduction' observed by Sauder and Espeland (2009: 74) with respect to law school league tables.
6.8 It would be possible to do different things, and have different effects, using an alternative description of 'the student experience'. The American NSSE, and the more recent Australian equivalent, use such concepts as engagement, academic and social integration, time on task, student involvement and quality of effort (Kuh 2009). It is significant that the NSSE's underpinning concepts are not compatible with a view of students as customers, and this in a national context where, arguably, higher education is more marketised than it has been, thus far, in the UK. The NSSE does not have the mandatory status of the NSS, and institutions that use it are not obliged to publish their results. Whilst the NSSE has its critics (e.g. Porter 2011; Hagel et al. 2012), as an example it suggests that things can be other than they are. Neither the construction of students as customers nor the enshrinement of the NSS as a measure of their satisfaction is a given, even in the post-Browne world of UK higher education. Building alternative sources of data about students' participation in higher education may go some way to reclaiming the appropriated ground of communication within and about it.
Acknowledgements
I would like to thank members of the critical higher education research study group at KCL for an invaluable discussion of an earlier draft of this paper, and three anonymous reviewers of this journal for their insightful comments. I am also indebted to the many research participants who took part in the two research studies on which this paper draws, and to the university leaders who commissioned the research and engaged wholeheartedly with its findings.
References
BECHER, T. & Trowler, P. (2001) Academic Tribes and Territories: Intellectual Enquiry and the Cultures of Disciplines (2nd edition). Buckingham: Open University Press/SRHE.
BROWNE, J. (2010) Securing a Sustainable Future for Higher Education: An Independent Review of Higher Education Funding and Student Finance in England, October. <http://hereview.independent.gov.uk/hereview/report/>
DAVIES, P., Slack, K., Hughes, A. & Mangan, J. (2010) Understanding the Information Needs of Users of Public Information About Higher Education, Report to HEFCE, with Oakleigh Consulting Limited, May. Retrieved April 27, 2011, http://www.hefce.ac.uk/pubs/rdreports/2010/rd12_10/rd12_10b.pdf.
DEPARTMENT FOR BUSINESS, INNOVATION AND SKILLS (BIS) (2011a) Higher Education: Students at the Heart of the System. Retrieved 15 December 2011, from <http://discuss.bis.gov.uk/hereform/white-paper/>
DEPARTMENT FOR BUSINESS, INNOVATION AND SKILLS (BIS) (2011b) Higher Education: Students at the Heart of the System, Cm 8122. London: HMSO.
DE SANTOS, M. (2009) 'Fact-totems and the statistical imagination: The public life of a statistic in Argentina 2001', Sociological Theory, 27(4) p. 466–489.
DEWEY, J. (1938/1997). Experience and Education. New York: Simon & Schuster.
ESPELAND, W.N. & M. Sauder (2007) 'Rankings and reactivity: How public measures recreate social worlds', American Journal of Sociology, 113(1) p. 1–40.
GALBRAITH, C., Merrill, G. & Kline, D. (2011) 'Are student evaluations of teaching effectiveness valid for measuring student learning outcomes in business related classes? A neural network and Bayesian analyses', Research in Higher Education, 52(6) p. 1–22.
GUILE, D.J. (2010) 'Learning to work in the creative and cultural sector: New spaces, pedagogies and expertise', Journal of Education Policy, 25(4) p. 465–484.
HAGEL, P., Carr, R. & Devlin, M. (2012) 'Conceptualising and measuring student engagement through the Australian Survey of Student Engagement (AUSSE): a critique', Assessment and Evaluation in Higher Education, 37(4) p. 475–486.
HARDING, J. (2012) 'Choice and information in the public sector: a higher education case study', Social Policy and Society, 11 p. 171–182.
HIGHER EDUCATION ACADEMY (HEA) (2010) Using the National Student Survey for Enhancement. <http://www.heacademy.ac.uk/resources/detail/ipp/Issue5_NSS> Accessed 1 August 2010.
HEFCE (2002) 02/15 Information on Quality and Standards for Higher Education. Bristol: HEFCE.
HEFCE (2004) 04/22 National Student Survey 2005: Consultation. Bristol: HEFCE.
HIGHER EDUCATION FUNDING COUNCIL FOR ENGLAND (HEFCE) (2011) 2011/18 Statement of Policy Provision of information about higher education: Outcomes of consultation and next steps. Bristol: HEFCE.
HICKEY, T., Devenney, M., Noakes, L. et al., University of Brighton (2010) 'NSS Result: Unsatisfactory', THE, 30 September 2010. <http://www.timeshighereducation.co.uk/story.asp?storyCode=413677&sectioncode=26>
HOLMWOOD et al. (2011) 'Clarity for student buyers', THE, 10 February 2011. <http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=415146>
KUH, G. (2009) 'The National Survey of Student Engagement: conceptual and empirical foundations', New Directions for Institutional Research, 141(Spring) p. 5–20.
LE GRAND, J. (2003) Motivation, Agency, and Public Policy: Of Knights and Knaves, Pawns and Queens. Oxford: Oxford University Press.
MARSH, H.W. (2007) 'Students' evaluations of university teaching: A multidimensional perspective', in Perry, R.P. & Smart, J.C. (Eds.), The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective (p. 319–384). New York: Springer.
MIDDLEHURST, R. (1997) 'Enhancing quality', in Coffield, F. & Williamson, B. (Eds.), Repositioning Higher Education. Buckingham: SRHE/ Open University Press.
MORLEY, L. (2003) Quality and Power in Higher Education. Buckingham: SRHE/Open University Press.
National Student Survey. Available at: <http://www.thestudentsurvey.com/faqs/faqs_4.html> (accessed 30 January 2012)
National Survey of Student Engagement. Available at: <http://nsse.iub.edu/pdf/survey_instruments/2011/NSSE2011_US_English_Paper.pdf> (accessed 30 January 2012)
OSBORNE, T. & Rose, N. (1999) 'Do the social sciences create phenomena?: the example of public opinion research,' British Journal of Sociology, 50(3) p. 367–396.
PENNY, A.R. & Coe, R. (2004) 'Effectiveness of consultation on student ratings feedback: a meta-analysis', Review of Educational Research, 74(2) p. 215–253.
PORTER, S.R. (2011) 'Do college student surveys have any validity?', The Review of Higher Education, 35(1) p. 45–76.
PORTER, T.M. (1995) Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.
RAMSDEN, P. (1991) 'A performance indicator of teaching quality in higher education: the course experience questionnaire', Studies in Higher Education, 16(2) p. 129–150.
RAMSDEN, P., Batchelor, D., Peacock, A., Temple, P. & Watson, D. (2010) Enhancing and Developing the National Student Survey. London: Centre for Higher Education Studies, Institute of Education. <http://www.hefce.ac.uk/pubs/rdreports/2010/rd12_10/>
REAY, D., David, M.E. & Ball, S. (2005) Degrees of Choice: Social Class, Race and Gender in Higher Education. Stoke on Trent: Trentham Books.
RICHARDSON, J.T.E., Slater, J. et al. (2007) 'The National Student Survey: Development, findings and implications', Studies in Higher Education, 32(5) p. 557–580.
RICHARDSON, J.T.E. (2012) 'The role of response biases in the relationship between students' perceptions of their courses and their approaches to studying in higher education', British Educational Research Journal, 38(3) p. 339–418.
SABRI, D. (2010) 'Absence of the academic from higher education policy', Journal of Education Policy, 25(2) p. 191–205.
SABRI, D. (2011) 'What's wrong with "the student experience"?', Discourse: Studies in the Cultural Politics of Education, 32(5) p. 657–667.
SAUDER, M. & Espeland, W.N. (2009) 'The discipline of rankings: tight coupling and organizational change', American Sociological Review, 74(1) p. 63–82.
SPOOREN, P., Mortelmans, D. & Thijssen, P. (2012) '"Content" versus "style": acquiescence in student evaluation of teaching?', British Educational Research Journal, 38(1) p. 3–21.
VAUGHAN, D. & Yorke, M. (2010) 'I can't believe it's not better': The Paradox of NSS Scores for Art & Design. ADM-HEA Subject Centre of the Higher Education Academy and the HEAD Trust. <http://www.adm.heacademy.ac.uk/projects/adm-hea-projects/national-student-survey-nss-project>