Linking rights, needs and fairness in high-stakes assessments and examinations

Summary

This paper presents a novel explanation for the continued absence of a children’s rights strategy within high-stakes educational assessment, with reference to the competing purposes of high-stakes assessments and group-based constructions of fairness in assessment. We provide an original critique of group-based perspectives on the validity of assessment accommodations which supports an individual perspective on fair educational assessment. From this, the (almost forgotten) concept of ‘student assessment needs’ is (re-)introduced as a central axiom, to be constructed through feedback from, and dialogue with, students about their experience of high-stakes assessments, giving primacy to their purpose as an attainment demonstration opportunity for the student. To promote a new movement towards student participation in educational assessment processes and reforms, we propose ‘rights respecting assessment’, complementary to UNICEF’s ‘Rights Respecting Schools’ initiative, in which regular system-wide student representation would contribute to the development of high-stakes examination systems. In recognition of this aspiration, this paper is co-authored with a recent school-leaving age examination candidate.

Key words: educational assessment, children’s rights, fairness, assessment needs, high-stakes.

The promotion of children’s rights within education

Children’s rights across many sectors in many countries have been promoted by the four guiding principles of the UNCRC, which support non-discrimination, acting in the child’s best interests, the child’s right to survival and development, and respecting the views of the child (Britto & Ulkuer, 2012; Children’s Rights Alliance for England, 2016; Department for Children Schools and Families (DCSF), 2009; Lee & Svevo-Cianci, 2009; UK Children’s Commissioners, 2015).
Particularly evident within educational legislation and policy guidance is the development, from the principle of respecting children’s views, of children’s participation within decision-making at school and individual levels, exemplified in the establishment of school councils and other student forums, student governors and person-centred planning (DCSF, 2010; DfE, 2014b; Hall, 2017; HM Government, 2002; White & Rae, 2016). Participatory practice has been supported by a body of research exploring and evaluating methods for eliciting and including student views, or ‘voice’, on a range of issues affecting their educational experiences, extending to children acting as researchers themselves within school settings (e.g. Adderley et al., 2015; Burton, Smith & Woods, 2010; Fayette & Bond, 2017; Groundwater-Smith & Mockler, 2016; Hall, 2017; Harding & Atkinson, 2009). Hall (2017) provided a critical perspective on policy and strategies for student engagement in education, arguing that these are shaped primarily by institutional or managerial requirements and the school improvement agenda. She challenges educationists to examine the role of student voice in the active and democratic transformation of educational systems, in which students are positioned not as consumers of education, but as co-producers, through dialogue, of relevant knowledge, and active collaborators in the development of educational institutions, systems and processes: ‘Voice can exist entirely by itself; autonomous and something to be heard, or not. Talk, on the other hand, is by implication a bi-directional activity’ (p.188). However, whilst the conceptualisation of children’s rights within education generally has become significantly embedded since the inception of the UNCRC in 1989, consideration of children’s rights specifically in relation to educational assessment has been relatively neglected (cf. Barrance & Elwood, 2018; Elwood & Lundy, 2010; Philips, 1994).
It is perhaps relevant that whilst the UK government’s office for examinations regulation has a focus upon maintaining standards and confidence in public qualifications, examinations and assessments, there is no direct evidence that student participation (other than as examination candidates) features within that remit (Ofqual, 2017a). Yet General Comment No.1 on Article 29(1) of the UNCRC (UN, 2001) identifies the importance of educational processes being child-centred, relevant to later life, and promoting the child’s best interests, all of which may have particular interpretations within the context of educational assessment. Elwood and Lundy (2010) speculated upon the reasons for the disconnection between the children’s rights agenda and educational assessment: ‘Whether this is due to active resistance, general apathy or just lack of awareness is unclear’ (p.345). In this paper, we give further consideration to the observed disconnection, highlighting two specific factors, namely: the competing, imperfectly reconcilable, purposes of high-stakes educational assessment; and group-based, rather than individually-based, constructions of notions of ‘fairness’ in examinations, relating to a strong concern for standardisation of examination conditions.

Purposes and constraints of high-stakes educational assessment

Reflection upon the structure or processes of high-stakes educational assessments, particularly those at school-leaving age, requires an appreciation of the purposes for which they were developed and have been maintained, which are: accountability of schools and teachers; preparation of students for future employment and educational opportunities; and student selection. Since their inception, school examinations have consistently been used as an accountability measure for the quality of teaching and standards of learning in schools, and for their continuous improvement (Department for Education, 2014; Mathews, 1985; Stobart, 2008).
Closely related is a concern with the integrity of the methods and processes by which students are assessed, without which accountability cannot be maintained (Department for Education, 2014; Marshall, 2017; Tattersall, 1994). However, the predominant use of student examination outcomes as a measure for school accountability has long since attracted an ambivalent response from educationists: ‘My argument is that while high-stakes accountability testing, with its narrow measures and emphasis on rapid improvement, can provide short-term benefits, it rapidly degrades and becomes counterproductive’ (Stobart, 2008, p.116) (cf. also Herbert, 1889). In a ‘performative’ culture, schools are currently experiencing increased pressure to ensure improved student examination attainment (Flitcroft, Woods & Putwain, 2017; Williams et al., 2014), and detrimental psychological consequences of the pressured school examinations regime have been highlighted as a significant mental health concern for young people (Benn, 2017; Marshall, 2017; Rodway et al., 2016). At the same time, the case for linking schools’ accountability to student examination outcomes has been supported by the fact that such outcomes are incontestably instrumental for some future employment opportunities (Denscombe, 2000; Department for Education, 2014). However, in a landmark critical examination of assessment and testing, Stobart (2008) identified a tension between this and the purpose of school examinations in selection by merit (cf. also Mathews, 1985; Wolf, 1995), which arguably maintains an observed gap between many occupational skillsets and the school curriculum or examination syllabi. Indeed, Stobart (2008) noted that historically examinations were primarily considered to be an ‘ordeal’, or a ‘trial’, and the extent to which this is still considered to be the case might be relevant to the selective purpose of educational assessments.
Tension between the certification and selection functions of school examinations raises a perennial question about how finely graded the outcomes of school examinations should be, and what different consequences arise for education providers and for students as a result of different grading structures (Herbert, 1889; Stobart, 2008). Different programmes commonly adopt different outcome structures, though the rationale for this in relation to the intended selective or certification functions of the award is often not made explicit. For example, it is not clear why England’s school leaving award, the GCSE, divides its cohort across ten possible grade outcomes whereas English university undergraduate degree programmes use only three broad categories of award, with almost half of awards being within one category (Higher Education Statistics Agency, 2017; Ofqual, 2017c). It has been acknowledged that the varied purposes of educational assessment (to hold schools accountable, to prepare students for further education and employment, and to act as a meritocratic selection device) may not be fully reconcilable (Mathews, 1985; Stobart, 2008). With no ideal solution envisaged, national political philosophies relating to school accountability are seen to have a significant, and increasing, influence upon how examination systems are developed to balance school, teacher and student interests (e.g. Isaacs, 2010; Marshall, 2017; Tattersall, 1994; Waters, 2015; Wolf, 2009). Within this, it has also been recognised that, whatever the broad purpose of an assessment system, its proximal purpose for the individual student is as an attainment demonstration opportunity linked to a programme of study (Peacey & Peacey, 2007; Stobart, 2008; Waterfield & West, 2010): ‘The purpose of an assessment is to determine the student’s achievement or skills. Assessments must be rigorous regarding standards so that all students are genuinely tested against an academic benchmark.
However, they must be flexible enough to allow all students an equal opportunity to demonstrate their achievement. In all cases this means being very clear about what is being assessed so that modifications can be made without compromising standards’ (Foulkes, 2003, cited in Peacey & Peacey, 2007, p.3). However, the competing purposes of school examinations are also subject to constraints. Mathews (1985) made the point that school examination systems are replicated by those in power within government, examination regulators, and awarding organisations. By personal experience and professional background, those leading and managing aspects of the examinations system have been subjected to, and usually successful within, the current socially constructed reality of the existing examinations systems, and so the development of such systems may have been more or less consciously limited by that experience (cf. also Eckstein & Noah, 1993; Stobart, 2008). Furthermore, it is often overlooked that the structures and developments of school examination systems must be logistically and financially manageable for schools and colleges, within tightly constrained time frames and human resource limitations (International Examination Officers’ Association (IEOA), 2018; Mathews, 1985; Waterfield & West, 2010; Wolf, 1995; Woods, 1998, 2007).

Fairness in assessment: Standardisation versus diversification

Linked to all of the identified purposes of high-stakes educational assessment, there has been a frequently articulated value placed upon fairness, or the avoidance of bias in the outcomes and processes of high-stakes assessment (James & Hannah, 2017; Scott, Webber, Lupart, Aitken & Scott, 2014; Stobart, 2005, 2008; Woods, 2003, 2007). Stobart (2008) identified three basic considerations: clarity of assessment purpose; fitness-for-purpose of the assessment method; and intended and unintended consequences of the assessment.
Most importantly, Stobart (2008) observed that assessments may inadequately represent, or sample, assessment objectives, or measure something other than what is outlined within the objectives, or measure in a format which undermines the assessment purpose. Notably, Brooks, Case and Young (2003) examined the case for time limits within state-wide testing in the United States and identified an unresolved tension between the desire for standardisation of testing conditions, and optimal and fair student attainment demonstration (cf. also Peacey & Peacey, 2007; Duncan & Purcell, 2017). Similarly, Brookhart and Bronowicz (2003) revealed from student observation how assessments may in reality confound the measurement of attainment with a range of associated, but separate, skills such as study skills, memory capacity, speed of assimilation and handwriting speed or capacity. In the context of the dominant inscriptive medium of examinations, Woods (2003, 2004) questioned the teaching of ‘examination technique’ and whether the commonplace acceptance of this construct signifies some implicit and unstated assessment objective(s). Such research has highlighted the equal importance of scrutinising what may be explicit and implicit within an assessment scheme and its associated objectives (clarity of purpose), and from this evaluating the media, content, conditions, and available differentiation of the context in which the objectives are to be examined (fitness-for-purpose) (cf. Foulkes, 2003, cited in Peacey and Peacey, 2007). Against a background of continuing public concern about social biases in educational outcomes (e.g. Francis, Mills & Lupton, 2017; Legewie & DiPrete, 2012), Stobart (2008) also highlighted how the content and process of examinations may affect accessibility in a way which has differential consequences for students of different attainment levels, and those from different social and gender groups. 
Stobart (2005) has questioned whether the dominant inscriptive medium of school examinations, and educational assessments more widely, has disadvantaged certain cultural groups in which there may be greater use of oral communication. We would extend this to consider the relevance of a predominantly manually inscriptive medium of school examinations to the demands of contemporary employment and further or higher education. In reality, many employees, and most higher level academic students, will produce their most important work using technological hardware and software support, and with some degree of flexibility in the availability of time and other provisions, allowing sufficient time and energy for important checking and editing. Stobart (2005) called for assessment systems to review and develop their approaches to student diversity (although arguably the direction of educational assessment policy in the UK since that time has been towards a narrower, less varied approach – e.g. reduction of coursework; emphasis on terminal examinations; ending of controlled assessments (DfE, 2014a; Marshall, 2017)) (cf. also Peacey & Peacey, 2007). Similarly, Scott, Webber, Lupart, Aitken and Scott’s (2014) wide-ranging study of student assessment in Canada found that many teachers, students and parents expressed dissatisfaction with the assessment approach for gifted students and students with special needs. However, whilst some commentators have linked increased diversification of assessment formats to the reduced currency of a given award (IEOA, 2018; Mathews, 1985; Eckstein & Noah, 1993), Peacey and Peacey (2007) highlighted the development of ‘inclusive assessment’ design in the United States and previous developments in England by the (now defunct) National Assessment Agency, which stipulated that national tests should be manageable for all students and amenable to completion by all students within the examination time allowance.
Fairness in assessment: Accommodations and the individual perspective

Under the auspices of national disability and equalities legislation, assessment accommodations or access arrangements are usually put in place for students with disabilities, or additional or special needs, to provide such candidates with a ‘level playing field’ (Cumming, 2008; Stobart, 2008; James & Hannah, 2017; Joint Council for General Qualifications (JCQ), 2017). However, from the scant data available, parents, young people and teachers have all expressed dissatisfaction with the provisions of assessment accommodations (Scott et al., 2014; Woods, 2007; Woods, Parkinson & Lewis, 2010), which, we propose, arises from ‘group-based’ constructions of fairness rooted in the concern with standardised examination conditions. In respect of assessment accommodations, a recent group comparison study by James and Hannah (2017) of the effectiveness of an accommodation for dyslexic students in Ireland raises two important points for consideration in relation to fairness and the perspective of the individual examination candidate. The researchers framed their study of the spelling and grammar waiver (SGW) accommodation within the interaction (or differential boost) hypothesis, which proposes that a legitimate examination accommodation for a disabled student should improve the assessment outcome for the disabled student but not confer any advantage (boost) to a non-disabled student (James & Hannah, 2017; Phillips, 1994; Sireci, Scarpati & Li, 2005). Since the findings of their study showed a mean boost for both the dyslexic and non-dyslexic group, James and Hannah (2017) concluded that the SGW accommodation is ‘not valid’. However, this might not be the conclusion from the point of view of any individual candidate for two reasons.
First, it can be argued that where non-disabled candidates also benefit from a special accommodation, and assuming that the accommodation rightly does not infringe the explicit assessment objectives for any candidate, there is likely an implicit assessment objective operating for all candidates. Since all students are subject to that implicit assessment objective, the fair solution is either to allow the provision to all candidates, or to make explicit and specific within the assessment objectives the spelling and grammar requirements of the examination for all students. By analogy, research designed according to the interaction hypothesis would conclude that accessible building and bathroom provisions are inappropriate wherever they also support access for non-disabled people: automatic doors, for example, are often put in place for disability access yet are routinely used by the non-disabled. We contend that this misconception of the utility of the interaction hypothesis is premised upon an inadequate or incomplete specification of an examination’s assessment objectives, leading to a retrograde treatment of the accommodation as a ‘concession’ to the disabled candidate, rather than as an access arrangement (Elliott & Roach, 2002; Hedderley, 1992; Peacey & Peacey, 2007; Woods, 2003). Second, James and Hannah’s (2017) analysis at the group level overlooks their report that some individual students in both groups gained a boost from the SGW accommodation whilst others in each group did not gain any boost. Indeed, for some, test scores fell with provision of the accommodation; therefore, in some dyslexic/non-dyslexic pair comparisons of the application of the SGW accommodation, the dyslexic candidate did gain a higher score whereas the non-dyslexic candidate did not.
In short, the analysis fails to reconcile the general findings with the idiographic issue of fairness for the individual candidate for whom an allowable provision may support enhanced access (Miller & Frederickson, 2006). The appropriate positioning of individual candidate experience is pertinent also to the issue of how ‘eligibility’ for examination accommodations is understood for some provisions. In order to avoid examination accommodations conferring an ‘unfair advantage’ upon a recipient (a puzzling tenet where an accommodation is allowable to another candidate in the same examination), the application processes for assessment access arrangements (accommodations) in England, Northern Ireland and Wales stipulate disability/difficulty-level criteria, irrespective of attainment demonstration needs in the context of any specific examination (JCQ, 2017). These criteria stipulate that for a candidate with learning difficulties to access extra time for an examination they must present with: ‘at least one below average standardised score of 84 or less which relates to an assessment of: speed of reading; or speed of reading comprehension; or speed of writing; or cognitive processing measures, which have a substantial and long term adverse effect on speed of working’ or ‘has at least two low average standardised scores (85-89) which relate to two different areas of speed of working’ (JCQ, 2017, p.22). However, the construction of ‘substantial impairment’ (JCQ, 2017, p.22) in this way obscures the functional evidence of actual attainment demonstration need of an individual candidate, as highlighted by this teacher’s comment: ‘We have several students who just do not qualify but deserve the chance to show what they can do’ (Woods, 2007, p.93). Put simply, a candidate with a great deal of attainment to demonstrate may conceivably be at a disadvantage even with speeds of working at low average levels.
Theoretically, such eligibility criteria are premised upon a notional discrepancy between processing measures and a proxy attainment demonstration mean standardised score (i.e. 100), though discrepancy criteria for the identification of learning difficulties have long since been scientifically discredited, and are no longer simplistically presented within diagnostic criteria (e.g. American Psychiatric Association (APA), 2013; Restori, Katz & Lee, 2009; Vellutino, Scanlon & Lyon, 2000). A further question can be raised about the linking of eligibility for examination accommodations with substantial impairment. From the perspective of an individual candidate, whose attainment is to be finely graded through narrow grade boundary ranges, modest or even small levels of impairment may be of personal and practical significance. The issues of group-based comparison and the rightful primacy of the explicit assessment objectives in the assessment of an individual student are succinctly expressed by this teacher participant in Woods’ (1998) survey of examination accommodation processes, as she complained about the disallowance of a special provision to an individual student on the basis of normative, non-examination student data: ‘…all [examination accommodations] would have given him would have been more of a chance to answer the questions because he would have someone to read them to him. It’s not sort of affecting the achievements of any other student and it’s not giving a false picture of him either because he might just get a G [lowest grade]’ (p.199).
Fairness in assessment: Children’s rights and assessment needs

The development of special accommodations, or access arrangements, within examination systems has often been explicitly linked not only to the notion of fair assessment but also, through national legislation, to the rights of children and young people with disabilities under the auspices of the United Nations’ Conventions on the Rights of the Child (UNCRC) (UN, 1989) and on the Rights of Persons with Disabilities (UNCRPD) (UN, 2007) (e.g. UK HM Government, 2010 (Equality Act); JCQ, 2017). More generally, Elwood and Lundy (2010) have also identified the similarity between principles of fair assessment and questions relating to children’s rights within policy and practice of educational assessment. Notably, the UK Government’s most recent guidance on special educational needs and disability does make reference to both the UNCRC and UNCRPD and stipulates a range of participatory provisions. Such provisions relate to teaching and learning, and multi-agency cross-sector co-ordination, at both individual student and strategic or whole service levels (e.g. person centred planning, utilising service user feedback), but the guidance offers no similar considerations in respect of educational assessment (UK HM Government, 2015). However, English examinations guidance did, between 1995 and 1998, make extensive reference to the concept of special assessment needs (JCGCSE, 1995; JCQ, 1998). Under current guidance, special educational needs are defined exhaustively and contextually as those which call for special educational provision to be made for an individual student (UK HM Government, 2015).
By analogy then, special assessment needs would arguably be a relevant concept to any student for whom ordinary assessment provisions were inadequate, and would focus upon all of the allowable assessment provisions of benefit to an individual candidate’s attainment demonstration, rather than those identified through comparisons with groups of ordinary candidates (e.g. James & Hannah, 2017; Sireci, Scarpati & Li, 2005). Like contemporary interpretations of special educational needs, the concept of special assessment needs would facilitate a central place for the candidate perspective within educational assessment, at both individual and strategic levels. Furthermore, just as the UK Government’s most recent Code of Practice on special educational needs (HM Government, 2015) takes as its starting point the provision and monitoring of high quality teaching for all students, a heightened concern for the development of inclusive and fair educational assessments for all students would likely follow from a mobilisation of the concept of special assessment needs (Woods, 2000; Peacey & Peacey, 2007). This would parallel the observed development of inclusive teaching practices for special educational needs (Hegarty, 1982; Wheldall, 1995; McLeskey & Waldron, 2007). For example, in an examination without specified speed completion assessment objectives, special assessment needs arising from time insufficiency might bring into sharp focus the adequacy of ordinary assessment provisions and possibly recalibrate the boundary between ordinary and special examination provisions. Notably, Peacey and Peacey (2007) highlighted the likely benefits of readers and extra time to very many candidates in national curriculum assessments.
We therefore propose that, just as the more generic concept of special educational needs is utilised to support children’s and families’ participation within teaching and learning processes at different levels, the more specific concept of student assessment needs could be remobilised to create opportunities for dialogue and participation within developments of educational assessments.

Promoting dialogue with students about educational assessment

We have argued that children’s rights are systematically occluded from the development of systems of formal educational assessment by a focus on the school accountability purposes of assessment; by a group-based, rather than individual, perspective on fair assessment; and by conceptualisations of educational need which focus upon processes of teaching and learning, rather than assessment. Considering the constraints imposed by the already resource-intensive processes of educational assessment systems, it is perhaps not surprising then that systems for student feedback about the experience of formal educational assessments, such as England, Northern Ireland and Wales’ GCSE, are conspicuous by their absence. Resourcing for systems of mass student user feedback on such examinations would exert a considerable resource demand upon schools and awarding bodies alike. More importantly, perhaps, given the multiple and sometimes competing purposes of formal educational assessment, there would be no driver for this either from within schools or awarding bodies. Arguably, those who would be in a position to mobilise the organisation of student feedback would neither know what to do with it, nor indeed what to ask in the first place: ‘In educational and psychological testing situations it is almost universal for us not to collect feedback. We do not want it. We are not sure what we would do with it if we got it. But I suggest that we have not pursued feedback because we are reluctant to receive it.’ (Davis, 1993, p.236).

It is hard to see that much has changed in the 25 years since Davis’ (1993) challenging statement, leading to the conclusion that, in respect of educational assessment, Hall’s (2017) aspiration to place educator-student dialogue at the centre of co-produced educational transformation is lacking a fundamental first step: eliciting the students’ voices. (It should be noted that a handful of resourceful (or resourced) students have contributed to some open consultations on educational assessment, e.g. DfE, 2017). Even in relation to access arrangements for GCSE examinations, which are, through the provisions of disability rights specified within the UNCRC and UNCRPD, more closely linked with the rights of young people within educational assessment, it is notable that the English examinations regulator’s Access Consultation Forum, which advises upon the development of special provisions for GCSE candidates with disabilities or special educational needs, welcomes dialogue from all of the GCSE awarding bodies, and from disability groups, but not specifically from representatives of examination candidates themselves (Ofqual, 2017b). (Interestingly, none of the UK GCSE awarding bodies’ websites currently hosts a section/page for examination candidates.) At the same time, academic research on the student perspective on educational assessment is relatively sparse, sporadic, and generated from outside the assessment systems upon which it focuses, but, where it does exist, is invariably illuminating and potentially useful. In the US, Brookhart and Bronowicz (2003) surveyed student views on their classroom assessments, revealing the importance of subject value and students’ strategies for developing self-efficacy. In the UK, with the aim of elucidating ‘individual examination functioning’, Woods (2000) examined the perceptions of assessment needs of GCSE students who did not have special educational needs.
He found that 10% of students liked some aspect of their examinations, that 71% reported running out of time in at least one examination paper, and that a majority of students did not prefer the strictly timed, inscriptive mode of assessment which they experienced most often. (Woods (2003) reported an empirical investigation in which non-disabled students did indeed make substantial and effective use of extra time and breaks within GCSE trial examinations.) Amongst other improvement suggestions, the students reported that they would like greater time allowance flexibility, the availability of a break, and the facility to look up difficult words for meaning and/or spelling. Woods’ (2000) study also revealed a student perception that examinations were unfair for some students as an attainment demonstration opportunity, but fair in the sense that the regime was applied to everyone. Most recently, Barrance and Elwood (2018) have referenced children’s participatory rights in their survey of students upon national GCSE reform in Northern Ireland and Wales. The researchers found that students expressed clear views about recent reforms, supporting retention of modular and linear courses, controlled assessments and tiering; students felt strongly that they should be consulted on matters of educational reform. One likely implication of these findings is that students in England, if consulted, might not support the recent GCSE reforms, which were devised in the absence of a bank of student feedback, or indeed any specific consultation at the planning phase. Interestingly, Flitcroft, Woods and Putwain (2017) found that teachers acknowledged little previous insight to student assessment feedback but, when provided with it, were unequivocal in identifying its utility to their practice, at both classroom and school levels. Whilst such findings suggest a promising start to teacher-student dialogue, continuing ‘transformations’ from this (cf. 
Hall, 2017) would require an extensive system of action planning and review, in tandem with ongoing student feedback organised through schools, awarding bodies and regulators.

Towards 'rights respecting' educational assessment: Levers for change in situating student voice

The available research on student experience and assessment needs suggests that dialogue between students, examination boards, and assessment regulators about students' educational assessment experience and assessment needs will privilege a focus upon the purpose of examinations as a fair attainment demonstration opportunity for the student (Flitcroft et al., 2017; Scott et al., 2014; Woods, 2000). Arguably, it is this perspective which is less easily accessed by examination boards and regulators, either directly or through teachers or teacher representatives, particularly in the absence of systematic school processes to gain student feedback data. From a survey of secondary school teachers in England, Woods (2007) found that only 25% of teachers thought that GCSE access arrangements were fair, and that 70% would support an extension of access arrangements such as extra time, readers, scribes, and use of word processors, though it is not clear how these views may correspond to those of students themselves. Furthermore, teacher feedback from Woods (1998) identified the difficulty that teachers may have in identifying student assessment needs objectively against a background of contextual knowledge. We have argued that the elicitation of students' voices in respect of their educational assessments is their participatory right, and an essential element for student-educator, or student-educationist, dialogue which should inform and transform students' experiences of assessment (cf. also Woods, 2010).
Barrance and Elwood (2018) have called for students to be consulted upon planned reforms in educational assessment; we would go further and suggest that they should be fully involved in the planning of any such reforms. Hall (2017) posited that the voice of children and young people, and the opportunity for meaningful dialogue, are less highly valued in some areas than in others. From the evidence of lack of student feedback and representation, we propose that educational assessment is indeed one such area (cf. also Elwood & Lundy, 2010), and so a strong and visible mandate for comprehensive examination candidate participation is needed. Just as many individual schools have developed 'rights respecting' provisions and ethos under UNICEF's Rights Respecting Schools Award (RRSA) (UNICEF-UK, 2017), we propose development by schools, awarding bodies and regulatory bodies of systems for 'rights respecting educational assessment'. Integral mass feedback from all student 'users' of educational assessments in school, such as the GCSE in England, Northern Ireland and Wales, should be the starting point for student participation in the aims, processes and development of educational assessment, and would also provide an opportunity to relate such feedback to other rights-based principles, such as assessment provisions being in a child or young person's best interests and being relevant to later life (UN, 1989; 2001; Stobart, 2008). However, educational assessment is notoriously resistant to transformation, on account of the established balance of its multiple purposes and constraints and the perceived fairness of standardised examination conditions (Eckstein & Noah, 1993; Elwood & Lundy, 2010; Mathews, 1985; Scott et al., 2014).
Stobart (2008) outlined an agenda for 'reclaiming' the purposes and scope of assessment but, almost a decade later, Benn (2017) bemoaned the continued rise of examinations as devices for selection, and Marshall (2017) criticised the diminishing influence of the teaching profession in England in developing student assessment that is relevant to a wide range of students and to the teaching programmes which precede it. Peacey and Peacey (2007) and Woods (2010) have both argued for the importance of developing assessment systems through an independent programme of commissioned empirical research, within which, we would argue, the place for student feedback and meaningful participation can be found. Furthermore, dialogue with students about educational assessment does not imply an open 'wish list' of reforms; dialogue is bidirectional (Hall, 2017), students are not the only stakeholders in high-stakes assessment, and their attainment demonstration is not its only purpose. What dialogue with students about their educational assessment does afford, however, is a means to promote transparency and build trust between stakeholders, including students and awarding bodies. For example, students' concerns about reports of unreliable marking and examination paper errors, and about being 'guinea pigs' in the inception of assessment reforms, could all be managed by planned processes of dialogue (cf. Barrance & Elwood, 2018). Interestingly, by 2016 at least eight States of the USA had removed strict time limits in State-wide Common Core assessments following high levels of parental boycotting, which can incur a reduction in federal funding for State education (Chapman, 2016; Harris, 2016; Wallace, 2016).
State administrators have explained the change in relation to parental concern about the stresses that the tests impose upon those children who are unable to complete them within the time limits, though commentators have also speculated that some parental boycotting of the tests might have been politically motivated in relation to the role of children's test scores in teacher evaluations (Chapman, 2016; Harris, 2016; Wallace, 2016). Two points are notable from these changes within the US Common Core assessment system. First, the changes have been met with resistance by some stakeholders, possibly reflecting a perception of a more favourable balance of interests within the status quo (cf. Hall, 2017). Second, the changes have occurred without any comprehensive feedback from, or direct involvement by, examination candidates themselves, though the 'adult' perspectives may in any case have gained leverage through 'political' rather than fully democratic means. Debating the inclusivity of current assessment regimes, Stobart (2008), Marshall (2017) and Benn (2017) have each highlighted the significance of 'political will' in the development of forms of assessment. Alternatively, Cumming (2008) has pointed out that equity in educational assessment is often advanced on the basis of national case law, which often reflects changes in social values. She noted that case law precedents may have relevance between national jurisdictions, but also that educational assessment challenges are often preventable. In particular, Cumming (2008) has highlighted accommodations to the standard form of assessments as providing a possible future focus for legal challenge. We propose here that the proactive recognition of children's participatory and other rights within educational assessment, with a focus upon their assessment needs, can provide a rational and democratic asset in system transformation.
Conclusion

The different purposes of high-stakes educational assessments are conceptualised and balanced within a socio-political context and the constraints of mass inscriptive examinations. It is widely acknowledged that, currently, the purposes of teacher and school accountability, and of student selection under standard conditions, within the constraints of logistical and financial feasibility, are dominant, with some negative consequences for the experiences of teachers and children. In this paper, we have argued that a rightfully increased focus upon student participation, and children's rights more generally, in educational assessment would give precedence to the purpose of high-stakes assessments as providing fair attainment demonstration opportunities for students and, within that, a view on the (almost novel) concept of students' 'assessment needs'. In light of this, we have proposed regular system-wide gathering of feedback from children on their high-stakes educational assessment experiences and needs, and the promotion of their participation in decision-making about the structures and processes of such assessments. Children's participation in decision-making about high-stakes educational assessment will likely challenge taken-for-granted assumptions and raise novel questions, which may require more extensive use of research and consultation to steer its future developments more democratically.

References

Adderley, R.J., Hope, M.A., Hughes, G.C., Jones, L., Messiou, K., & Shaw, P.A. (2015). Exploring inclusive practices in primary schools: Focussing on children's voices. European Journal of Special Needs Education, 30(1), 106-121.
American Psychiatric Association (APA) (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: APA.
Barrance, R., & Elwood, J. (2018). National assessment policy reform 14-16 and its consequences for young people: Student views and experiences of GCSE reform in Northern Ireland and Wales.
Assessment in Education: Principles, Policy & Practice. Published online at
Benn, M. (2017). We are going in the wrong direction and we don't know how to stop it. Keynote paper presented to the British Psychological Society (BPS) Division of Educational and Child Psychology, Harrogate, January 2017.
Britto, P.R., & Ulkuer, N. (2012). Child development in developing countries: Child rights and policy implications. Child Development, 83(1), 92-103. doi: 10.1111/j.1467-8624.2011.01672.x
Brookhart, S.M., & Bronowicz, D.L. (2003). 'I don't like writing. It makes my fingers hurt': Students talk about their classroom assessments. Assessment in Education: Principles, Policy & Practice, 10(2), 221-242.
Brooks, T.E., Case, B.J., & Young, M.J. (2003). Timed versus untimed testing conditions and student performance. San Antonio, US: Pearson.
Burton, D., Smith, M., & Woods, K. (2010). Empowering children through training as social scientists: An example of a direct contribution to the curriculum. Educational Psychology in Practice, 26(2), 91-104.
Chapman, B. (2016, 28th January). Time limits on tests nixed for New York school kids. New York Daily News. Retrieved from:
Children's Rights Alliance for England (2016). State of children's rights in England. Briefing 6: Education, leisure and cultural activities. London: Children's Rights Alliance for England.
Cumming, J.J. (2008). Legal and educational perspectives of equity in assessment. Assessment in Education: Principles, Policy & Practice, 15(2), 123-135.
Davis, R. (1993). When applicants rate the examinations: Feedback from 2000 people. In B. Nevo & S.R. Jager (Eds.), Educational and psychological testing: The test taker's outlook. Goettingen, Germany: Hogrefe and Huber Publishers.
Denscombe, M. (2000). Social conditions for stress: Young people's experience of doing GCSEs. British Educational Research Journal, 26(3), 359-374.
Department for Children, Schools and Families (DCSF) (2009). United Nations Convention on the Rights of the Child: Priorities for action. Retrieved from:
Department for Children, Schools and Families (DCSF) (2010). Improving the quality of statements of special educational needs: Good practice in writing statements. Retrieved from:
Department for Education (DfE) (2014a). Assessment curriculum and qualifications: Research priorities and questions. Nottingham: HMSO.
Department for Education (DfE) (2014b). Listening to and involving young people. Retrieved from:
Department for Education (DfE) (2017). Developing new GCSEs, AS and A levels for first teaching in 2017: Consultation outcome. Retrieved from:
Duncan, H., & Purcell, C. (2017). Equity or advantage? The effect of receiving access arrangements in university exams on Humanities students with specific learning difficulties (SpLD). Widening Participation and Lifelong Learning, 19(2), 6-26.
Eckstein, M.A., & Noah, H.J. (1993). Secondary school examinations: International perspectives on policy and practice. London: Yale University Press.
Elliott, S.N., & Roach, A.T. (2002). The impact of providing testing accommodations to students with disabilities. Madison, US: University of Wisconsin, Wisconsin Centre for Educational Research.
Elwood, J., & Lundy, L. (2010). Revisioning assessment through a children's rights approach: Implications for policy, process and practice. Research Papers in Education, 25(3), 335-353.
Fayette, R., & Bond, C. (2017). A systematic literature review of qualitative research methods for eliciting the views of young people with ASD about their educational experiences. European Journal of Special Needs Education (online first). doi: 10.1080/08856257.2017.1314111
Flitcroft, D., Woods, K., & Putwain, D.W. (2017). Developing practice in preparing students for high-stakes examinations in English and Mathematics. Educational and Child Psychology, 34(3), 7-19.
Foulkes, G. (2003). SENDA and equal access to assessment and qualifications. Access to assessment and qualifications conference, Cardiff, 21st March 2003.
Francis, B., Mills, M., & Lupton, R. (2017). Towards social justice in education: Contradictions and dilemmas. Journal of Education Policy, 32(4), 414-431.
Groundwater-Smith, S., & Mockler, N. (2016). From data source to co-researchers? Tracing the shift from 'student voice' to student-teacher partnerships in educational action research. Educational Action Research, 24, 159-176.
Hall, V. (2017). A tale of two narratives: Student voice – what lies before us? Oxford Review of Education, 43(2), 180-193.
Harding, E., & Atkinson, C. (2009). How EPs record the voice of the child. Educational Psychology in Practice, 25(2), 125-137.
Harris, E.A. (2016, 27th January). New York will shed clock for some statewide tests. The New York Times. Retrieved from:
Hedderley, R. (1992). Psychologists' assessments of specific learning difficulties (dyslexia) and examination boards: Policies and practices. Educational Psychology in Practice, 8(1), 32-42.
Hegarty, S. (1982). Meeting special educational needs in the ordinary school. Educational Research, 24(3), 174-181.
Herbert, A. (Ed.) (1889). The sacrifice of education to examination: Letters from all sorts and conditions of men. London: Williams and Norgate.
Higher Education Statistics Agency (HESA) (2017). First degree qualifiers by mode of qualification obtained, domicile, sex and class of first degree 2014/15 (Table L). Retrieved from: hesa.ac.uk
HM Government (2002). Education Act. Retrieved from:
HM Government (2015). Special educational needs and disability code of practice: 0-25 years. Retrieved from:
International Examination Officers' Association (IEOA) (2018). IEOA report on survey of 2017 summer exams period (26.1.18). Reading, England: IEOA.
Isaacs, T. (2010). Educational assessment in England. Assessment in Education: Principles, Policy & Practice, 17(3), 315-334.
James, K., & Hannah, E.F.S. (2017). The validity of the spelling and grammar waiver as a reasonable accommodation in leaving certificate examinations in Ireland. International Journal of School and Educational Psychology. doi: 10.1080/21683603.2017.1302848
Joint Council for the GCSE (JCGCSE) (1995). Special arrangements and special consideration for candidates with special assessment needs: Regulations and guidance. Bristol: Joint Council for the GCSE.
Joint Council for General Qualifications (JCQ) (1998). Examinations and assessment for GCSE and GCE: Regulations and guidance relating to candidates with special requirements. Cambridge: The Joint Council for General Qualifications.
Joint Council for Qualifications (JCQ) (2017). Adjustments for candidates with disabilities and learning difficulties: Access arrangements and reasonable adjustments with effect from 1 September 2017 to 31 August 2018. London: JCQ.
Lee, Y., & Svevo-Cianci, K.A. (2009). Twenty years of the convention on the rights of the child: Achievements and challenges for child protection. Child Abuse and Neglect, 33, 767-770.
Legewie, J., & DiPrete, T.A. (2012). School context and the gender gap in educational achievement. American Sociological Review, 77(3), 463-485.
Marshall, B. (2017). The politics of testing. English in Education, 51(1), 27-43.
Mathews, J.C. (1985). Examinations: A commentary. London: George Allen & Unwin.
McLeskey, J., & Waldron, N.L. (2007). Making differences ordinary in inclusive classrooms. Intervention in School and Clinic, 42(3), 162-168.
Miller, A., & Frederickson, N. (2006). Struggles and successes for educational psychologists as scientist practitioners. In D. Lane & S.E. Corrie (Eds.), The modern scientist-practitioner: A guide to practice in psychology. London: Routledge.
Ofqual (2017a). Ofqual: About us. Retrieved from:
Ofqual (2017b). Access Consultation Forum. Retrieved from:
Ofqual (2017c). New GCSE grades 9-1 are here. Retrieved from:
Peacey, N., & Peacey, L. (2007). Minimising bias in assessment for students with special educational needs and disabled students: Reasonable adjustments in written national curriculum tests at key stage 2 and key stage 3. London: Qualification and Curriculum Authority (QCA).
Philips, S.E. (1994). High stakes testing accommodations: Validity vs. disabled rights. Applied Measurement in Education, 7(2), 93-120.
Restori, A.F., Katz, G.S., & Lee, H.B. (2009). A critique of the IQ/achievement discrepancy model for identifying specific learning disabilities. Europe's Journal of Psychology, 4, 128-145.
Roach, J. (1971). Public examinations in England 1850-1900. Cambridge: Cambridge University Press.
Rodway, C., Tham, S.G., Ibrahim, S., Turnbull, P., Windfuhr, K., Shaw, J., Kapur, N., & Appleby, L. (2016). Suicide in children and young people in England: A consecutive case series. The Lancet Psychiatry, 3(8), 751-759.
Scott, S., Webber, C.F., Lupart, J.L., Aitken, N., & Scott, D.E. (2014). Fair and equitable assessment practices for all students. Assessment in Education: Principles, Policy & Practice, 21(1), 52-70.
Sireci, S.G., Scarpati, S., & Li, S. (2005). Test accommodations for students with disabilities: An analysis of the interaction hypothesis. Review of Educational Research, 75(4), 457-490.
Stobart, G. (2005). Fairness in multi-cultural assessment systems. Assessment in Education: Principles, Policy & Practice, 12(3), 275-287.
Stobart, G. (2008). Testing times: The uses and abuses of assessment. Abingdon, Oxon: Routledge.
Tattersall, K. (1994). The role and functions of public examinations. Assessment in Education: Principles, Policy & Practice, 1(3), 293-304.
UNICEF-UK (2017). Rights Respecting Schools Award. Retrieved from:
United Kingdom (UK) Children's Commissioners (2015). Report of the UK children's commissioners: UN Committee on the Rights of the Child - examination of the fifth periodic report of the United Kingdom of Great Britain and Northern Ireland. Retrieved from:
United Nations (UN) (1989). Convention on the rights of the child. Geneva: United Nations. Retrieved from:
United Nations (UN) (2001). General comment No. 1: Article 29(1) The aims of education. Geneva: United Nations.
United Nations (UN) (2007). Convention on the rights of persons with disabilities. Retrieved from:
Vellutino, F.R., Scanlon, D.M., & Lyon, G.R. (2000). Differentiating between difficult-to-remediate and readily remediated poor readers: More evidence against the IQ-achievement discrepancy definition for reading disability. Journal of Learning Disabilities, 33, 223-238.
Wallace, K. (2016, 4th April). Testing time at school. Is there a better way? CNN. Retrieved from:
Waterfield, J., & West, B. (2010). Inclusive assessment: Diversity and inclusion – the assessment challenge. Plymouth, England: University of Plymouth.
Waters, M. (2015). The Gove legacy: Where policy meets the pupil. In M. Finn (Ed.), The Gove legacy: Education in Britain after the coalition (pp. 63-74). London: Palgrave Pivot.
Wheldall, K. (1995). Making inclusive education ordinary. British Journal of Special Education, 22(3), 100-104.
White, J., & Rae, T. (2016). Person-centred reviews and transition: An exploration of the views of students and their parents/carers. Educational Psychology in Practice, 32(1), 38-53. doi: 10.1080/02667363.2015.1094652
Williams, J., Ryan, J., & Morgan, S. (2014). Lesson study in a performative culture. In Teacher learning in the workplace: Widening perspectives on practice and policy (pp. 141-157). Dordrecht: Springer.
Wolf, A. (1995). Competence based assessment. Buckingham: Open University Press.
Wolf, A. (2009). The role of the state in educational assessment. Address to the Cambridge Assessment Annual Conference, 19th October, Cambridge, UK.
Woods, K. (1998). School processes in selection for GCSE special examination arrangements. Educational Psychology in Practice, 14(3), 59-67.
Woods, K. (2000). Assessment needs in GCSE examinations: Some student perspectives. Educational Psychology in Practice, 16(2), 131-140.
Woods, K. (2003). Equality of opportunity within the examinations of the General Certificate of Secondary Education. Psychology of Education Review, 27(2), 3-16.
Woods, K. (2004). Deciding to provide 'a reader' in examinations for the General Certificate in Secondary Education (GCSE): Considerations about validity and 'inclusion'. British Journal of Special Education, 31(3), 123-128.
Woods, K. (2007). Access to the General Certificate of Secondary Education (GCSE) examinations for students with special educational needs: What is best practice? British Journal of Special Education, 34(2), 89-95.
Woods, K. (2010). 'Controlled assessment' for students with special educational needs (SEN) at GCSE: Assessment development matters (very much). Assessment and Development Matters, 2(2), 25-27.
Woods, K., Parkinson, G., & Lewis, S. (2010). Investigating access to educational assessment for students with disabilities. School Psychology International, 31(1), 21-41.