
Chapter 2.5

Changing assessment practices and the role of IT

Ola Erstad

University of Oslo, Oslo, Norway

Abstract

It is often argued that changes in curriculum have implications for assessment and vice versa. IT can be used to enhance existing (standardized) testing practices by making tests more adaptive to diverse needs. In addition, the use of information technology in the curriculum often aims to contribute to the mastery of complex cognitive skills, which cannot be determined by means of simple standardized tests alone. IT offers a range of new possibilities for assessment, such as the use of multimodal representations, digital portfolios and simulations. Moreover, IT is also a new knowledge domain in itself that needs to be assessed. This chapter will synthesize research on the potential of IT to change assessment practices as well as examine the consequences of the increasing use of IT in the curriculum for the assessment of student learning.

Introduction

Assessment lies at the heart of education (Little & Wolf, 1996; Ridgway, McCusker & Pead, 2004). Assessment practices both reflect and influence the way we conceive of and organise learning and teaching. Such practices have evolved into an integrated mechanism that largely determines how the curriculum and education work (van den Akker, 2003). Using the metaphor of the curriculum spiderweb (see also Voogt, 2008 in this handbook), van den Akker argues for coherence and balance between curriculum components, such as content, goals, learning activities and assessment. It is therefore highly important to examine assessment and how it relates to changes in education.

It is common to distinguish between summative and formative assessment, also described as assessment of learning and assessment for learning. The former occurs at the end of a learning process and evaluates what the student has learned and can perform on certain test procedures, while the latter takes place during a learning process to support students' progress. The role of formative and summative assessment, and the differences between the two, have been the subject of much debate in recent years; this debate has also surfaced in discussions of IT in education, concerning the ways new technologies might support assessment practices (Ridgway, McCusker & Pead, 2004). Major assessment strategies include standardized testing, adaptive testing, and peer- and self-assessment. Multiple-choice tests, classroom assessment and portfolios are among the most frequently used assessment formats today. In this chapter, assessment is used as a general term incorporating a wide range of methods for evaluating student performance and attainment.

The increased implementation of new digital technologies in school settings not only makes us view traditional ways of assessment in new ways but also raises new issues of assessment. This chapter focuses on the impact of increased educational use of IT on assessment by synthesizing research on assessment and IT. To this end, a search of online databases and key journals from the mid-1990s to the present was conducted. Books on assessment in general and on assessment in relation to the use of new technologies have also been consulted.

This chapter reports on research on assessment and is structured in two sections. The first section highlights assessment as part of educational change and links it to different perspectives on learning. The second section, the main section of this chapter, reviews relevant literature on assessment and IT and is organised around 'what' is assessed and 'how', in relation to the role of IT. The purpose of this chapter is to question to what extent, and in which ways, the use of new technologies represents changing assessment practices in school-based settings.

Teaching, Learning and Assessment

The dominant assessment system over the last century, with its emphasis on standardized tests, reflects the development of mass education. The factory metaphor has been used (Olson, 2003) to show how students were required to master, largely through memorization, specific contents defined by textbooks and teachers. Examinations were developed to obtain feedback about students' performance so as to stratify and/or certify them accordingly. Even though there have been changes in the way learning takes place in schools, our assessment system has not changed accordingly. A question often raised in recent years is whether the introduction of IT and the challenges of the information society will change existing assessment practices.

Assessment indicates what is rewarded in a culture, and thereby how learning and knowledge are defined. There has been an increased understanding of the relationship between assessment and learning (Black & Broadfoot, 2004; Gipps, 2002), which is defined differently in different schools of learning theory.

In the behaviourist tradition, where the learner is seen as a passive receiver of knowledge delivered within specific subject areas, assessment is directed towards checking whether students can perform according to certain predefined measurements of appropriate responses. Examples are multiple choice and standardized achievement tests which focus on facts and predefined fragments of content. Technologies, like ‘teaching machines’ in the 1960s and the use of CD-ROMs in the 1990s, have been seen as part of assessment in this perspective, with specified procedures and feedback possibilities on responses made by students.

In the constructivist tradition of learning, where the learner is seen as a more cognitively active participant in the learning process than in the former tradition, assessment focuses on more complex processes of learning by the individual. These processes require diverse approaches to the assessment of learning, such as assessment of essays or projects, and performance assessment. Performance assessment, also known as alternative or authentic assessment, is a form of testing that requires students to perform a task rather than select an answer from a ready-made list. For example, a student may be asked to explain historical events, generate scientific hypotheses, solve math problems, converse in a foreign language, or conduct research on an assigned topic. In the past decade several projects have attempted to develop technologies as tools for assessment within a constructivist tradition, for example tracking students' reasoning as they use simulations in science education or play educational games (Kafai & Resnick, 1996). This is partly because technological developments have made it possible to build interactive tools for assessing complex cognitive skills, tools that can include different modalities of expression, combining written text, pictures, video, simulations and so forth.

The sociocultural tradition of learning (Wertsch, del Rio & Alvarez, 1995), with its emphasis on learning as social practice, has become increasingly influential in the last fifteen years, although probably more in theory than in school practice. The major difference from the constructivist tradition is the emphasis on collaboration and communication between people (inter-psychological) rather than on individual cognitive processes (intra-psychological) per se. In this respect it shares common ground with socio-cognitive perspectives that take interaction as the unit of analysis. Describing assessment building on a sociocultural tradition, Gipps (2002) states that "the requirements are that process should be assessed as well as product, that the conception be dynamic rather than static, and that attention must be paid to the social and cultural context of both learning and assessment" (p. 74). Compared with the two other traditions mentioned above, this perspective links learning and assessment more closely to the surrounding world, and thus to how our culture is changing. In this way it also relates to what are called authentic assessment and performance assessment, as mentioned above. These indicate forms of assessment where students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills. In school settings this also implies that assessment methods focus more on interpersonal than intrapersonal ways of learning, and on how teaching challenges students' learning processes in different ways. Regarding new digital technologies, this perspective sees tools and technologies as embedded in the ways we learn, for example through the use of digital portfolios. Given the competencies needed for the information society (Anderson, 2008), it is clear that the broader and more complex approaches to assessment represented by this perspective are becoming more relevant.

Assessment Practices, IT and Change

The relationship between IT and educational change has proved more complex than initially expected (Cuban, 2001). Most of the research in this field has been on curriculum changes, learning environments, students' learning and the organization of schooling as a consequence of the implementation of IT. To a lesser extent has research focused on assessment and IT. Although assessment in education is a substantial research field, it is only during the last decade that IT-based assessment has been growing as a research field (McFarlane, 2003), partly due to developments in school IT infrastructure and increased access to hardware, software and broadband Internet connections for students and teachers.

How to introduce substantial educational change and improve the quality of education has long been a concern of educational planners; in recent years their focus has shifted from inputs to outcomes in terms of learning achievement (Kellaghan & Greaney, 2001).

The introduction of IT in our educational system has to relate to overall issues of educational change and be seen as embedded in the context of curriculum issues such as goals, content and methods of learning (van den Akker, 2003). New digital technologies in schools can be seen partly as a way of improving students' learning and partly as a catalyst for systemic change in schools (Erstad, 2004).

Existing research has examined both the impact of IT on traditional assessment methods and how IT raises new issues of assessment. As part of the Second Information Technology in Education Study (SITES; see also Nachmias, Mioduser & Forkosh-Baruch, 2008; and Voogt, 2008), innovative IT-supported pedagogical practices were analyzed. In several countries some of the pedagogical practices involved showed a shift towards more formative assessment when IT was introduced (Voogt & Pelgrum, 2003). In most practices, however, old and new assessment methods coexisted (ibid.), because schools had to relate to national standards and systems over which they have no control, while at the same time developing alternative assessment methods for their own purposes.

Different conceptions of IT and assessment

Overview

In the following three subsections, relevant research on how IT might change assessment practices and on how different aspects of student learning can be assessed is presented. The three subsections are defined by the way IT is conceived in the assessment of student learning. See Table 2.x for an overview of this section.

Table 2.x

Overview of the section

|What |How (role of IT) |
|Traditional goals and objectives |- IT for processing large numbers of tests |
| |- IT for new approaches, e.g. adaptive testing |
|New goals: higher order thinking skills, 'lifelong learning skills' |- digital portfolios |
| |- technological tools, for example in science education (.uk/projects) |
| |- multimodal products made by students, for example as part of 'digital storytelling' () |
|IT literacy skills |- performance assessment tasks measuring competencies in using and reasoning with IT tools |
| |- using an IT assessment framework like "UNESCO's ICT Competency Standards for Teachers" (ICT-CST) |

Traditional goals and objectives

At a time when concerns are being raised about teachers' workload and the costs of education, methods that reduce the weight of assessment demands in the classroom are to be welcomed. In addition, the question can be raised as to what extent and in which ways IT could contribute to improving assessment practices and make assessment more adaptive to various needs. Item banking is one example of this potential: it provides a way of organizing and calibrating test items and categories across different domains of computer use in schools (Rudner, 1998; Van der Linden & Glas, 2000). Interest in how IT can improve assessment has also been spurred by developments in online learning, where courses and assessments are delivered online.

Many countries and states have adopted a 'dual' program of both computer-based and paper-and-pencil tests. Raikes and Harding (2003) mention examples of such dual programs from some US states where students switch between answering computer-based and paper-and-pencil tests. They argue that the need to be fair to students regardless of their schools' technological capabilities, and the requirement to avoid sudden discontinuities so that standards can be compared over time, may require a transitional period during which computer and paper versions of conventional external examinations run in parallel. They sketch some of the issues (costs, equivalence of test forms, security, diversity of school cultures and environments, and technical reliability) that must be solved before conventional examinations can be computerised.

Based on their own research on the state of Kansas' large-scale assessment program, limited to middle-level mathematics, Poggio, Glasnapp, Yang, and Poggio (2005) argue that change can be enacted in schools that are ready to implement computer-based testing without retaining the paper-and-pencil modality. In a meta-evaluation of initiatives in different US states, Bennett (2002) shows that the majority of these states have begun the transition from paper-and-pencil tests to computer-based testing with simple assessment tasks. He concludes, "If all we do is put multiple-choice tests on computer, we will not have done enough to align assessment with how technology is coming to be used for classroom instruction" (pp. 14-15).

Recent developments in assessment practices can be seen as a more direct response to the potential of IT for assessment. An example is the effort to use computers in standardized national exams in the Netherlands, which goes beyond simple multiple-choice tests. It has so far been tried out in science education, where 40% of the physics assignments in the exams have to be solved with computer tools such as modeling, data video, data processing and automated control techniques (Boeijen & Uijlings, 2004).

A major concern in much of the research on IT and assessment has been the transition from paper-and-pencil to computer-based assessment. Several studies comparing specific paper-and-pencil tests with computer-based tests have described the latter as highly problematic, especially concerning test validity (Russell, Goldberg & O'Connor, 2003). Findings from these studies, however, show little difference in student performance overall (Poggio et al., 2005), even though there are indications of enough differences at the individual question level to warrant further investigation (Johnson & Green, 2004). Students differ in prior computer experience, and items from different content areas can be presented and answered on the computer in many different ways, with different impacts on the validity of test scores (Russell et al., 2003). While some studies provide evidence of score equivalence across the two modes, computerized assessments tend to be more difficult than paper-and-pencil versions of the same test. Pommerich (2004) concludes that the more difficult it is to present a paper-and-pencil test on a computer, the greater the likelihood that mode effects will occur. Previous literature (Russell, 1999; Pommerich, 2004) seems to indicate that mode differences typically result from the extent to which the presentation of the test and the process of taking the test differ across modes, rather than from differences in content. This may imply a need to minimize differences between modes. A major concern is whether computer-based testing meets the needs of all students equally, and whether some are advantaged while others are disadvantaged by the methodology. In short, there has been an increasing number of initiatives studying how computer-based assessment compares to, and ultimately might replace, paper-based assessment. However, as reported, several concerns about test validity and mode effects restrict such transitions, resulting in the parallel use of both procedures. What is needed is not fewer quality criteria for alternative ways of assessing student performance, such as portfolios, but new ways of making student attainment visible in a valid and reliable way (Gipps & Stobart, 2003).

New goals

New technologies have created new interest in what some describe as 'assessing the inaccessible' (Nunes, Nunes & Davis, 2003), that is, metacognition, learning strategies, attitudes and lifelong learning skills (Anderson, 2008; Deakin Crick, Broadfoot & Claxton, 2004). The introduction of IT in education has further developed an interest in formative assessment as a way to better monitor and support student progress. The handling of files and the possibility of using different modes of expression (multimodality) support an increased interest in methods like project work (Kozma, 2003), which in turn indicates an increased focus on formative assessment.

The increased use of digital portfolios in many countries (McFarlane, 2003) is an example of how formative assessment is gaining importance. Portfolio assessment itself is not new and was used for some time without IT (see, e.g., the 1998 special issue of Assessment in Education on 'Portfolios and Records of Achievement'). In recent years, however, digital tools seem to have developed this type of assessment further by adding new qualitative dimensions, such as the possibility of sending files electronically, hypertexts with links to other documents, and multimodality combining written text, animations, simulations, moving images and so forth. The focus in the design of digital portfolios is on developing structures for organizing and saving documents in digital form. As a tool for formative assessment, and compared with paper-based portfolios, digital portfolios make it easier for teachers to follow students' progress, comment on their assignments and keep track of documents. In addition, digital portfolios are used for summative assessment, documenting the products students have developed and reporting their progress. This offers greater choice and variety in the reporting and presenting of student learning (Woodward & Nanlohy, 2004).

An important point is also the way digital tools can support collaborative work. Students can send documents and files to each other and in this way work on tasks together. Within the field of computer-supported collaborative learning (CSCL), there are many examples of how computer-based learning environments for collaboration can work to stimulate student learning and the process of inquiry (Wasson, Ludvigsen & Hoppe, 2003). Collaborative problem-solving skills are considered necessary for success in today’s world of work and school. Online collaborative problem solving tasks offer new measurement opportunities when information on what individuals and teams are doing is synthesised along the cognitive dimension. This raises issues both on interface design features that can support online measurement and how to evaluate collaborative problem-solving processes in an online context (O’Neil, Chuang & Chung, 2003).

There are also examples of web-based peer assessment strategies (Lee, Chan & van Aalst, 2006). Peer assessment has been defined by some as an innovative assessment method since students themselves are put in the position of evaluators as well as learners (Lin, Liu & Yuan, 2001). It has been used with success in different fields such as writing, business, science, engineering and medicine.

A truly innovative example of IT and assessment that takes the affordances of new technologies into consideration is the eVIVA project developed at Ultralab in the United Kingdom. The intention was to create a more flexible form of assessment that took advantage of the possibilities offered by new technologies, such as mobile phones and web-based formative assessment tools. Using such tools, Ultralab promoted self- and peer-assessment as well as dialogue between teachers and students.

In this project the students had access to the eVIVA website, where they could set up an individual profile of system preferences and record an introductory sound file on their mobile or landline phone. Students could then carry out a simple self-assessment activity by selecting a series of simple 'I can' statements designed to start them thinking about what they are able to do in IT. The website contained a question bank from which the pupils were asked to select four or five questions for their telephone viva, an assessment carried out towards the end of their course but at a time of their own choosing. Students were guided in their choice by the system and their teacher. They had their own e-portfolio web space in which they were asked to record significant 'milestone' moments of learning and to upload supporting files as evidence. Each milestone was then annotated or described by the pupil to explain what they had learned or why they were proud of a particular piece of work.

Once milestones had been published, teachers and pupils could use the annotation and messaging features to engage in dialogue with each other about the learning. Students were encouraged to add comments to their own and each other's work, and the annotations could be sent via phone using SMS or voice messages. When ready, students would dial into eVIVA by mobile or landline phone and record their answers to their selected questions. This gave students the opportunity to explain what they had done and reflect further on their work. Their answers were recorded and sent to the website as separate sound files. The teacher made a holistic assessment of the pupil's IT capabilities based on the milestones and work submitted in the e-portfolio, the student's reflections or annotations, the recorded eVIVA answers, any written answers attached to the questions, and classroom observations (see Walton, 2005).

The research findings from this project showed that both teachers and students experienced this as a new form of assessment procedure that stimulated the students' learning processes. As mentioned earlier, one important aspect of how IT brings something new into the field of assessment is multimodality. Jewitt (2003) argues that, unlike other media, computers bring different modes together. Computer applications and educational software introduce new kinds of texts into the classroom, and these demand different practices of students (McFarlane, 2001). These developments pose new challenges for assessment, which has traditionally been mainly written. For example, in the assessment of writing, how do we evaluate the coherence of a hypertextual essay or the clarity of a visual argument?

One area of research with great implications for how IT challenges assessment concerns higher-order thinking skills. Ridgway and McCusker (2003) show how computers can make a unique contribution to assessment in the sense that they can present new sorts of tasks, whereby dynamic displays show changes in several variables over time. The authors cite examples from the World Class Arena () to demonstrate how these tasks and tools support problem solving for different age groups. They show how computers can facilitate the creation of microworlds for students to explore in order to discover hidden rules or relationships, like virtual laboratories for doing experiments or games to explore problem solving strategies. Computers allow students to work with complex data sets of a sort that would be very difficult to work with on paper. Tools like computer-based simulations can in this way give a more nuanced understanding of what students know and can do than traditional testing methods (Bennett, Jenkins, Persky & Weiss, 2003).

Findings such as those reported by Ridgway and McCusker (2003) are positive regarding the way students relate to computer-based tasks and their improved performance. However, they also find that students have problems adjusting their strategies and skills: the results show that students are still attuned to the old test situation, oriented towards correct answers rather than explanations and reasoning.

In a systematic review of the impact of the use of IT on the assessment of students' creative and critical thinking skills (Harlen & Deakin Crick, 2003), it is argued that the neglect of creative and critical thinking in assessment methods is a cause for concern, given the importance of these skills in preparing for life in a rapidly changing society and for lifelong learning. The review shows a lack of substantial research on these issues and argues for more strategic research.

The use of new digital media in education has been linked to the assessment of creative thinking as distinct from analytic thinking (Ridgway, McCusker & Pead, 2004). Digital cameras and various software tools make it easier for students to show their work and reflect on it. A number of subjects in the school curriculum ask students to undertake various kinds of practical and arts-based productions (Sefton-Green & Sinker, 2000). These might include paintings in Art, creative writing in English, performance in Drama, recording in Music, videos in Media Studies, and multimedia 'digital creations' in different subjects. So far there are not many examples of how IT influences assessment in this way (Sefton-Green & Sinker, 2000). However, these aspects of students' knowledge and competencies, as well as how IT is an integrated part of student learning and creative practices, are important dimensions to keep in mind in conceptualising IT and assessment.

In this section we have seen how IT presents new possibilities for developing assessment practices, especially formative assessment, and how the complexity of these tools can be used to assess higher-order thinking skills, such as problem solving, that are difficult to assess by paper and pencil. As McFarlane (2001) notes, "It seems that use of ICT can impact favourably on a range of attributes considered desirable in an effective learner: problem-solving capability; critical thinking skill; information-handling ability" (p. 230). Such competencies can be said to be more relevant to the needs of the information society and the emphasis on lifelong learning than those that traditional tests and paper-based assessments tend to measure.

IT literacy skills

This section deals more directly with IT in schools as an area of competence in itself. IT literacy is analogous to reading literacy in that it is both an end and a means. At school young people learn to read and read to learn. They also learn to use IT and use IT to learn.

The ImpaCT2 concept-mapping data from the UK strongly suggest that there is a mismatch between conventional national tests, which focus on pre-specified knowledge and concepts, and the wider range of knowledge that students are acquiring by carrying out new kinds of activities with IT in the home (Somekh & Mavers, 2003). Using concept maps and children's drawings of computers in their everyday environments, the research gives strong indications of children's rich conceptualisation of technology and its role in their world, for purposes of communication, entertainment or accessing information. It shows that most children acquire practical skills in using computers that are not part of the assessment processes they meet in schools. Some research has shown that students who are active computer users consistently under-perform on paper-based tests (Russell & Haney, 2000).

EU countries, at both regional and national levels, and other countries around the world are in the process of developing frameworks and indicators to better grasp the impact of technology in education and what we should be looking for when assessing students' learning with IT (see, for example, Erstad (2006) for Norway and Ainley, Fraillon, Freeman and Mendelovits (2006) for Australia). According to the Summit of 21st Century Literacy in Berlin in 2002 (Clift, 2002), new approaches stress abilities to use information and knowledge that extend beyond the traditional base of reading, writing and math, what has been termed 'digital literacy' or 'IT literacy'.

In January 2001, the Educational Testing Service (ETS) in the US assembled a panel for the purpose of developing a workable framework for IT Literacy. The outcome was the report Digital Transformation. A Framework for ICT Literacy (International ICT Literacy Panel, 2002). Based on this framework, one can define IT literacy as “the ability of individuals to use ICT appropriately to access, manage and evaluate information, develop new understandings, and communicate with others in order to participate effectively in society” (Ainley, et al., 2006).

In line with this perspective, some agencies have developed performance assessment tasks of ‘IT Literacy’, indicating that IT is changing our view on what is being assessed and how tasks are developed using different digital tools. One example is the tasks developed by the International Society for Technology in Education (ISTE) called ‘National Educational Technology Standards’ (NETS, ) which are designed to assess how skillful students, teachers and administrators are in using IT.

In Australia, a tool has been developed with a sample of students from Grade 6 and Grade 10 to validate and refine a progress map that identifies a progression of IT literacy. The IT Literacy construct is described using three ‘strands’: working with information, creating and sharing information and using IT responsibly. Having students carry out authentic tasks in authentic contexts is seen as fundamental to the design of the Australian National IT Literacy Assessment Instrument (Ainley, et al., 2006). The instrument evaluates six key processes: accessing information (identifying information requirements and knowing how to find and retrieve information); managing information (organising and storing information for retrieval and reuse); evaluating (reflecting on the processes used to design and construct IT solutions and judgements regarding the integrity, relevance and usefulness of information); developing new understandings (creating information and knowledge by synthesising, adapting, applying, designing, inventing or authoring); communicating (exchanging information by sharing knowledge and creating information products to suit the audience, the context and the medium); and using IT appropriately (critical, reflective and strategic IT decisions and considering social, legal and ethical issues) (ibid.). Preliminary results of the use of the instrument show highly reliable estimates of IT ability.

There are also cases where an IT assessment framework is linked to specific frameworks for subject domains in schools. Reporting on the initial outline of a US project aiming at designing a ‘Coordinated ICT Assessment Framework’, Quellmalz and Kozma (2003) have developed a strategy to study IT tools and skills as an integrated part of science and mathematics. The objective is to design innovative IT performance assessments that could gather evidence of use of IT strategies in science and mathematics.

The above-mentioned projects and perspectives represent attempts, still in the making, at linking IT and assessment. There are not many substantial research results to build on yet, but this will probably be a field of research that grows in the years to come, in relation to the development of 21st century skills.

Conclusion: Are We Changing Practices?

The aim of this chapter has been to look closely at assessment and the development of new information technologies, and the extent to which we can see examples of changing assessment practices as a consequence of these developments. The use of IT as part of assessment is still marginal in most countries, which is also reflected in the lack of research in this field.

The influence of IT on students’ attainment and learning has also been linked closely to studies of assessment (McFarlane, 2003), even though studying this link has not been a major objective in this chapter. It seems that asking 'What is the impact of IT on attainment?' is the wrong question, both as a basis for research and for justification of policy. Rather, what is required is to analyse the changes in ‘what’ has to be assessed and discuss how IT can support all relevant types of assessments.

In this chapter the presentation of research on assessment practices and IT has been structured into three areas, defined by the way IT is conceived as part of the assessment of student learning. The first section shows that many initiatives involving IT and assessment have not been about changing assessment practices, but about further developing traditional, mostly summative, ways of assessment using the same set of criteria for what is being assessed. Studies comparing paper-and-pencil testing with computer-based testing are inconclusive. At the same time, there are initiatives aiming to replace paper-based assessment, made possible by increased access to computers in schools, better security, and developments in test procedures adjusted to computer-based assessment. Moreover, there are also some examples explicitly showing developments in traditional assessment methods due to the use of IT, such as scoring essays or using simulations.

The second section refers to research on the use of IT in assessing learning processes that would otherwise be difficult to assess using traditional methods. These studies show that IT can be used not only to support more formative ways of assessment, but also to assess higher-order thinking skills, such as problem solving, and lifelong learning skills that are difficult to assess by paper and pencil.

The third section deals with research in which assessment is linked to the introduction of IT itself and where digital literacy is seen as a knowledge domain. This is a new area of research with growing attention in many countries. ‘IT Literacy’ is being introduced into curricula in some countries, which calls for performance assessments in this area. As shown above, there have been a few research initiatives in this respect, exploring how IT is changing assessment practices.

These developments have implications for three different areas:

- Policy and curriculum development: In curriculum development, policy-makers and experts need to take into consideration not only the impact of IT on teaching and learning but, just as importantly, the influence of IT on assessment practices. So far this has been a neglected area. Unless formal policy documents and curricula address changed assessment practices using IT, the ways in which IT is used in schools will remain limited.

- Research: There is a need to develop studies with the affordances of IT more in focus. Most research so far has been directed towards the transition from paper-and-pencil to computer-based assessment. These studies do not really document changes in assessment practices. There is, however, some research available that represents a platform to build on. This implies that we need to look more deeply into different aspects of IT applications and the new possibilities they might offer, and to focus more on the ways these tools provide access to students' higher-order thinking skills and their 'digital literacy'.

- Teaching: Assessment practices have a direct influence on teaching in schools. There is a need to be clearer about the link between student learning, teaching practices and assessment. Changes in assessment practices have to be seen in connection with developments in the usage of IT, developments of different methods like project-based learning, teacher competences and learning communities.

Even though there seems to be teacher support for student-oriented assessment, the dominant test culture presses for other priorities (Black & Broadfoot, 2004). The influence of IT on assessment practices might represent a more fundamental break with traditional school practices and student learning, if we manage to grasp the full potential of using IT to enhance student learning and develop 21st century competencies.

Acknowledgement

I would like to thank Fengshu Liu (University of Oslo) for assisting in the review process.

References

Ainley, J., Fraillon, J., Freeman, C., & Mendelovits, J. (2006, April). Assessing information and communication technology literacy in schools. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.

Anderson, R. (2008). Implications of the information and knowledge society for education. In J. Voogt & G. Knezek (Eds.), International handbook of information technology in primary and secondary education (pp. xxx-xxx). New York: Springer.

Bennett, R.E. (2002, June). Inexorable and inevitable: The continuing story of technology and assessment. Journal of Technology, Learning, and Assessment, 1(1). Retrieved June 7 2007 at

Bennett, R. E., Jenkins, F., Persky, H., & Weiss, A. (2003). Assessing complex problem solving performances. Assessment in Education: Principles, Policy & Practice, 10, 347-360.

Black, P., & Broadfoot, P. (2004). Redefining assessment? The first ten years of Assessment in Education. Assessment in Education: Principles, Policy & Practice, 11, 7-26.

Boeijen, G., & Uijlings, P. (2004, July). Exams of tomorrow: Use of computers in Dutch national science exams. Paper presented at the GIREP Conference Teaching and Learning Physics in New Contexts, Ostrava, Czech Republic.

Clift, S. (2002). 21st Literacy Summit White Paper. Retrieved June 7 2007 at do-wire@tc.umn.edu/msg00434.html

Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.

Deakin Crick, R. D., Broadfoot, P., & Claxton, G. (2004). Developing an effective lifelong learning inventory: The ELLI project. Assessment in Education: Principles, Policy & Practice, 11, 247-318.

Erstad, O. (2004). PILOTer for skoleutvikling [PILOTs for school development: Final and summary report of the PILOT project, 1999-2003] (Report No. 28). Oslo: ITU/Unipub, University of Oslo.

Erstad, O. (2006). A new direction? Digital literacy, student participation and curriculum reform in Norway. Education and Information Technologies, 11, 415-429.

Gipps, C. V. (2002). Sociocultural perspectives on assessment. In G. Wells & G. Claxton (Eds.), Learning for life in the 21st century (pp. 73-83). Oxford: Blackwell Publishing.

Gipps, C., & Stobart, G. (2003). Alternative assessment. In T. Kellaghan & D. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 549-576). Dordrecht, the Netherlands: Kluwer Academic Publishers.

Harlen, W., & Deakin Crick, R. (2003). Testing and motivation for learning. Assessment in Education: Principles, Policy & Practice, 10, 169-208.

International ICT Literacy Panel (2002). Digital transformation. A framework for ICT literacy. Princeton, NJ: Educational Testing Service.

Jewitt, C. (2003). Re-thinking assessment: Multimodality, literacy and computer-mediated learning. Assessment in Education: Principles, Policy & Practice, 10, 83-102.

Johnson, M., & Green, S. (2004, September). On-line assessment: The impact of mode on student performance. Paper presented at the British Educational Research Association Annual Conference, Manchester, England.

Kafai, Y., & Resnick, M. (1996). Constructionism in practice: Designing, thinking, and learning in a digital world. Mahwah, NJ: Lawrence Erlbaum.

Kellaghan, T., & Greaney, V. (2001). Using assessment to improve the quality of education. Paris: UNESCO, International Institute for Educational Planning.

Kozma, R. B. (Ed.) (2003). Technology, innovation, and educational change: A global perspective. Eugene, OR: International Society for Technology in Education.

Lee, E. Y. C., Chan, C. K., & van Aalst, J. (2006). Students assessing their own collaborative knowledge building. International Journal of Computer-Supported Collaborative Learning 1, 57-87.

Lin, S. S. J., Liu, E. Z. F., & Yuan, S. M. (2001). Web-based peer assessment: Feedback for students with various thinking styles. Journal of Computer Assisted Learning, 17, 420-432.

Little, A., & Wolf, A. (1996). Assessment in transition: learning, monitoring and selection in international perspective. London: Pergamon.

McFarlane, A. (2001). Perspectives on the relationships between ICT and assessment. Journal of Computer Assisted Learning, 17, 227-234.

McFarlane, A. (2003). Assessment for the digital age. Assessment in Education: Principles, Policy & Practice, 10, 261-266.

Nachmias, R., Mioduser, D., & Forkosh-Baruch, A. (2008). Innovative practices using technology: The curriculum perspective. In J. Voogt & G. Knezek (Eds.), International handbook of information technology in primary and secondary education (pp. xxx-xxx). New York: Springer.

Nunes, C. A. A., Nunes, M. M. R., & Davis, C. (2003). Assessing the inaccessible: metacognition and attitudes. Assessment in Education: Principles, Policy & Practice, 10, 375-388.

Olson, D. R. (2003). Psychological theory and educational reform: how school remakes mind and society. Cambridge, England: Cambridge University Press.

O’Neil, H. F., Chuang, S., & Chung, G. K. W. K. (2003). Issues in the computer-based assessment of collaborative problem solving. Assessment in Education: Principles, Policy & Practice, 10, 361-374.

Poggio, J., Glasnapp, D. R., Yang, X., & Poggio, A.J. (2005, February). A comparative evaluation of score results from computerized and paper and pencil mathematics testing in a large scale state assessment program. Journal of Technology, Learning, and Assessment, 3(6). Retrieved June 7 2007 at

Pommerich, M. (2004, February). Developing computerized versions of paper-and-pencil tests: Mode effects for passage-based tests. Journal of Technology, Learning, and Assessment, 2(6). Retrieved June 7 2007 at

Quellmalz, E. S., & Kozma, R. (2003). Designing assessments of learning with technology. Assessment in Education: Principles, Policy & Practice, 10, 389-408.

Raikes, N., & Harding, R. (2003). The horseless carriage stage: replacing conventional measures. Assessment in Education: Principles, Policy & Practice, 10, 267-278.

Ridgway, J., & McCusker, S. (2003). Using computers to assess new educational goals. Assessment in Education: Principles, Policy & Practice, 10, 309-328.

Ridgway, J., McCusker, S., & Pead, D. (2004). Literature Review of E-assessment (Report 10). Bristol, England: Futurelab.

Rudner, L. (1998). Item banking. Practical Assessment, Research & Evaluation, 6(4). Retrieved April 4, 2007 from

Russell, M. (1999). Testing on computers: A follow-up study comparing performance on computer and on paper. Education Policy Analysis Archives, 7(20). Retrieved June 7 2007 at .

Russell, M., Goldberg, A., & O’Connor, K. (2003). Computer-based testing and validity: a look into the future, Assessment in Education: Principles, Policy & Practice, 10, 279-294.

Russell, M., & Haney, W. (2000, March 28). Bridging the gap between testing and technology in schools. Education Policy Analysis Archives, 8(19). Retrieved June 7 2007 at

Sefton-Green, J., & Sinker, R. (Eds.) (2000). Evaluating creativity: making and learning by young people. London: Routledge.

Somekh, B., & Mavers, D. (2003). Mapping learning potential: students’ conceptions of ICT in their world. Assessment in Education: Principles, Policy & Practice, 10, 409-420.

van den Akker, J. (2003). Curriculum perspectives: An introduction. In J. van den Akker, U. Hameyer, & W. Kuiper (Eds.), Curriculum landscapes and trends (pp. 1-10). Dordrecht, the Netherlands: Kluwer Academic Publishers.

Van der Linden, W.J., & Glas, C.A.W. (2000). Computerized adaptive testing: theory and practice. Dordrecht, the Netherlands: Kluwer Academic Publishers.

Voogt, J., & Pelgrum, W. J. (2003). ICT and the curriculum. In R. B. Kozma (Ed.), Technology, innovation, and educational change: A global perspective (pp. 81-124). Eugene, OR: International Society for Technology in Education.

Voogt, J. (2008). IT and curriculum processes: Dilemmas and challenges. In J. Voogt & G. Knezek (Eds.), International handbook of information technology in primary and secondary education (pp. xxx-xxx). New York: Springer.

Walton, S. (2005). The eVIVA project: Using e-portfolio in the classroom. BETT, January 2005. Retrieved June 7 2007 at .uk/downloads/10359_eviva_bett_2005.pdf

Wasson, B., Ludvigsen, S., & Hoppe, U. (Eds.) (2003). Designing for change in networked learning environments: proceedings of the International Conference on Computer Support for Collaborative Learning 2003. Computer-Supported Collaborative Learning Series: Vol 2. Dordrecht, the Netherlands: Kluwer Academic Publishers.

Wertsch, J. V., del Rio, P., & Alvarez, A. (Eds.) (1995). Sociocultural studies of mind. Cambridge, England: Cambridge University Press.

Woodward, H., & Nanlohy, P. (2004). Digital portfolios in pre-service teacher education. Assessment in Education: Principles, Policy & Practice, 11, 167-178.

Keywords

Assessment; Formative; Summative; Educational change; Information technologies (IT); Theories of learning

Index

Assessment; information technologies (IT); IT and assessment; educational change; summative assessment; formative assessment; portfolio; digital portfolio; standardized testing; adaptive testing; peer assessment; self assessment; multiple choice; classroom assessment; teaching; learning; assessment practices; IT and change; mode differences; computer-based tests; paper-and-pencil tests; affordances of IT; web-based peer assessment; higher-order thinking skills; life long learning; IT literacy; digital literacy; simulations; changing assessment practices

Glossary

assessment; digital portfolio; IT; IT literacy; formative assessment; portfolio; summative assessment

Ola Erstad

Affiliation: University of Oslo

Email: ola.erstad@ped.uio.no

Mailing address: Pb. 1092 Blindern, 0317 Oslo, Norway

Tel.: 47-22855216

Fax: 47-22858241
