


Learning Connection—Teaching Guide

Assessment: Using objective assessments

Contents

Introduction
Different types of objective assessment
Different levels of understanding
Objective assessment and Graduate Qualities
Writing and using objective assessments
Evaluation of objective assessments
Conclusion
Resources and references

Introduction

Objective assessment is a generic term for tests where the marking is objective because there are clear right and wrong answers. Objective assessments can be summative, where marks contribute to the student’s final grade, or formative, where students receive feedback on their performance that does not directly contribute to final grades. Objective assessments can be set to test a wide topic area, albeit sometimes superficially. For large classes, they offer an efficient way of testing many students within short timeframes, particularly when computers are used to assist marking. As with all forms of assessment, it is necessary to align the assessment with the desired learning outcomes for the course. Writing appropriate questions (also called items) and answer options, including ‘distractors’, can be a complex exercise. This Guide outlines different types of objective assessment and examines their use in relation to the level of student understanding they test and the graduate quality learning outcomes required. It also offers advice and further resources for writing and evaluating objective assessments that are fair and allow students to gauge their level of understanding accurately.

Different types of objective assessment

There are several types of objective assessment, falling into three main groups: true/false, multiple choice and extended matching.

True/False

True/False questions, in their simplest form, comprise a statement (or stem) containing only one idea, to which the student responds by selecting either true or false. These questions can also be nested within a scenario, for example:

In a 3-year-old female patient who has recently been diagnosed with acute lymphocytic leukaemia, you would consider treatment with:

T F Allopurinol

T F Cytosine arabinoside

T F Cytarabine

T F Methotrexate

Multiple choice

These objective assessments contain a question stem and normally four answer options, one of which is correct and the others not. A variant is the multiple response question, where more than one option is correct. Multiple choice items can also pose more elaborate questions that require significant interpretation and analysis; these have been called ‘context dependent questions’.[i] Several question stems and option sets can arise from a single context. Contexts can include case studies and scenarios, and may include graphical and tabulated data to be analysed.

Extended matching

Extended matching assessments normally consist of a group of questions that are written around a single theme. A longer list of answer options is used repeatedly for a series of question stems, each worth a mark. The student must select the most appropriate answer option from the one long list for each of the question stems.[ii]
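Purely as an illustration (not part of the original Guide), the item types described above could be represented with simple data structures when building an online question bank; the class and field names below are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrueFalseItem:
    # A single statement (stem) containing one idea; the student answers True or False.
    statement: str
    answer: bool

@dataclass
class MultipleChoiceItem:
    # A question stem with (normally) four answer options, exactly one of which is correct.
    stem: str
    options: List[str]
    correct_index: int

@dataclass
class ExtendedMatchingSet:
    # A group of stems written around one theme, all sharing a single long option list.
    theme: str
    options: List[str]
    stems: List[str]
    answers: List[int]  # for each stem, the index of the best option in `options`
```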

Different levels of understanding

When designing any assessment, it is worthwhile considering the depth of understanding the student will need in order to complete the task. John Biggs, in his book ‘Teaching for Quality Learning at University’, describes the range of understanding that students use in learning and relates each level to verbs used in assessment. He writes,

High-level, extended abstract involvement is indicated by such verbs as ‘theorize’, ‘hypothesize’, ‘generalize’, ‘reflect’, ‘generate’ and so on. They call for the student to conceptualise at a level extending beyond what has been dealt with in actual teaching. The next level of involvement, relational, is indicated by ‘apply’, ‘integrate’, ‘analyse’, ‘explain’ and the like; they indicate orchestration between facts and theory, action and purpose. ‘Classify’, ‘describe’, ‘list’ indicate a multi-structural level of involvement: the understanding of boundaries, but not of systems. ‘Memorise’, ‘identify’, ‘recognise’ are unistructural: direct, concrete, each sufficient to itself, but minimalistic.[iii]

He argues that most objective assessments, in particular multiple choice questions, prompt students to use superficial learning skills, where students are required to memorise facts and identify correct answers from incorrect ones. Studies have shown that students mostly use superficial study techniques to prepare for multiple choice question tests.[iv] Partly for these reasons, it is recommended that summative objective assessments contribute only a proportion of the final grade, with the remaining assessments examining higher order understandings. It is also advised that considerable effort be put into constructing good tests, which examine not only uni-structural and multi-structural understandings but also extend to relational, and perhaps even extended abstract, levels of understanding.

Objective assessment and Graduate Qualities

Many objective assessments aim only to measure the uni-structural and multi-structural levels of understanding of the body of knowledge of the topic (Graduate Quality 1). This level of test may be suitable in the early stages of a course or program when terms, definitions and concepts are being introduced.

When objective assessment is formative, incorporating feedback, it allows students to critically evaluate their own understanding of an area and determine what is required to meet the desired learning outcomes, thus assisting in the development of Graduate Quality 2. UniSAnet online quizzes can be used to create flexible and student-centred learning environments.

Objective assessments can include more complex types (e.g. context dependent questions) that can be used to evaluate students’ problem solving abilities, thus developing Graduate Quality 3. For example, they may require analysis of case studies and scenarios that incorporate images, graphs or tables of data.

Writing and using objective assessments

General

▪ Focus on the desired student outcomes.

▪ Make sure that the standard of the question is appropriate to the group being tested.

▪ Make objective assessments only a portion of the total student assessment.

▪ For formative quizzes embed feedback for correct and incorrect answer options.

▪ If quizzes are being used for summative assessment, prepare a trial formative quiz for students to practise on.

▪ Use smaller, frequent quizzes and tests instead of a single giant one.

▪ Write several banks of equivalent questions and rotate their use in examinations (see the sketch after this list).

▪ If writing questions that require higher order skills (e.g. problem solving), check the mark allocation for the question. If you wish to award extra marks for this type of question, this information needs to be available to students on the examination paper.

▪ Remember to include instructions to candidates on the examination paper. Instructions should state ‘choose the best answer’ or ‘choose all those that apply’ as appropriate.

▪ If only one set of objective assessment items has been prepared, it is common practice to arrange for examination papers not to leave the examination room (so the items can be reused).

▪ Some people have used Endnote to store banks of objective assessment items for easy selection and assembly into different examination papers.[v] If you are planning to run summative objective tests online, however, consider using TellUs2 to build and store your question bank.[vi]
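As a hedged sketch of the advice above about rotating question banks and assembling examination papers, the following samples items from stored banks to build one paper. The bank contents and function are hypothetical and do not reflect how Endnote or TellUs2 work.

```python
import random

# Hypothetical banks of equivalent items, keyed by topic; in practice these would be
# exported from whichever tool holds the question bank.
banks = {
    "pharmacology": ["P1", "P2", "P3", "P4"],
    "physiology": ["F1", "F2", "F3", "F4"],
}

def assemble_paper(banks, items_per_topic, seed=None):
    """Draw a fresh selection of items from each bank for one examination paper."""
    rng = random.Random(seed)
    paper = []
    for topic, items in banks.items():
        paper.extend(rng.sample(items, items_per_topic))
    rng.shuffle(paper)  # vary the order of questions between papers
    return paper

# Example: two items per topic, with a seed so the same paper can be regenerated.
print(assemble_paper(banks, items_per_topic=2, seed=1))
```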

Writing question stems

▪ Include as much of the item as possible in the question stem, while keeping the question as brief as possible.

▪ Students should know what is being asked from the stem (not from the choices).

▪ Help students see CRUCIAL words in the stem by making them all capitals or italics.

▪ Don’t lift questions from text books.

▪ Use negatively worded questions when there are more right answers than wrong ones.

▪ Avoid double, or triple negatives.

▪ Do not include two concepts in the same question.

▪ Be careful about grammatical clues that indicate the correct option (e.g. plural, singular, tense, a or an).

▪ Consider using graphics to illustrate the question.

Writing answer options

▪ Avoid obviously wrong answers; all alternatives should be reasonable, with one clearly better than the others.

▪ Ensure that answer options all relate to the question in some way.

▪ Use at least four alternatives, unless using True/False objective tests.

▪ Use ‘all of the above’ phrases sparingly.

▪ Avoid using ‘always’ and ‘never’. These absolutes are giveaways!

▪ Avoid making correct answers longer or shorter than the distractor answer options.

▪ Alternative answers should be grammatically consistent with the question.

▪ Vary the position of the correct answer (Note: if using UniSAnet quizzes the answer options can be automatically randomised); a small shuffling sketch follows this list.
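A minimal sketch of the last point, for situations where answer options are not randomised automatically; the function and placeholder options are illustrative only.

```python
import random

def shuffle_options(options, correct_index, seed=None):
    """Return the options in a random order plus the new position of the correct option."""
    rng = random.Random(seed)
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    return shuffled, order.index(correct_index)

options = ["option A", "option B", "option C", "option D"]
shuffled, new_correct = shuffle_options(options, correct_index=1, seed=7)
print(shuffled, new_correct)
```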

Writing feedback

▪ Try to give feedback for each incorrect answer, explaining why it is incorrect, and how the error in understanding may have arisen.

▪ If possible, link feedback to the relevant areas of the program or text book (if online, hyperlinks can be used).

▪ Include motivational feedback for correct answer selections.

Marking objective assessments

One of the benefits of objective assessment is that the marking of student work can be streamlined in various ways.

▪ Non-academic staff can be employed to assist with the marking of paper-based objective assessments.

▪ Computers can be used to scan and mark paper-based objective tests.

▪ Online objective assessments can be delivered, marked and analysed using computers (a minimal marking sketch appears below).

Another related Teaching Guide, Assessment: Computer assisted assessment, covers this area in greater detail.
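As noted in the list above, computers can mark objective tests. A minimal marking sketch, assuming single-best-answer items identified by item IDs and no negative marking (assumptions not specified in this Guide):

```python
def mark_responses(answer_key, responses):
    """Score one student's responses against the answer key.

    answer_key: maps item id -> the correct option letter (e.g. "B")
    responses:  maps item id -> the option the student chose
    Unanswered items score zero; no negative marking is applied.
    """
    return sum(
        1 for item_id, correct in answer_key.items()
        if responses.get(item_id) == correct
    )

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}
student = {"Q1": "B", "Q2": "C"}            # Q3 left unanswered
print(mark_responses(answer_key, student))  # 1
```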

Evaluation of objective assessments

It is possible to evaluate objective assessments across the whole test (using frequency histograms of students’ scores and average scores) and item by item (using the difficulty index and the discrimination index), to determine whether questions are pitched at the appropriate level or are misleading and require revision. This Guide expands on individual item evaluation, as these parameters are less commonly available than frequency histograms and averages. A complete example of the analysis of objective assessment data is available.[vii]

The difficulty index

This parameter allows questions to be ranked in order of difficulty.

Difficulty index = (number of students giving the correct answer for the item) / (number of students taking the test)

Ranges from 0.0 to 1.0 (it is recommended that questions scoring above 0.90 be revised or discarded).
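As a worked example, if 30 of 40 students answer an item correctly, the difficulty index is 30/40 = 0.75. A minimal computation sketch (the function name and figures are illustrative only):

```python
def difficulty_index(correct_count, total_students):
    """Proportion of students answering the item correctly, from 0.0 to 1.0."""
    return correct_count / total_students

# Example: 30 of 40 students answered the item correctly.
print(difficulty_index(30, 40))  # 0.75
```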

The discrimination index

This parameter indicates how well each item discriminates between students of higher and lower ability. Various formulas are available, as are ways of determining the proportion of students that define the ‘upper’ and ‘lower’ groups.

Discrimination index = (number of students in the upper group who gave the correct answer for the item - number of students in the lower group who gave the correct answer for the item) / (number of students in one of the groups)

Ranges from –1.0 to +1.0 (recommend that questions that are …
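A similarly minimal sketch for the discrimination index, assuming equal-sized upper and lower groups (as noted above, the way these groups are defined varies); the figures are illustrative only:

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """Difference in item facility between the upper and lower groups, from -1.0 to +1.0."""
    return (upper_correct - lower_correct) / group_size

# Example: 18 of the 20 students in the upper group and 9 of the 20 students
# in the lower group answered the item correctly.
print(discrimination_index(18, 9, 20))  # 0.45
```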
