CNM Handbook for Outcomes Assessment



Contents

Overview
Purpose of this Handbook
Why CNM Conducts Formal Learning Outcomes Assessment
What Constitutes a Good Institutional Assessment Process?
Table 1: Institutional Effectiveness Goals
Introduction to the CNM Student Outcomes Assessment Process
The Greater Context of CNM Learning Outcomes Assessment
Figure 1: CNM Student Learning Outcomes Assessment Context Map
Table 2: Matrix for Continual Improvement in Student Learning Outcomes
Figure 2: Alignment of Student Learning Outcomes to CNM Mission, Vision, and Values
Course SLOs
Program and Discipline SLOs
General Education SLOs
Connections between Accreditation & Assessment
Institutional Accreditation
Program Accreditation
Developing Student Learning Outcome Statements
Table 3: Sample Program SLOs
Taxonomies and Models
Figure 3: Bloom’s Taxonomy of the Cognitive Domain
Figure 4: Bloom’s Taxonomies of the Affective and Psychomotor Domains
Table 4: Action Verbs for Creating Learning Outcome Statements
Figure 5: Webb’s Depth of Knowledge Model
Table 5: Marzano’s Taxonomy
Rewriting Compounded Outcomes
Designing Assessment Projects
Why Measurement Matters
Assessment Cycle Plans
Alignment of Course and Program SLOs
Figure 6: The CNM Assessment Process
Table 6: A Model for SLO Mapping
Table 7: Sample Rubric for Innovative Thinking in the Transformation of Ideas into Art Forms
Developing an Assessment Focus at the Course Level
Planning an Assessment Approach at the Course Level
Table 8: Common Assessment Measures
Table 9: Descriptors Related to Assessment Measures
Table 10: Sample Likert-Scale Items
Using Descriptive Rubrics to Make Assessment Coherent
Figure 7: Using Rubrics to Pool Findings from Diverse Assessment Approaches
Rubric Design
Table 11: Sample Rubric for Assessment of Decision-Making Skill
Using Formative Assessment to Reinforce Learning
Collecting Evidence of Learning
Embedded Assessment
Classroom Assessment Techniques (CATs)
Collecting Evidence beyond the Classroom
Sampling Student Learning Outcomes
IRB and Classroom Research
Analyzing, Interpreting, and Applying Assessment Findings
Analyzing Findings from Individual Measures
Interpreting Findings
Applying Findings at the Course Level
Pooling and Applying Program Findings
Glossary
References
Appendix 1: CNM Assessment Cycle Plan Form
Appendix 2: Guide for Completion of Cycle Plan
Appendix 3: Opt-In Cycle Plan Feedback Rubric
Appendix 4: CNM Annual Assessment Report Form
Appendix 5: Guide for Completion of Assessment Report
Appendix 6: Opt-In Assessment Report Feedback Rubric
Appendix 7: Rubric for Assessing Learning Outcomes Assessment Processes
Appendix 8: NMHED General Education Core Course Transfer Module Competencies

Overview

Purpose of this Handbook

This handbook was designed to assist CNM faculty and program leaders in assessing student learning outcomes and applying the findings to optimize student learning.

Why CNM Conducts Formal Learning Outcomes Assessment

CNM is dedicated to ensuring that all academic courses and curricula meet the highest level of relevancy and excellence. Thus, we are collectively committed to conducting ongoing, systematic assessment of student learning outcomes across all areas of study. CNM’s assessment processes inform decisions at course, program, and institutional levels. The resulting evidence-based changes help ensure that the education CNM students receive remains first-rate and up-to-date.

Assessment of student learning progress toward achievement of expected course outcomes is a natural part of the instructional process. However, the results of such assessment, in the absence of a formal, structured assessment process, may or may not factor into decisions for change. Having an intentional, shared approach that connects in-course assessment to broader program outcomes, documents and applies the findings, and follows up on the impact of changes benefits the college, programs, instructors, and students.

As a publicly funded institution, CNM has a responsibility to demonstrate accountability for its use of taxpayer funds. As an educational institution, CNM needs to be able to demonstrate achievement of the learning outcomes it claims to strive for or, where it falls short, to show that it can change accordingly. As an institution accredited by the Higher Learning Commission (HLC), CNM must be able to demonstrate that it regularly assesses student outcomes and systematically applies the information obtained from that assessment to continually improve student learning. Accreditation not only confirms institutional integrity, but it also enables CNM to offer students financial aid, courses that transfer readily to other institutions, and degrees that employers recognize as valid. Most importantly, however, as an institution that strives to be its best, CNM benefits from the ability of its faculty and administrators to make well-informed decisions.

Programs are improved through genuine appraisal of student learning when that appraisal is used to implement well-considered enhancements. Assessment can help perpetuate successful practices, address obstacles to students’ success, and encourage innovative strategies. Often, when a program’s faculty begins to develop assessment methodologies related to program outcomes, the outcome statements themselves get refined and better defined, and the program components become more coherently integrated. Evidence obtained through assessment can also substantiate programs’ requests for external support, such as project funding, student services, professional development, etc.

For faculty, assessment often leads to ideas for innovative instructional approaches and/or better coordination of program efforts. Connecting classroom assessment to program assessment helps instructors clarify how their teaching contributes to program outcomes and complements the instructional activities of their colleagues. Active faculty engagement in program assessment develops breadth of perspective, which in turn facilitates greater intentionality in classroom instruction, clearer communication of expectations, and more objective evaluation of students’ progress.

Moreover, faculty who focus on observing and improving student learning, as opposed to observing and improving teaching, develop greater effectiveness in helping students to change their study habits and to become more cognizant of their own and others’ thinking processes.

In addition, the CNM assessment process provides ample opportunity for instructors to receive recognition for successes, mentor their colleagues, assume leadership roles, and provide valuable college service.

Ultimately, students are the greatest beneficiaries of the CNM assessment process. Students receive a more coherent education when their courses are delivered by a faculty that keeps program goals in mind, is attentive to students’ learning needs, and is open to opportunities for improvement. Participation in assessment, particularly when instructors discuss the process in class, helps students become more aware of the strengths and weaknesses in their own approach to learning. In turn, students are able to better understand how to maximize their study efforts; assume responsibility for their own learning; and become independent, lifelong learners. And, students benefit from receiving continually improved instruction through top-notch programs at an accredited and highly esteemed institution.

What Constitutes a Good Institutional Assessment Process?

The State University of New York’s Council on Assessment developed a rubric that does a good job of describing best practices in assessment (2015). The goals identified in the SUNY rubric, which are consistent with what institutional accreditation review teams tend to look for, are listed in Table 1. The rubric is available at .

Table 1: Institutional Effectiveness Goals

|Aspect |Element |Goal |
|Design |Plan |The institution has a formal assessment plan that documents an organized, sustained assessment process covering all major administrative units, student support services, and academic programs. |
|Design |Outcomes |Measurable outcomes have been articulated for the institution as a whole and within functional areas/units, including for courses and programs and nonacademic units. |
|Design |Alignment |More specific subordinate outcomes (e.g., course) are aligned with broader, higher-level outcomes (e.g., programs) within units, and these are aligned with the institutional mission, goals, and values. |
|Implementation |Resources |Financial, human, technical, and/or physical resources are adequate to support assessment. |
|Implementation |Culture |All members of the faculty and staff are involved in assessment activities. |
|Implementation |Data Focus |Data from multiple sources and measures are considered in assessment. |
|Implementation |Sustainability |Assessment is conducted regularly, consistently, and in a manner that is sustainable over the long term. |
|Implementation |Monitoring |Mechanisms are in place to systematically monitor the implementation of the assessment plan. |
|Impact |Communication |Assessment results are readily available to all parties with an interest in them. |
|Impact |Strategic Planning and Budgeting |Assessment data are routinely considered in strategic planning and budgeting. |
|Impact |Closing the Loop |Assessment data have been used for institutional improvement. |

Source: The SUNY Council on Assessment

Introduction to the CNM Student Outcomes Assessment Process

CNM’s faculty-led Student Academic Assessment Committee (SAAC) drives program assessment. Each of CNM’s six schools has two voting faculty representatives on the committee who bring their school’s perspectives to the table and help coordinate the school’s assessment reporting. In addition, SAAC includes four ex officio members, one each from the College Curriculum Committee (CCC), the Cooperative for Teaching and Learning (CTL), the Deans Council, and the Office of Planning and Institutional Effectiveness (OPIE). The OPIE representative is the Senior Director of Outcomes and Assessment, whose role is facilitative.

SAAC has two primary responsibilities: 1) providing a consistent process for documenting and reporting outcomes results and actions taken as a result of assessment, and 2) facilitating a periodic review of the learning outcomes associated with the CNM General Education Core Curriculum.

SAAC faculty representatives have put in place a learning assessment process that provides consistency and facilitates ongoing improvement while respecting disciplinary differences, faculty expertise, and academic freedom. This process calls for all certificate and degree programs, general education areas, and discipline areas to participate in what is referred to for the sake of brevity as program assessment. The goal is assessment of student learning within programs, not assessment of programs themselves (a subtle but important distinction).

The faculty of each program/area develops and maintains its own assessment cycle plan, outlining when and how all of the program’s student learning outcomes (SLOs) will be assessed over the course of five years. SAAC asks that the cycle plan, which can be revised as needed, address at least one SLO per year. And, SAAC strongly recommends assessing each SLO for two consecutive years so that changes can be made on the basis of the first year’s findings, and the impact of those changes can be assessed during the second year. At the end of the five-year cycle, a new 5-year assessment cycle plan is due.

Program faculty may use any combination of course-level and/or program-level assessment approaches they deem appropriate to evaluate students’ achievement of the program-level learning outcomes. For breadth and depth, multiple approaches involving multiple measures are encouraged.

A separate annual assessment reporting form (see Appendix 4) is used to summarize the prior year’s assessment findings and describe changes planned on the basis of the findings. This form can be prepared by any designated representative within the school. Ideally, findings from multiple measures in multiple courses are collaboratively interpreted by the program faculty in a group meeting prior to completion of the report.

For public access, SAAC posts assessment reports, meeting minutes and other information at . For access by full-time faculty, SAAC posts internal documents in a Learning Assessment folder on a K drive.

In addition, the Senior Director of Outcomes and Assessment publishes a monthly faculty e-newsletter called The Loop, offers faculty workshops, and is available to assist individual faculty members and/or programs with their specific assessment needs.

The Greater Context of CNM Learning Outcomes Assessment

CNM learning outcomes assessment (a.k.a., program assessment) does not operate in isolation. The diagram in Figure 1 illustrates the placement of program assessment among interrelated processes that together inform institutional planning and budgeting decisions in support of improved student outcomes.

Figure 1: CNM Student Learning Outcomes Assessment Context Map

[pic]

The CNM General Education Core Curriculum is a group of broad categories of educational focus, called “areas” (such as Communications), each with an associated set of student learning outcomes and a list of courses that address those outcomes. For example, Mathematics is one of the CNM Gen Ed areas. “Use and solve various kinds of equations” is one of the Mathematics student learning outcomes. And, MATH 1315, College Algebra, is a course that may be taken to meet the CNM Mathematics Gen Ed requirement.

Similarly, the New Mexico General Education Core Course Transfer Curriculum is a group of areas, each with an associated set of student learning outcomes – which the New Mexico Higher Education Department (NMHED) refers to as “competencies” – and a list of courses that address those outcomes. CNM currently has over 120 of its own Gen Ed courses included in the State’s transfer core. This is highly beneficial for CNM’s students because these courses are guaranteed to transfer between New Mexico postsecondary public institutions. As part of the agreement to have these courses included in the transfer core, CNM is responsible for continuous assessment in these courses and annual reporting to verify that students achieve the State’s competencies. See the General Education SLOs section below for more information on general education learning outcomes.

Specific course outcomes within the programs and disciplines provide the basis for program and discipline outcomes assessment. Program statistics (enrollment and completion rates, next-level outcomes, etc.) also inform program-level assessment. Learning assessment, in turn, informs instructional approaches, curriculum development/revision, planning, and budgeting.

While learning outcomes assessment is separate and distinct from program review, assessment does inform the program review process through its influence on programming, curricular, instructional, and funding decisions. Program review is a viability study that looks primarily at program statistics (such as enrollment, retention, and graduation rates) and curricular and job market factors. By keeping the focus of assessment on student learning rather than demonstration of program effectiveness, the indirect, informative relationship between program-level learning outcomes assessment and program viability studies enables CNM faculty to apply assessment in an unguarded manner, to explore uncertain terrain, and to acknowledge findings openly. Keeping the primary focus of assessment on improvement, versus program security, helps to foster an ethos of inquiry, scholarly analysis, and civil academic discourse around assessment.

Along with program assessment, a variety of other assessment processes contribute synergistically to ongoing improvement in student learning outcomes at CNM. Table 2 describes some of the key assessment processes that occur regularly, helping to foster the kind of institutional excellence that benefits CNM students.

Table 2: Matrix for Continual Improvement in Student Learning Outcomes

[pic]

For a larger, more legible version of Table 2, see the document “Ongoing Improvement Matrix” at K:/Learning Assessment/Assessment Process.

As illustrated in Figure 2 on the following page, outcomes assessment at CNM is integrally aligned to the college’s mission, vision, and values. CNM’s mission to “Be a leader in education and training” points the way, and course instruction provides the foundation for realization of that vision.

In the process of achieving course learning outcomes, students develop competencies specific to their programs of study. Degree-seeking students also develop general education competencies. The CNM assessment reporting process focuses on program-level and Gen Ed learning outcomes, as informed by course-level assessment findings and, where appropriate, program-level assessment findings.

Figure 2: Alignment of Student Learning Outcomes to CNM Mission, Vision, and Values

[pic]

Course SLOs

At CNM, student learning outcome statements (which describe what competencies should be achieved upon completion of instruction) and the student learning outcomes themselves (what is actually achieved) are both referred to as SLOs. Most CNM courses have student learning outcome statements listed in the college’s curriculum management system. (Exceptions include special topics offerings and individualized courses.) The course outcome statements are directly aligned to program outcome statements, general education outcome statements, and/or discipline outcome statements. While course-specific student learning outcomes are the focus of course-level assessment, the alignments to overarching outcome statements make it possible to apply course-level findings in assessment of the overarching outcomes (i.e., program, Gen Ed, or discipline outcomes).

Program and Discipline SLOs

At CNM, the word program usually refers to areas of study that culminate in degrees (AA, AS, or AAS) or certificates. The designation of discipline is typically reserved for areas of study that do not lead to degrees or certificates, such as the College Success Experience, Developmental Education, English as a Second Language, and GED Preparation. Discipline, however, does not refer to general education, which has its own designation, with component general education areas. Note, however, that the word program is frequently used as short-hand to refer to degree, certificate, general education, and discipline areas as a group – as in program assessment.

Each program and discipline has student learning outcome statements (SLOs) that represent the culmination of the component course SLOs. Program and discipline SLOs are periodically reviewed through program assessment and/or, where relevant, through program accreditation review processes and updated as needed.

General Education SLOs

With the exception of Computer Literacy, each distribution area of the CNM General Education Core Curriculum has associated with it a group of courses from which students typically can choose (though some programs require specific Gen Ed courses). Computer Literacy has only one course option (though some programs waive that course under agreements to develop the related SLOs through program instruction). Each of the courses within a Gen Ed area is expected to develop all of that area’s SLOs. And, each Gen Ed course should be included at least once in the area’s 5-year assessment cycle plan.

The vast majority of CNM’s general education courses have been approved by the New Mexico Higher Education Department (NMHED) for inclusion in the New Mexico Core Course Transfer Curriculum (a.k.a., the State’s transfer core). Inclusion in this core guarantees transfer toward general education requirements at any other public institution of higher education in New Mexico. To obtain and maintain inclusion in the State’s transfer core, CNM agrees to assess and report on student achievement of associated competencies in each course (see Appendix 8). However, because the NMHED permits institutions to apply their own internal assessment schedules toward the State-level reporting, it is not necessary that CNM report assessment findings every year for every course in the State’s transfer core. CNM Gen Ed areas may apply the CNM 5-year assessment cycle plan, so long as every area course is included in the assessment reporting process at some point during the 5-year cycle.

During the 2014-15 academic year, CNM conducted a review of its own General Education Core Curriculum and decided to adopt the competencies from the State’s General Education Core Course Transfer Curriculum as its own for the areas of Communications, Mathematics, Laboratory Science, Social and Behavioral Sciences, and Humanities and Fine Arts. CNM opted to retain the additional Gen Ed area of Computer Literacy. The former areas of Critical Thinking and Life Skills/Teamwork, which were embedded in program reporting but were not directly assessed, were removed from the CNM Gen Ed core. These changes are in effect for the 2016-2018 catalog, and for reporting of 2014-15 assessment findings in October of 2016, reporters have been given the option of using either the former student learning outcomes or the new outcomes.

Shortly after CNM made the decision to adopt the State’s competencies, NMHED Secretary Damron announced plans to revise the State’s transfer core. Those revisions are expected to be decided upon by October of 2016. Normally, SAAC would conduct a review of CNM’s Gen Ed outcomes once every six years, with the next review due in 2021. However, due to the revision of the State’s transfer core, SAAC plans to conduct another review of the CNM Gen Ed outcomes in 2017 for the 2018-2020 catalog.

As noted, general education assessment cycle plans should include assessment within all area courses that serve as Gen Ed options. Because this can mean conducting assessment in a significant number of courses, it is recommended that sampling techniques be employed to collect evidence that is representative without overburdening the faculty. See Collecting Evidence of Learning for further information on sampling techniques.
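As a minimal illustration of what such sampling might look like, the Python sketch below draws a simple random sample of student artifacts from across the sections of a single course. The course name, section rosters, and sample size here are hypothetical placeholders, not CNM data; program faculty would substitute their own rosters and an agreed-upon sample size.

```python
import random

# Hypothetical rosters: each Gen Ed section paired with its students' artifact IDs.
sections = {
    "ENGL 1110-01": ["s001", "s002", "s003", "s004"],
    "ENGL 1110-02": ["s005", "s006", "s007"],
    "ENGL 1110-03": ["s008", "s009", "s010", "s011", "s012"],
}

random.seed(2024)  # record the seed so the sample can be reproduced and documented

# Pool every (section, student) pair, then draw a simple random sample from the pool.
pool = [(sec, sid) for sec, roster in sections.items() for sid in roster]
sample_size = min(6, len(pool))  # assess roughly half of the artifacts
for sec, sid in sorted(random.sample(pool, sample_size)):
    print(f"Collect artifact {sid} from {sec}")
```

Because the sample is drawn from the pooled list rather than section by section, larger sections contribute proportionally more artifacts, which keeps the evidence representative of overall enrollment.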

An alternative Gen Ed reporting form that uses the NMHED format and allows input by individual instructors is available in an Access database at K:/Learning Assessment/NMHED Reporting Database.

CNM’s General Education Core Curriculum has two separate but related sets of areas and outcomes: one for Associate of Arts (AA) and Associate of Science (AS) degrees, and one for Associate of Applied Science (AAS) degrees. For AA and AS degrees, 35 credits are required, whereas for AAS degrees, 15 credits are required. Following are the current student learning outcome statements associated with each area:

CNM General Education SLOs for AA and AS Degrees

Communications

1. Analyze and evaluate oral and written communication in terms of situation, audience, purpose, aesthetics, and diverse points of view.

2. Express a primary purpose in a compelling statement and order supporting points logically and convincingly.

3. Use effective rhetorical strategies to persuade, inform, and engage.

4. Employ writing and/or speaking processes such as planning, collaborating, organizing, composing, revising, and editing to create presentations using correct diction, syntax, grammar, and mechanics.

5. Integrate research correctly and ethically from credible sources to support the primary purpose of a communication.

6. Engage in reasoned civic discourse while recognizing the distinctions among opinions, fact, and inferences.

Mathematics

1. Construct and analyze graphs and/or data sets.

2. Use and solve various kinds of equations.

3. Understand and write mathematical explanations using appropriate definitions and symbols.

4. Demonstrate problem solving skills within the context of mathematical applications.

Lab Sciences

1. Describe the process of scientific inquiry.

2. Solve problems scientifically.

3. Communicate scientific information.

4. Apply quantitative analysis to scientific problems.

5. Apply scientific thinking to real world problems.

Social/Behavioral Sciences

1. Identify, describe and explain human behaviors and how they are influenced by social structures, institutions, and processes within the contexts of complex and diverse communities.

2. Articulate how beliefs, assumptions, and values are influenced by factors such as politics, geography, economics, culture, biology, history, and social institutions.

3. Describe ongoing reciprocal interactions among self, society, and the environment.

4. Apply the knowledge base of the social and behavioral sciences to identify, describe, explain, and critically evaluate relevant issues, ethical dilemmas, and arguments.

Humanities & Fine Arts

1. Analyze and critically interpret significant primary texts and/or works of art (this includes fine art, literature, music, theatre, & film).

2. Compare art forms, modes of thought and expression, and processes across a range of historical periods and/or structures (such as political, geographic, economic, social, cultural, religious, and intellectual).

3. Recognize and articulate the diversity of human experience across a range of historical periods and/or cultural perspectives.

4. Draw on historical and/or cultural perspectives to evaluate any or all of the following: contemporary problems/issues, contemporary modes of expression, and contemporary thought.

Computer Literacy

1. Demonstrate knowledge of basic computer technology and tools

2. Use software applications to produce, format, analyze and report information to communicate and/or to solve problems

3. Use internet tools to effectively acquire desired information

4. Demonstrate the ability to create and use various forms of electronic communication adhering to contextually appropriate etiquette

5. Demonstrate the ability to create, name, organize, save and retrieve data and/or information in an electronic file management system

CNM General Education SLOs for AAS Degrees

Communications

1. Analyze and evaluate oral and written communication in terms of situation, audience, purpose, aesthetics, and diverse points of view.

2. Express a primary purpose in a compelling statement and order supporting points logically and convincingly.

3. Integrate research correctly and ethically from credible sources to support the primary purpose of a communication.

4. Engage in reasoned civic discourse while recognizing the distinctions among opinions, fact, and inferences.

Mathematics

1. Use and solve various kinds of equations.

2. Understand and write mathematical explanations using appropriate definitions and symbols.

3. Demonstrate problem solving skills within the context of mathematical applications.

Human Relations

1. Describe how the socio-cultural context affects behavior and how behavior affects the socio-cultural context

2. Identify how individual perspectives and predispositions impact others in social, workplace and global settings

Computer Literacy

1. Demonstrate knowledge of basic computer technology and tools

2. Use software applications to produce, format, analyze and report information to communicate and/or to solve problems

3. Use internet tools to effectively acquire desired information

4. Demonstrate the ability to create and use various forms of electronic communication adhering to contextually appropriate etiquette

5. Demonstrate the ability to create, name, organize, save and retrieve data and/or information in an electronic file management system

Connections between Accreditation & Assessment

Institutional Accreditation

CNM is accredited by the Higher Learning Commission (HLC), one of six regional institutional accreditors in the U.S. Information regarding the accreditation processes and criteria is available at . HLC accreditation criteria that have particular relevance to assessment of student outcomes are listed below:

3.A. The institution’s degree programs are appropriate to higher education.

2. The institution articulates and differentiates learning goals for its undergraduate, graduate, post-baccalaureate, post-graduate, and certificate programs.

3. The institution’s program quality and learning goals are consistent across all modes of delivery and all locations (on the main campus, at additional locations, by distance delivery, as dual credit, through contractual or consortial arrangements, or any other modality).

3.B. The institution demonstrates that the exercise of intellectual inquiry and the acquisition, application, and integration of broad learning and skills are integral to its educational programs.

2. The institution articulates the purposes, content, and intended learning outcomes of its undergraduate general education requirements. The program of general education is grounded in a philosophy or framework developed by the institution or adopted from an established framework. It imparts broad knowledge and intellectual concepts to students and develops skills and attitudes that the institution believes every college-educated person should possess.

3. Every degree program offered by the institution engages students in collecting, analyzing, and communicating information; in mastering modes of inquiry or creative work; and in developing skills adaptable to changing environments.

3.E. The institution fulfills the claims it makes for an enriched educational environment.

2. The institution demonstrates any claims it makes about contributions to its students’ educational experience by virtue of aspects of its mission, such as research, community engagement, service learning, religious or spiritual purpose, and economic development.

4.B. The institution demonstrates a commitment to educational achievement and improvement through ongoing assessment of student learning.

1. The institution has clearly stated goals for student learning and effective processes for assessment of student learning and achievement of learning goals.

2. The institution assesses achievement of the learning outcomes that it claims for its curricular and co-curricular programs.

3. The institution uses the information gained from assessment to improve student learning.

4. The institution’s processes and methodologies to assess student learning reflect good practice, including the substantial participation of faculty and other instructional staff members.

5.C. The institution engages in systematic and integrated planning.

2. The institution links its processes for assessment of student learning, evaluation of operations, planning, and budgeting.

Program Accreditation

Many of CNM’s technical and professional programs are accredited by field-specific accreditation bodies. Maintaining program accreditation contributes to program quality by ensuring that instructional practices reflect current best practices and industry standards. Program accreditation is essentially a certification of instructional competency and degree credibility. Because in-depth program assessment is typically a major component of accreditation reporting, the CNM assessment report accommodates carry-over of assessment summaries from accreditation reports. In other words, reporters are encouraged to minimize unnecessary duplication of reporting efforts by copying and pasting write-ups done for accreditation purposes into the corresponding sections of the CNM assessment report form.

Developing Student Learning Outcome Statements

Student learning outcome statements (SLOs) identify the primary competencies the student should be able to demonstrate once the learning has been achieved. SLOs derive from and reflect your teaching goals, so it is important to start with clearly articulated teaching goals. What do you want to accomplish? From that, you can develop SLOs that identify what your students need to learn to do. To be used for assessment, SLO statements need to be measurable. For this reason, the current convention is to use phrases that begin with action verbs. Each phrase completes an introductory statement that includes “the student will be able to.” For example:

Table 3: Sample Program SLOs

|Upon completion of this program, the student will be able to: |

|Explain the central importance of metabolic pathways in cellular function. |

|Use mathematical methods to model biological systems. |

|Integrate concepts drawn from both cellular and organismal biology with explanations of evolutionary adaptation. |

|Develop experimental models that support theoretical concepts. |

|Perform laboratory observations using appropriate instruments. |

|Critique alternative explanations of biological phenomena with regard to evidence and scientific reasoning. |

At the course level as well as the program level, SLOs should focus on competencies that are applicable beyond college. Rather than address specific topics or activities in which the students will be expected to engage, identify the take-aways (no more than 10) that will help students succeed later in employment and/or in life.

Tips:

• Focus on take-away competencies, not just activities or scores.

• Focus on student behavior/action versus mental or affective processes.

• Choose verbs that reflect the desired level of sophistication.

• Avoid compound components unless their synthesis is needed.

The verbs used in outcome statements carry important messages about the type of skill expected. Therefore, much emphasis in SLO writing is placed upon selection of action verbs.

Cognitive processes such as understanding and affective processes such as appreciation are difficult to measure. So, if your goal is to have students understand something, ask yourself how they can demonstrate that understanding. Will they explain it, paraphrase it, select it, or do something else? Is understanding all you want, or do you also want students to apply their understanding in some way?

Similarly, if your goal is to develop student appreciation for something, how will students demonstrate that appreciation? They could tell you they appreciate it, but how would you know they weren’t just saying what they thought you wanted to hear? Perhaps if they were to describe it, defend it, contrast it with something else, critique it, or use it to create something new, then you might be better able to conclude that they appreciate it.

Taxonomies and Models

Benjamin Bloom’s Taxonomy of the Cognitive Domain is often used as a guide for writing SLOs. The taxonomy is typically depicted as a layered triangle, and many consider it to represent a progression in sophistication of skills, with foundational skills at the bottom and “higher level” skills at the top. However, the model also suggests that mastery of the foundational skills is necessary to reach the higher rungs. Being considered foundational, therefore, does not make a skill level any less important.

Bloom originally presented his model in 1956 (Taxonomy). A former student of Bloom’s, Lorin Anderson, along with others, revised the taxonomy in 2000, flipping the top two levels and substituting verbs for Bloom’s nouns (A Taxonomy). A side-by-side comparison is provided in Figure 3. Both versions are commonly called “Bloom’s Taxonomy.”

It is worth noting that Benjamin Bloom also developed taxonomies for the affective and psychomotor domains, and Anderson et al. also developed revised versions of these. The two are shown side-by-side in Figure 4.

As mentioned, Bloom’s taxonomies imply that the ability to perform the behaviors at the top depends upon having attained prerequisite knowledge and skills through the behaviors in the lower rungs. Not all agree with this concept, however, and the taxonomies have been used by many to discourage instruction directed at the lower-level skills.

Despite debate over their validity, Bloom’s taxonomies can be useful references in selecting verbs for outcome statements, describing stages of outcome development, identifying appropriate assessment methods, and interpreting assessment information.

Building on Bloom’s revised taxonomy, Table 4 on the following page presents a handy reference that was created and shared by the Institutional Research Office at Oregon State University. The document lists for SLO writers a variety of action verbs corresponding to each level of the cognitive domain taxonomy.

You may find it useful to think of Bloom’s revised taxonomy as being comprised of cognitive processes and the associated outcome verbs in Table 4 as representing more observable, measurable behaviors.

Table 4: Action Verbs for Creating Learning Outcome Statements

[pic]

Other models have been developed as alternatives to Bloom’s taxonomies. While all such conceptual models have their shortcomings, the schema they present may prove useful in the selection of appropriate SLO verbs.

For example, Norman Webb’s Depth of Knowledge, first published in 2002 and later illustrated as shown in Figure 5 (from te@chthought), identifies four knowledge levels: Recall, Skills, Strategic Thinking, and Extended Thinking (Depth-of-Knowledge). Webb’s model is widely referenced in relation to the Common Core Standards Initiative for kindergarten through 12th grade.

Figure 5: Webb’s Depth of Knowledge Model

[pic]

In 2008, Robert Marzano and John Kendall published a “new taxonomy” that reframed Bloom’s domains as Information, Mental Processes, and Psychomotor Procedures and described six levels of processing information: Retrieval, Comprehension, Analysis, Knowledge Utilization, Meta-Cognitive System, and Self System (Designing & Assessing Educational Objectives). In Marzano’s Taxonomy, shown in Table 5, the four lower levels are itemized with associated verbs. This image was shared by the Adams County School District of Colorado (wiki.adams):

Table 5: Marzano’s Taxonomy

In a recent publication, Clifford Adelman of the Institute for Higher Education Policy advocated for the use of operational verbs in outcome statements, defining these as “actions that are directly observed in external contexts and subject to judgment” (2015). The following reference is derived from the article.

For effective learning outcomes, select verbs that:

1. Describe student acquisition and preparation of tools, materials, and texts of various types

access, acquire, collect, accumulate, extract, gather, locate, obtain, retrieve

2. Indicate what students do to certify information, materials, texts, etc.

cite, document, record, reference, source (v)

3. Indicate the modes of student characterization of the objects of knowledge or materials of production, performance, exhibit

categorize, classify, define, describe, determine, frame, identify, prioritize, specify

4. Describe what students do in processing data and allied information

calculate, determine, estimate, manipulate, measure, solve, test

5. Describe the ways in which students format data, information, materials

arrange, assemble, collate, organize, sort

6. Describe what students do in explaining a position, creation, set of observations, or a text

articulate, clarify, explicate, illustrate, interpret, outline, translate, elaborate, elucidate

7. Fall under the cognitive activities we group under “analyze”

compare, contrast, differentiate, distinguish, formulate, map, match, equate

8. Describe what students do when they “inquire”

examine, experiment, explore, hypothesize, investigate, research, test

9. Describe what students do when they combine ideas, materials, observations

assimilate, consolidate, merge, connect, integrate, link, synthesize, summarize

10. Describe what students do in various forms of “making”

build, compose, construct, craft, create, design, develop, generate, model, shape, simulate

11. Describe the various ways in which students utilize the materials of learning

apply, carry out, conduct, demonstrate, employ, implement, perform, produce, use

12. Describe various executive functions students perform

operate, administer, control, coordinate, engage, lead, maintain, manage, navigate, optimize, plan

13. Describe forms of deliberative activity in which students engage

argue, challenge, debate, defend, justify, resolve, dispute, advocate, persuade

14. Indicate how students valuate objects, experiences, texts, productions, etc.

audit, appraise, assess, evaluate, judge, rank

15. Reference the types of communication in which we ask students to engage:

report, edit, encode/decode, pantomime (v), map, display, draw/diagram

16. Indicate what students do in groups, related to modes of communication

collaborate, contribute, negotiate, feed back

17. Describe what students do in rethinking or reconstructing

accommodate, adapt, adjust, improve, modify, refine, reflect, review

Rewriting Compounded Outcomes

Often in the process of developing assessment plans, people realize their outcome statements are not quite as clear as they could be. One common discovery is that the outcome statement actually describes more than one outcome. While there is no rule against compounding multiple outcomes into one statement, doing so provides less clarity for students regarding their performance expectations and makes assessing the outcomes more complicated. Therefore, if compounded outcomes actually represent separate behaviors, it may be preferable to either rewrite them or create separate statements to independently represent the desired behaviors.

If a higher-level behavior/skill essentially subsumes the others, the lower-level functions can be dropped from the statement. This is a good revision option if a student would have to demonstrate the foundational skill(s) in order to achieve the higher level of performance. For example, compare the following:

• Identify, evaluate, apply, and correctly reference external resources.

• Use external resources appropriately.

To use external resources appropriately, a student must identify potential resources, evaluate them for relevance and reliability, apply them, and correctly reference them. Therefore, the second statement is more direct and inclusive than the first. One could reasonably argue that the more detailed, step-by-step description communicates more explicitly to students what they must do, but the second statement requires a more holistic integration of the steps, communicating the expectation that the steps will be synthesized into an outcome that is more significant than the sum of its components. Synthesis (especially when it involves evaluation) is a high order of cognitive function.

To identify compound outcomes, look for structures such as items listed in a series and/or coordinating conjunctions. Consider the following two examples, noting the items listed in series and the coordinating conjunctions:

• Integrate concepts drawn from both cellular and organismal biology with explanations of evolutionary adaptation.

• Use scientific reasoning to develop hypotheses and evaluate contradictory explanations of social phenomena.

In the first example above, the behavior called for is singular but requires that the student draw upon two different concepts simultaneously to make sense of a third. This is a fairly sophisticated cognitive behavior involving synthesis. The second example above describes two separate behaviors, both of which involve scientific reasoning. One way to decide whether to split apart such statements is to consider whether both components could be assessed together. In the first example above, assessment not only could be done using a single demonstration of proficiency but probably should be done this way. For the second example, however, assessment would require looking at two different outcomes separately. Therefore, that statement might be better rewritten as two:

• Develop hypotheses based in scientific reasoning.

• Evaluate contradictory explanations of social phenomena through reasoned, scientific analysis.

Designing Assessment Projects

Why Measurement Matters

Assessment projects have two primary purposes:

1. To gauge student progress toward specific outcomes

2. To provide insight regarding ways to facilitate the learning process

Measurement approaches that provide summative success data (such as percentages of students passing an exam or grade distributions) may be fine if your aim is to demonstrate achievement of a certain acceptable threshold of proficiency. However, such measures alone often fail to provide useful insight into what happened along the way that either helped or impeded the learning process. In the absence of such insights, assessment reporting can become a stale routine of fulfilling a responsibility – something to be completed as quickly and painlessly as possible. At its worst, assessment is viewed as 1) an evaluation of faculty teaching that presumes there’s something wrong and it needs to be fixed, and 2) a bothersome process of jumping through hoops to satisfy external demands for accountability. However, when assessment is focused on student learning rather than on instruction, the presumption is not that there’s something wrong with the teaching, but rather that there’s always room for improvement in student achievement of desired learning outcomes.

When faculty embrace assessment as a tool to get information about student learning dynamics, they are more likely to select measurement approaches that yield information about how the students learn, where and why gaps in learning occur, and how students respond to different teaching approaches. Faculty can then apply this information toward helping students make their learning more efficient and effective.

So, if you are already assessing student learning in your class, what is to be gained from putting that assessment through a formal reporting process? For one thing, the purposeful application of classroom assessment techniques with the intention of discovering something new fosters breadth of perspective that can otherwise be difficult to maintain. The process makes faculty more alert to students’ learning dynamics, inspires new instructional approaches, and promotes a commitment to professional growth.

Frustration with assessment processes often stems from misunderstandings about what constitutes an acceptable assessment measure. CNM’s assessment process accommodates diverse and creative approaches. The day-to-day classroom assessment techniques that faculty already use to monitor students’ progress can serve as excellent measurement choices. The alignment of course SLOs to program SLOs makes it feasible for faculty to collaboratively apply their classroom assessment techniques toward the broader assessment of their program, even though they may all be using different measures and assessing different aspects of the program SLO. When faculty collectively strive to more deeply understand the conditions under which students in a program learn best, a broader picture of program dynamics emerges. In such a scenario, opportunities to better facilitate learning outcomes can more easily be identified and implemented, leading to greater program efficiency, effectiveness, and coherence.

Assessment Cycle Plans

The Student Academic Assessment Committee (SAAC) asks program faculty to submit plans every five years demonstrating when and how they will assess their program SLOs over the course of the next five years. For newly approved programs/general education areas/discipline areas, the following must be completed by the 15th of the following October:

• Enter student learning outcome statements (SLOs) in the college’s curriculum management system.

• Develop and submit to SAAC a 5-year assessment cycle plan (using the form provided by SAAC).

At least one outcome should be assessed each year, and all outcomes should be assessed within the 5-year cycle. SAAC requests/strongly recommends that each outcome be assessed for at least two consecutive years. Cycle plans for general education areas should include all courses listed for the discipline within the CNM General Education Core Curriculum. And, assessment within courses having multiple sections should include all sections, whenever feasible.
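As an illustration only, the following Python sketch checks a hypothetical five-year cycle plan against the expectations described above: every program SLO appears somewhere in the plan, at least one SLO is assessed each year, and (as recommended) each SLO is scheduled in two consecutive years. The SLO names and schedule are invented; a real plan would use the program’s own outcomes and years.

```python
# Hypothetical 5-year cycle plan: year in the cycle -> SLOs assessed that year.
plan = {
    1: ["SLO 1"],
    2: ["SLO 1", "SLO 2"],
    3: ["SLO 2", "SLO 3"],
    4: ["SLO 3", "SLO 4"],
    5: ["SLO 4"],
}
program_slos = {"SLO 1", "SLO 2", "SLO 3", "SLO 4"}

scheduled = {slo for slos in plan.values() for slo in slos}
assert scheduled == program_slos, "every program SLO should appear in the 5-year plan"
assert all(plan.get(year) for year in range(1, 6)), "assess at least one SLO each year"

# Recommended (not required): each SLO assessed in two consecutive years.
for slo in sorted(program_slos):
    years = sorted(year for year, slos in plan.items() if slo in slos)
    if not any(later - earlier == 1 for earlier, later in zip(years, years[1:])):
        print(f"Consider scheduling {slo} in two consecutive years.")
```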

All programs/areas are encouraged to assess individual SLOs across a variety of settings, use a variety of assessment techniques (multiple measures), and employ sampling methods as needed to minimize the burden on the faculty. (See Collecting Evidence of Learning.)

Choosing an assessment approach begins with considering what it is the program faculty wants to know. The assessment cycle plan should be designed to collect the information needed to answer the faculty’s questions. Much educational assessment begins with one or more of the following questions:

1. Are students meeting the necessary standards? (Standards-based assessment)

2. How do these students compare to their peers? (Benchmark comparisons)

3. How much is instruction impacting student learning? (Value-added assessment)

4. Are changes making a difference? (Investigative assessment)

5. Is student learning potential being maximized? (Process-oriented assessment)

Determining whether students are meeting standards usually involves summative measures, such as final exams. Standards-based assessment relies on prior establishment of a target level of achievement, based on external performance standards or internal decisions about what constitutes ‘good enough.’ Outcomes are usually interpreted in terms of the percentage of students meeting expected proficiency levels.
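A minimal sketch of this kind of summary, in Python, might look like the following; the exam scores and the 70-point proficiency threshold are hypothetical and stand in for whatever standard the program faculty agrees on.

```python
# Hypothetical final-exam scores and a locally agreed proficiency threshold.
scores = [88, 74, 61, 93, 70, 55, 82, 79]
threshold = 70

met = sum(score >= threshold for score in scores)  # count of students at or above the cutoff
print(f"{met} of {len(scores)} students ({met / len(scores):.0%}) met the standard")
```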

Finding out how students compare to their peers typically involves comparing test or survey outcomes with those of other colleges. Benchmark comparisons, based on normative group data, may be used when available. Less formal comparisons may involve looking at summary data from specific institutions, programs, or groups.

Exploring how an instructional program or course is impacting student learning, often termed ‘value-added assessment,’ is about seeing how much students are learning compared to how much they knew coming in. This approach typically involves either longitudinal or cross-sectional comparisons, using the same measure for both formative and summative assessment. In longitudinal analyses, students are assessed at the beginning and end of an instructional unit or program (given a pre-test and a post-test, for example). The results can then be compared on a student-by-student basis for calculation of mean gains. In cross-sectional analyses, different groups of students are given the same assessment at different points in the educational process. For example, entering and exiting students are asked to fill out the same questionnaire or take the same test. Mean results of the two groups are then compared. Cross-sectional comparisons can save time because the formative and summative assessments can be conducted concurrently. However, they can be complicated by demographic and/or experiential differences between the groups.
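To make the arithmetic concrete, here is a brief Python sketch, using fabricated scores, that computes student-by-student gains and a mean gain for the longitudinal case and then compares group means for the cross-sectional case. The student IDs and score values are placeholders only.

```python
from statistics import mean

# Longitudinal: matched pre- and post-test scores for the same students (fabricated).
pre = {"s01": 52, "s02": 61, "s03": 48, "s04": 70}
post = {"s01": 68, "s02": 75, "s03": 66, "s04": 81}

gains = [post[sid] - pre[sid] for sid in pre]  # student-by-student gains
print(f"mean gain: {mean(gains):.1f} points")

# Cross-sectional: the same instrument given to entering and exiting students.
entering = [50, 55, 47, 62, 58]
exiting = [68, 72, 65, 80, 74]
print(f"entering mean: {mean(entering):.1f}, exiting mean: {mean(exiting):.1f}")
print(f"difference between group means: {mean(exiting) - mean(entering):.1f} points")
```

Note that the longitudinal calculation pairs each student with his or her own baseline, while the cross-sectional calculation compares two different groups, which is why demographic differences between the groups can complicate interpretation.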

Studying the effects of instructional and/or curricular changes usually involves comparing results obtained following a change to those obtained with the same measure but different students prior to the change. This is essentially an experimental approach, though it may be relatively informal. (See the CNM Handbook for Educational Assessment.)

Exploring whether student learning is being maximized involves process-oriented assessment strategies. To delve into the dynamics of the educational process, a variety of measures may be used at formative and/or summative stages. The goal is to gain insights into where and why student learning is successful and where and why it breaks down.

The five assessment questions presented above for program consideration can also be used independently by faculty at the course level. Because it is built upon the alignment of course and program SLOs, as discussed in the following section, the CNM assessment model allows faculty to use different approaches within their own classes based on what they want to know. Putting together findings from varied approaches to conduct a program-level analysis, versus having everyone use the same approach, yields a more complete portrait of learning outcomes.

Alignment of Course and Program SLOs

As illustrated in Figure 6, the process of program assessment revolves around the program’s student learning outcome statements. Although some assessment techniques (such as employer surveys, alumni surveys, and licensing exams) serve as external measures of program SLOs, the internal assessment process often begins with individual instructors working with course SLOs that have been aligned to the program SLOs.

The alignment between course and program SLOs allows flexibility in how course-level assessment is approached. When faculty assess their students’ progress toward their course SLOs, they also assess their students’ progress toward the overarching program SLOs. Therefore, each instructor can use whatever classroom assessment techniques work best for his/her purposes and still contribute relevant assessment findings to a collective body of evidence. If instructors are encouraged to ask their own questions about their students’ learning and conduct their own assessment in relation to their course SLOs, not only will course-related assessment be useful for program reporting, but it will also be relevant to the individual instructor.

Depending upon how the program decides to approach assessment, faculty may be able to use whatever they are already doing to assess their students’ learning. Or, they might even consider using the assessment process as an opportunity to conduct action research. Academic freedom in assessment at the course level can be promoted by putting in place clearly articulated program-level outcomes and defining criteria associated with those outcomes. Descriptive rubrics (whether holistic or analytic), rating scales, and check lists can be useful tools for clarifying what learning achievement looks like at the program level. Once a central reference is in place, faculty can more easily relate their course-level findings to the program-level expectations.

Figure 6: The CNM Assessment Process

[pic]

Clear alignment between course and program SLOs, therefore, is prerequisite for using in-class assessment to support program assessment. Questions of assessment aside, most people would agree that having every course SLO (in courses comprising a program) contribute in some way to students’ development of at least one program SLO contributes to the overall integrity and coherence of the program. To map (a.k.a. crosswalk) course SLOs to program SLOs, consider using a matrix such as the one below. The notations in the cells where the rows and columns intersect represent existence of alignment.

Table 6: A Model for SLO Mapping

| |Program SLO #1 |Program SLO #2 |Program SLO #3 |

|Course SLO #1 | |Mastered | |

|Course SLO #2 |Reinforced | | |

|Course SLO #3 | |Introduced |Reinforced |

|Course SLO #4 | | |Mastered |
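A map like the one above can also be kept in a simple machine-readable form so that coverage gaps are easy to spot. The Python sketch below uses invented course and program SLO labels; it flags any program SLO with no aligned course SLO and any course SLO that supports no program SLO.

```python
# Hypothetical curriculum map: (course SLO, program SLO) -> level of emphasis.
curriculum_map = {
    ("Course SLO 1", "Program SLO 2"): "Mastered",
    ("Course SLO 2", "Program SLO 1"): "Reinforced",
    ("Course SLO 3", "Program SLO 2"): "Introduced",
    ("Course SLO 3", "Program SLO 3"): "Reinforced",
    ("Course SLO 4", "Program SLO 3"): "Mastered",
}
program_slos = {"Program SLO 1", "Program SLO 2", "Program SLO 3"}
course_slos = {"Course SLO 1", "Course SLO 2", "Course SLO 3", "Course SLO 4"}

# Which outcomes are touched by at least one alignment?
covered_program = {p for (_, p) in curriculum_map}
covered_course = {c for (c, _) in curriculum_map}

print("Program SLOs with no aligned course SLO:", sorted(program_slos - covered_program) or "none")
print("Course SLOs supporting no program SLO:", sorted(course_slos - covered_course) or "none")
```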

The more clearly defined (and agreed upon by program faculty) the program SLOs are, the easier it is to align course SLOs to them and the more useful the pooling of assessment findings will be. For this reason, schools are encouraged to carefully analyze their program SLOs and consider whether any of them consist of several component skills (often referred to as elements). If so, identification of course SLO alignments (and ultimately the entire assessment process) will be easier if each of the component skills is identified and separately described. Consider developing a rubric, normed rating scale, or checklist for each such program SLO, clearly representing and/or describing what achievement of the outcome looks like.

For example, the SLO “Demonstrate innovative thinking in the transformation of ideas into forms” contains two component skills: innovative thinking and transforming ideas. Each of these elements needs to be described separately before the broader learning outcome can be clearly understood and holistically assessed. Some course SLOs may align more directly to development of innovative thinking while others may align more directly to the transformation of ideas into forms.

The advantage of using a descriptive rubric for this sort of SLO deconstruction is that you can define several progressive levels of achievement (formative stages) and clearly state what the student behavior looks like at each level. Faculty can then draw connections between their own course SLOs (and assessment findings) and the stages of development associated with the program SLO.

Following is a sample rubric, adapted from the AAC&U Creative Thinking VALUE Rubric. See the section Using Descriptive Rubrics to Make Assessment Coherent for more information on developing and getting the most out of rubrics.

|Table 7: Sample Rubric for Innovative Thinking in the Transformation of Ideas into Art Forms |

| |Advanced (3) |Intermediate (2) |Beginning (1) |

|Innovative Thinking |Extends a novel or unique idea, question, format, or product to create new knowledge or a new application of knowledge. |Creates a novel or unique idea, question, format, or product. |Reformulates available ideas. |

|Transforming Ideas |Transforms ideas or solutions into entirely new forms. |Connects ideas in novel ways that create a coherent whole. |Recognizes existing connections among ideas or solutions. |

Developing an Assessment Focus at the Course Level

Assessment becomes meaningful when it meets a relevant need, for example when it:

• Starts with a question you care about.

• Can confirm or disprove what you think.

• Can shed light on something you want to better understand.

• Can reveal whether one method works better than another.

• Can be of consequence to your future plans or those of your colleagues.

• Has implications for the future of the profession and/or broader society.

Before you can formulate a coherent course-level approach to assessment, it is necessary to connect your broad teaching goals with specific, assessable activities. Begin by asking yourself what you and your students do to frame the learning that leads to the desired outcome. Sometimes identifying specific activities that directly contribute to the outcome can be a challenge, but doing so is important for assessment to proceed.

Once you have connected specific activities with the outcome, decide what you want to know about your students’ learning. What is your goal in conducting the assessment? What are you curious about? See the five assessment questions listed in the Assessment Cycle Plans section above for ideas you can apply here as well.

A natural inclination is to focus assessment on problem areas. However, it is often more productive to focus on what you think is working than to seek to confirm what is not working. Here are some reasons why this is so:

1. Exploring the dynamics behind effective processes may offer insights that have application to problem areas. Stated another way: understanding conditions under which students learn best can help you identify obstacles to student learning and suggest ideas for how these can be addressed (either within or outside of the program).

2. Students who participate in assessment that confirms their achievements gain awareness of what they are doing effectively, which helps them develop generalizable success strategies. This benefit is enhanced when faculty discuss the assessment process and its outcomes with students.

3. Exploring successes reinforces instructor motivation and makes the assessment process more encouraging, rather than discouraging.

4. The process of gathering evidence of success and demonstrating the application of insights gained promotes your professional development and supports your program in meeting public accountability expectations.

5. Sharing discoveries regarding successful practices could contribute significantly to your professional field.

However, an important distinction exists between assessing student learning and assessing courses. Also, encouragement to explore successful practices should not be misconstrued as encouragement to use the assessment reporting process to defend one’s effectiveness as an instructor. To be useful, assessment needs to do more than just confirm that a majority of students at the end of a course can demonstrate a particular learning outcome. While this may be evidence of success, it reveals little about what contributed to the success. Assessment that explores successful practices needs to delve into what is working, how and why it is working, whom it works best for, when it works best, and under what conditions.

Here are some questions that might help generate some ideas:

• Which of the course SLOs related to the program SLO(s) scheduled for assessment is most interesting or relevant to you?

• Is there anything you do that you think contributes especially effectively to development of the course outcome?

• Have you recently tried anything new that you might want to assess?

• Have students commented on particular aspects of the course?

• Have students’ course evaluations pointed to any particular strengths?

• Are there any external influences (such as industry standards, employer feedback, etc.) that point to strategies of importance?

Again, please see the Assessment Cycle Plans section for more information on formulating assessment questions and identifying appropriate assessment approaches. The information there is applicable at the course level as well.

Planning an Assessment Approach at the Course Level

How can you best measure students’ achievement toward the specific outcome AND gain insights that will help you improve the learning process? The choice of an appropriate measurement technique is highly context specific. However, if you are willing to consider a variety of options and you understand the advantages and disadvantages of each, you will be well prepared to make the selection.

Please keep the following in mind:

• Assessment does not have to be connected with grading. While grading is a form of assessment, there is no need to limit assessment to activities resulting in grades. Sometimes, removal from the grading process can facilitate collection of much more revealing and interesting evidence.

• Assessment does not have to involve every student equally. Sampling can make an assessment manageable that would otherwise be unreasonably burdensome to carry out. For example, you may not have time to interview every student in a class, but interviewing a random sample of students may yield essentially the same insights in far less time. Likewise, if you have an assignment you grade using course criteria and want to apply program-level criteria to it, you may be able to score a sample rather than re-evaluate every student’s work (see the sampling sketch after this list).

• Assessment does not have to be summative. Summative assessment occurs at the end of the learning process to determine retrospectively how much, and how well, students learned. Formative assessment, on the other hand, occurs during the developmental phases of learning. Of the two, formative assessment often provides the greater opportunity for insight into students’ learning dynamics. It also enables you to identify gaps in learning along the way, while there is still time to intervene, rather than at the end, when it is too late. Consider using both formative and summative assessment.

• It is not necessary that all program faculty use a common assessment approach. A diverse array of assessment approaches can be conducted concurrently by different faculty teaching a wide range of courses within a program and assessing the same outcome. The key is to have group agreement regarding how the outcome manifests, i.e., what it looks like when students have achieved the learning outcome and what criteria are used for its assessment. This can be accomplished with a very precisely worded SLO, a list of SLO component skills, descriptive rubrics (see Using Descriptive Rubrics to Make Assessment Coherent below), descriptions from industry standards, normed rating scales, checklists, etc. Once a shared vision of the outcome has been established, all means of assessment, no matter how diverse, will address the same end.

• Assessment does not have to meet the standards of publishable research. Unless you hope to publish your research in a peer-reviewed journal, your classroom assessment need not be flawless to be useful. In this regard, assessment can be an opportunity for ‘action research.’

• Some assessment approaches may need IRB approval. See IRB and Classroom Research regarding research involving human subjects. And, for further information, see the CNM Handbook for Educational Research.

• Assessment interpretation does not need to be limited to the planned assessment techniques. It is reasonable to bring all pertinent information to bear when trying to understand strengths and gaps in student learning. Remember, the whole point of assessment is to gain useful insights.

• Assessment does not have to be an add-on. Most of the instructional approaches faculty use in their day-to-day teaching lend themselves to assessment of program outcomes. Often, turning an instructional approach into a valuable source of program-level assessment information is just a matter of documenting the findings.
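
As a minimal illustration of the sampling point above, the following Python sketch draws a reproducible random sample of previously graded submissions for program-level re-scoring. The submission identifiers, sample size, and seed are all hypothetical choices, not prescribed values.

```python
import random

# Hypothetical roster of submission IDs for an assignment already graded
# with course-level criteria.
submissions = [f"student_{n:03d}" for n in range(1, 61)]  # 60 submissions

# Draw a reproducible random sample to re-score against program-level criteria.
random.seed(42)          # fixed seed so colleagues can reproduce the draw
sample_size = 15         # adjust to what reviewers can realistically score
sample = random.sample(submissions, sample_size)

print(f"Re-score these {sample_size} of {len(submissions)} submissions:")
for sid in sorted(sample):
    print(" ", sid)
```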

It may be helpful to think of assessment techniques within the five broad categories identified in Table 8.

|Table 8: Common Assessment Measures |

|Written Tests |

|Misconception checks |Pre- or post-tests |Quizzes |

|Preconception checks |Pop quizzes |Standardized exams |

|Document/Artifact Analysis |

|Artwork |Homework |Publications |

|Displays |Journals |Research |

|Electronic presentations |Portfolios |Videos |

|Exhibitions |Projects |Writing assignments |

|Process Observations |

|Auditions |Experiments |Simulations |

|Classroom circulation |Field notes |Speeches |

|Demonstrations |Performances |Tick marking |

|Enactments |Process checklists |Trial runs |

|Interviews |

|Calling on students |Formal interviews |Oral exams |

|Case studies |In-class discussions |Out-of-class discussions |

|Focus groups |Informal interviews |Study-group discussions |

|Surveys |

|Alumni surveys |Employer surveys |Show of hands |

|Clicker questions |Feedback requests |Student surveys |

The lists of techniques above are by no means comprehensive; plug in your own classroom assessment techniques wherever they seem to fit best. The purpose of categorizing techniques this way is to demonstrate not only some common characteristics but also the variety of options available. Interviews and surveys, for example, are legitimate assessment techniques; you need not limit yourself to one paradigm. Consider techniques you may not have previously thought sufficiently valid, and you may begin to see that much of what you are already doing can be used as is, or adapted, for program-level assessment. You may also begin to have more fun with assessment and find the process more relevant to your professional interests.

Some concepts to help you select a measure that will yield the type of evidence you want are presented in the following table. Each pair (left and right) represents a continuum upon which assessment measures can be positioned, depending upon the context in which they are used.

|Table 9: Descriptors Related to Assessment Measures |

|Direct |Student products or performances that demonstrate specific learning has taken place (WCU, 2009) |Indirect |Implications that student learning has taken place (may be in the form of student feedback or third-party input) |

|Objective |Not influenced by personal perceptions; impartial, unbiased |Subjective |Based on or influenced by personal perceptions |

|Quantitative |Expressible in terms of quantity; directly, numerically measurable |Qualitative |Expressible in terms of quality; how good something is |

|Empirical |Based on experience or experiment |Anecdotal |Based on accounts of particular incidents |

Note that objectivity and subjectivity can apply to the type of evidence collected or the interpretation of the evidence. Empirical evidence tends to be more objective than anecdotal evidence. And, interpretation of outcomes with clearly identifiable criteria tends to be more objective than interpretation requiring judgment.

Written tests with multiple-choice, true-false, matching, single-response short-answer/fill-in-the-blank, and/or mathematical-calculation questions are typically direct measures and tend to yield objective, quantitative data. Responses to objective test questions are either right or wrong; therefore, objective test questions remain an assessment staple because the evidence they provide is generally viewed as scientifically valid.

On the other hand, assessing short-answer and essay questions is more accurately described as a form of document analysis. When document/artifact analyses and process observations are conducted using objective standards (such as checklists or well-defined rubrics), these methods can also yield relatively direct, quasi-quantitative evidence. However, the more observations and analyses are filtered through the subjective lens of personal or professional judgment, the more qualitative the evidence. For example, consider the rating of performances by panels of judges. If trained judges are looking for specific criteria that either are or are not present (as with a checklist), the assessment is fairly objective. But, if the judges evaluate the performance based on knowledge of how far each individual performer has progressed, aesthetic impressions, or other qualitative criteria, the assessment is more subjective.

Objectivity is highly prized in U.S. culture. Nonetheless, some subject matter requires a degree of subjectivity for assessment to hit the mark. A work of art, a musical composition, a poem, a short story, or a theatrical performance that contains all the requisite components but shows no creativity, grace, or finesse and fails to make an emotional or aesthetic impression does not demonstrate the same level of achievement as one that creates an impressive gestalt. Not everything worth measuring is objective and quantitative.

Interviews and surveys typically yield indirect, subjective, qualitative, anecdotal evidence and can nonetheless be extremely useful. Soliciting feedback from students, field supervisors, employers, etc., can provide insights into student learning processes and outcomes that are otherwise inaccessible to instructors.

Note that qualitative information is often translated to numerical form for ease of analysis and interpretation. This does not make it quantitative. A common example is the use of Likert scales (named after their originator, Rensis Likert), which typically ask respondents to indicate evaluation or agreement by rating items on a scale. Typically, each rating is associated with a number, but the numbers are symbols of qualitative categories, not direct measurements.

|Table 10: Sample Likert-Scale Items |

| |Poor (1) |Fair (2) |Good (3) |Great (4) |

|Please rate the clarity of this handbook: |☐ |☐ |☐ |☐ |

| |Disagree (1) |Somewhat Disagree (2) |Somewhat Agree (3) |Agree (4) |

|Assessment is fun! |☐ |☐ |☐ |☐ |

In contrast, direct numerical measures such as salary, GPA, age, height, weight, and percentage of correct responses yield quantitative data.
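
Because Likert ratings are ordered categories rather than true measurements, it is usually safer to summarize them with counts, percentages, and the median rather than an arithmetic mean. The following Python sketch uses invented responses to the “Assessment is fun!” item above as an example.

```python
from collections import Counter
from statistics import median

# Invented responses to a four-point agreement item (1 = Disagree ... 4 = Agree).
labels = {1: "Disagree", 2: "Somewhat Disagree", 3: "Somewhat Agree", 4: "Agree"}
responses = [4, 3, 3, 2, 4, 4, 3, 1, 3, 4, 2, 3]

counts = Counter(responses)
total = len(responses)

# Report the distribution as counts and percentages.
for value in sorted(labels):
    n = counts.get(value, 0)
    print(f"{labels[value]:<18} {n:>2}  ({n / total:.0%})")

# The median preserves the ordinal nature of the scale.
med = median(responses)
print("Median response:", labels.get(med, med))
```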

Using Descriptive Rubrics to Make Assessment Coherent

Descriptive rubrics are tools for making evaluation that is inherently subjective more objective. Descriptive rubrics are scoring matrices that differ from rating scales in that they provide specific descriptions of what the outcome looks like at different stages of sophistication. They serve three primary purposes:

1. To communicate learning outcome expectations (to students and/or to faculty).

2. To facilitate fairness and consistency in an instructor’s evaluation of multiple students’ work.

3. To facilitate consistency in ratings among multiple evaluators.

Used in class, descriptive rubrics can help students better understand what the instructor is looking for and better understand the feedback received from the instructor. They can be connected with individual assignments, course SLOs, and ad hoc assessment activities. Used at the program level, rubrics can help program faculty relate their course-level assessment findings to program-level learning outcomes.

Rubrics are particularly useful in qualitative assessment of student work. When grading large numbers of assignments, even the fairest of instructors can be prone to grading fatigue and/or the tendency to subtly shift expectations based on cumulative impressions of the group’s performance capabilities. Using a rubric provides a framework for reference that keeps scoring more consistent.

Additionally, instructors who are all using the same assessment approach in different sections of a course can use a rubric to ensure that they are all assessing outcomes with essentially the same level of rigor. However, to be effective as norming instruments, rubrics need to have distinct, unambiguously defined performance levels.
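
One informal way to check whether a shared rubric is being applied with similar rigor is to have two raters score the same small set of artifacts and compare their ratings. The Python sketch below computes exact and within-one-level agreement from invented scores; programs wanting a more formal statistic might look at measures such as Cohen’s kappa.

```python
# Invented rubric scores (1 = Beginning, 2 = Intermediate, 3 = Advanced)
# assigned by two raters to the same ten student artifacts.
rater_a = [3, 2, 2, 1, 3, 2, 1, 2, 3, 2]
rater_b = [3, 2, 1, 1, 3, 3, 1, 2, 2, 2]

pairs = list(zip(rater_a, rater_b))
exact = sum(a == b for a, b in pairs) / len(pairs)
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)

print(f"Exact agreement:  {exact:.0%}")
print(f"Within one level: {adjacent:.0%}")
# Low exact agreement suggests the level descriptions need clearer, more
# distinct wording or a norming session before scores are pooled.
```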

A rubric used to assess multiple competencies can be referred to as non-linear, whereas a rubric used to assess several criteria related to a single, broad competency can be described as linear (Tomei, p. 2). Linear rubrics are perhaps most relevant to the program assessment process. A linear rubric that describes progressive levels of achievement within several elements of a broad program SLO can serve as a unifying and norming instrument across the entire program. This is an important point: such a rubric can be used for making sense of disparate findings from a wide variety of assessments carried out in a wide variety of courses – so long as all are related to the same program SLO. Each instructor can reference some portion of a shared rubric in relating his or her classroom assessment findings to the program SLO. It is not necessary that every assessment address every element of the rubric.

The Venn diagram presented in Figure 7 illustrates how a rubric can serve as a unifying tool in program assessment that involves a variety of course assessment techniques. To make the most of this model, faculty need to come together at the end of an assessment period, pool their findings, and collaboratively interpret the totality of the information. Collectively, the faculty can piece the findings together, like a jigsaw puzzle, to get a broader picture of the program’s learning dynamics in relation to the common SLO.

For example, students in entry-level courses might be expected to demonstrate beginning-level skills, especially during formative assessment. However, if students in a capstone course are still demonstrating beginning-level skills, then the faculty, alerted to this, can collectively explore the cross-curricular learning processes and identify strategies for intervention. The information gleaned through the various assessment techniques, since it all relates to the same outcome, provides clues. And, additional anecdotes may help fill in any gaps. As previously noted, assessment interpretation does not need to be limited to the planned assessment techniques.
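
A simple tally of rubric levels by course stage can make such cross-curricular patterns easier to see when findings are pooled. The Python sketch below uses entirely invented counts and hypothetical course labels; it only illustrates the kind of summary a program might assemble before its interpretive discussion.

```python
# Invented pooled findings: counts of students scored at each rubric level,
# grouped by course stage. Course labels are hypothetical.
pooled = {
    "Entry-level (COURSE 1010)": {"Beginning": 18, "Intermediate": 6,  "Advanced": 1},
    "Mid-program (COURSE 2020)": {"Beginning": 9,  "Intermediate": 14, "Advanced": 4},
    "Capstone (COURSE 2990)":    {"Beginning": 5,  "Intermediate": 10, "Advanced": 8},
}

for course, counts in pooled.items():
    total = sum(counts.values())
    summary = ", ".join(f"{level}: {n / total:.0%}" for level, n in counts.items())
    print(f"{course:<28} {summary}")

# A persistent share of capstone students at the Beginning level would be a
# flag for faculty to examine the cross-curricular learning process together.
```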

Rubric Design

To be effective, descriptive rubrics need to focus on competencies, not tasks, and they need to validly describe demonstration of competency at various levels.

Rubric design is highly flexible. You can include as many levels of performance as you want (most people use three to five) and use whatever performance descriptors you like. To give you some ideas, below are some possible descriptor sets (a brief sketch of storing a shared rubric in reusable form follows the list):

• Beginner, Amateur, Professional

• Beginning, Emerging, Developing, Proficient, Exemplary

• Below Expectations, Satisfactory, Exemplary

• Benchmark, Milestone, Capstone

• Needs Improvement, Acceptable, Exceeds Expectations

• Needs Work, Acceptable, Excellent

• Neophyte, Learner, Artist

• Novice, Apprentice, Expert

• Novice, Intermediate, Advanced

• Rudimentary, Skilled, Accomplished

• Undeveloped, Developing, Developed, Advanced
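
If it is useful to keep a shared rubric in a form that can be reused across sections (for example, to print score sheets or tally results), it can be stored as plain data. The Python sketch below is hypothetical; it reuses the two elements and three levels from Table 7.

```python
# Hypothetical descriptive rubric stored as data: each criterion maps
# ordered score levels to behavior descriptions (wording adapted from Table 7).
rubric = {
    "Innovative Thinking": {
        1: "Reformulates available ideas.",
        2: "Creates a novel or unique idea, question, format, or product.",
        3: "Extends a novel or unique idea to create new knowledge or a new application of knowledge.",
    },
    "Transforming Ideas": {
        1: "Recognizes existing connections among ideas or solutions.",
        2: "Connects ideas in novel ways that create a coherent whole.",
        3: "Transforms ideas or solutions into entirely new forms.",
    },
}

level_names = {1: "Beginning", 2: "Intermediate", 3: "Advanced"}

# Print a score sheet an instructor could mark by hand.
for criterion, levels in rubric.items():
    print(criterion)
    for score in sorted(levels):
        print(f"  [{score}] {level_names[score]:<12} {levels[score]}")
    print()
```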

|Table 11: Sample Rubric for Assessment of Decision-Making Skill |

| |Novice (0) |

Glossary

|Area |Usually refers to disciplines within General Education |

|AT |The School of Applied Technologies |

|BIT |The School of Business Information and Technology |

|Case Studies |Anecdotal reports prepared by trained observers. When numerous case studies are analyzed together, the findings are often considered empirical evidence. |

|CHSS |The School of Communication, Humanities & Social Sciences |

|Construct |A theoretical entity or concept such as aptitude, creativity, engagement, intelligence, interest, or motivation. Constructs cannot be directly measured, so we assess indicators of their existence. For example, IQ tests assess prior learning as an indicator of intelligence. |

|Construct Validity |The ability of an assessment approach to actually represent the construct it purports to address. Since constructs cannot be directly measured, construct validity depends upon how much people can agree upon a definition of the construct and how truly the indicators represent that definition. |

|Core Competencies |A term formerly used for a group of student learning outcomes assessed by all degree programs. When the core competencies were dropped from the assessment process, they were more-or-less replaced by the embedded outcomes of critical thinking and life skills. |

|Correlation |A process of quantifying a relationship between two variables in which an increase in one is associated with an increase in the other (a positive correlation) or a decrease in the other (a negative correlation). Correlations do not necessarily demonstrate causal relationships between variables. For example, preparedness for college-level work and number of visits to hospital emergency rooms may be positively correlated, but it would be a logical fallacy to conclude that one causes the other. |

|Criterion-Referenced |Looks at individual student performance in reference to specific performance criteria |

|Cross-Sectional Analysis |Studies comparing, at a single point in time, people representing different stages of development, for the purpose of drawing inferences about how people change/develop over time. E.g., comparing outcomes of students completing a program to those of students just entering the program. Cross-sectional analyses assume that the less advanced group is essentially the same as the more advanced group was at that earlier stage. |

|Dean's Council |A weekly meeting of deans and other members of academic leadership. Once a year, SAAC gives an annual report to this group. |

|Discipline Area |Usually refers to a non-degree, non-certificate, non-Gen-Ed subject (mostly housed within SAGE) |

|Direct Assessment |Looks at products of student work (work output) that demonstrate learning |

|Embedded Outcomes |The core competencies of critical thinking and life skills. Only degree and certificate programs are charged with demonstrating how these outcomes are embedded within their program SLOs. |

|Empirical Evidence |Usually refers to what has been systematically observed. May or may not be obtained through scientific experimentation. |

|Exit Competencies |A term formerly used at CNM for program SLOs |

|External Assessment |May be conducted outside of CNM (e.g., licensing exams) or outside of the course or program (e.g., next-level assessment) |

|Formative Assessment |Assessment taking place during a developmental phase of learning |

|HLC |Higher Learning Commission, the accrediting body for CNM |

|HWPS |The School of Health, Wellness & Public Safety |

|Indirect Assessment |Looks at indicators of learning other than the students’ work output (e.g., surveys or interviews) |

|Inter-Rater Reliability |Refers to consistency in assessment ratings among different evaluators. For example, if five judges score a performance with nearly identical ratings, the inter-rater reliability is said to be high. |

|Ipsative Assessment |Comparisons within an individual’s responses or performance. E.g., changes over time or different preferences revealed through forced-choice responses. |

|Longitudinal Analysis |A comparison of outcomes over time within a given group, based on changes at an individual level (versus differences in group averages). E.g., pre-test and post-test scores |

|Meta-Assessment Matrix |A method developed by SAAC to track a wide variety of assessment practices for evidence of how the college is performing as a whole. |

|MSE |The School of Math, Science and Engineering. |

|Next-Level Outcomes |Performance in more advanced courses, employment, internship, transfer, etc., for which a course or group of courses prepares students. |

|Norm-Referenced Assessment |Analysis of individual or group performance to estimate position relative to a normative (usually peer) group. |

|Objective |Not influenced by personal perceptions; impartial, unbiased. |

|Program |Usually refers to a degree (AA, AS, or AAS) or certificate discipline (but may occasionally be used more broadly to include areas and disciplines). |

|Program Assessment |An annual process of examining student learning dynamics for strengths and gaps, interpreting the implications of findings, and developing plans to improve outcomes. |

|Program Review |An annual administrative study of programs, based on quality indicators such as enrollment and completion rates, to determine the viability of program continuation. |

|Qualitative Evidence |Evidence that cannot be directly measured, such as people’s perceptions, valuations, opinions, effort levels, behaviors, etc. Many people mistakenly refer to qualitative evidence that has been assigned numerical representations (e.g., Likert-scale ratings) as quantitative data. The difference is that qualitative evidence is inherently non-numerical. |

|Quantitative Evidence |Evidence that occurs in traditionally established, measurable units, such as counts of students, grade point averages, percentages of questions correctly answered, speed of task completion, etc. |

|Reliability |Consistency in outcomes over time or under differing circumstances. For example, if a student challenges a placement exam repeatedly without doing any study in between and keeps getting essentially the same result, the test has a high level of reliability. |

|SAAC |Student Academic Assessment Committee – a faculty-driven team that facilitates CNM program assessment. |

|SAAC Annual Report to Deans Council |SAAC provides an annual report to the Deans Council on the state of assessment at CNM. This report includes a meta-assessment rubric to show the progress of programs toward comprehensive implementation of assessment procedures. |

|SAAC Report |This usually refers to the annual assessment report completed by each program, Gen Ed area, and discipline. The report is usually prepared by the program chair/director and submitted to the school's SAAC representatives. Loosely, the "SAAC Report" can also refer to the SAAC Annual Report to Deans Council. |

|SAGE |The School of Adult & General Education. |

|SLO |Student Learning Outcome (competency) – This term carries a dual meaning, as it can refer to a student learning outcome statement or the outcome itself. |

|Subjective |Based on or influenced by personal perceptions. |

|Summative Assessment |Assessment that takes place at the end of the learning process. |

|Validity |The ability of an assessment approach to measure what it is intended to measure. An aptitude test in which cultural expectations or socio-economic level significantly influence outcomes may have low validity. |

|Value-Added Assessment |Assessment that looks for confirmation of gains, presumably attributable to an educational (or other) intervention, typically using longitudinal or cross-sectional analyses. |

References

Adams County School District. Marzano’s taxonomy: Useful verbs [PDF document]. Adams County School District 50 Competency-Based System Wiki. Retrieved from .

Association of American Colleges and Universities. VALUE rubrics [PDF document]. Retrieved from

Anderson, L. W., & Krathwohl, D. R. (2000). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. London, England: Longman.

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. New York, NY: McKay.

Higher Learning Commission. Criteria for Accreditation. Retrieved from

Higher Learning Commission (2012). Federal compliance requirements for institutions [web publication]. Retrieved from

Institutional Research Office. (2006). Action verbs for creating learning outcomes [Word document]. Oregon State University. Retrieved from .

Marzano, R. J., & Kendall, J. S. (2008). Designing & assessing educational objectives: Applying the new taxonomy. Thousand Oaks, CA: Corwin Press.

NCTM. (July 2013). Formative assessment: A position of the National Council of Teachers of Mathematics [PDF document]. NCTM web site. Retrieved from

SUNY Council on Assessment. (2014). SCOA institutional effectiveness rubric [PDF document]. State University of New York. Retrieved from

Teaching, Learning and Assessment Center. (2009). Assessment terms and definitions. “Assessment Brown Bag.” West Chester University. Retrieved from

Te@chthought. (2013). 6 alternatives to Bloom’s taxonomy for teachers [Blog]. Retrieved from

Tomei, L. J. Designing effective standards/competencies-aligned rubrics [PDF document]. LaGrange, IL: LiveText. Retrieved from

U.S. Department of Education (2009). 34 CFR part 602: The Secretary’s recognition of accrediting agencies. EDGAR [Word document]. Retrieved from

Webb, N. L. (2002). Depth-of-knowledge levels for four content areas [PDF document]. Wisconsin Center for Education Research. Retrieved from .

Appendix 1: CNM Assessment Cycle Plan Form

[pic]



Appendix 2: Guide for Completion of Cycle Plan

[pic]

[pic]



Appendix 3: Opt-In Cycle Plan Feedback Rubric

[pic]



Appendix 4: CNM Annual Assessment Report Form

(This form will be revised in Fall 2016.)

Appendix 5: Guide for Completion of Assessment Report

[pic][pic]

(This guide will be revised in Fall 2016.)

Appendix 6: Opt-In Assessment Report Feedback Rubric

[pic]

(This rubric will be revised in Fall 2016.)

Appendix 7: Rubric for Assessing Learning Outcomes Assessment Processes

Each criterion is rated on a 0–5 scale, where 1 = The Basics, 3 = For Additional Effectiveness, and 5 = For Greatest Effectiveness.

Identification of Student Learning Outcomes

• The Basics: The Student Learning Outcomes (SLOs) reflect the institution’s mission and core values and are directly aligned to any overarching goals or regulations.

• For Additional Effectiveness: The SLOs represent knowledge and/or capacities that are relevant to the field and meaningful to the involved faculty.

• For Greatest Effectiveness: The SLOs are consequential to the field and/or to society as a whole.

Articulation of SLOs

• The Basics: The SLOs are clearly stated, realistic, and measurable descriptions of learning expectations.

• For Additional Effectiveness: The SLOs are designed to clarify for students how they will be expected to demonstrate learning.

• For Greatest Effectiveness: The SLOs focus on the development of higher-level skills. (See Benjamin Bloom’s taxonomies of the cognitive, affective, and psychomotor domains.)

Assessment Processes

• The Basics: Assessments are created, implemented, and interpreted by the discipline/program faculty.

• For Additional Effectiveness: The assessment process is an integral and prominent component of the teaching/learning process rather than a final adjunct to it.

• For Greatest Effectiveness: The assessment process is a closed loop in which valid and reliable measures of student learning inform strategic changes.

Relevance of Assessment Measures

• The Basics: Assessment measures are authentic, arising from actual assignments and learning experiences and reflecting what the faculty think is worth learning, not just what is easy to measure.

• For Additional Effectiveness: Assessment measures sample student learning at formative and summative stages, and results are weighted and/or interpreted accordingly (e.g., low-weighted formative assessments used to provide early feedback/intervention).

• For Greatest Effectiveness: Assessment measures promote insight into conditions under which students learn best.

Alignment of Assessment Measures

• The Basics: Assessment measures focus on experiences that lead to the identified SLOs.

• For Additional Effectiveness: Assessment measures reflect progression in complexity and demands as students advance through courses/programs.

• For Greatest Effectiveness: Assessment measures are flexible, allowing for adaptation to specific learning experiences, while maintaining continuity with identified SLOs.

Validity of Assessment Measures

• The Basics: Assessment measures focus on student learning, not course outcome statistics, and provide evidence upon which to evaluate students’ progress toward identified SLOs.

• For Additional Effectiveness: Direct and/or indirect, qualitative and/or quantitative, formative and/or summative measures are carefully selected based upon the nature of the learning being assessed and the ability of the measure to support valid inferences about student learning.

• For Greatest Effectiveness: Multiple measures are used to assess multidimensional, integrated learning processes, as revealed in performance over time.

Reliability of Assessment Measures

• The Basics: Assessment measures used across multiple courses or course sections are normed for consistent scoring and/or interpretation.

• For Additional Effectiveness: Assessment measures are regularly examined for biases that may disadvantage particular student groups.

• For Greatest Effectiveness: When assessment measures are modified/improved, historical comparisons are cautiously interpreted.

Reporting of Assessment Findings

• The Basics: Reporting of assessment results honors the privacy and dignity of all involved.

• For Additional Effectiveness: Reporting provides a thorough, accurate description of the assessment measures implemented and the results obtained.

• For Greatest Effectiveness: Reporting includes reflection upon the effectiveness of the assessment measures in obtaining the desired information.

Interpretation of Assessment Findings

• The Basics: Interpretation of assessment findings acknowledges limitations and possible misinterpretations.

• For Additional Effectiveness: Interpretation focuses on actionable findings that are representative of the teaching and learning processes.

• For Greatest Effectiveness: Interpretation draws inferences, applies informed judgment as to the meaning and utility of the evidence, and identifies implications for improvement.

Action Planning Based on Assessment

• The Basics: Assessment findings and interpretation are applied toward development of a written action plan.

• For Additional Effectiveness: The action plan proposes specific steps to improve student learning and may also identify future assessment approaches.

• For Greatest Effectiveness: The action plan includes a critical analysis of the obstacles to student learning and seeks genuine solutions, whether curricular or co-curricular in nature.



Appendix 8: New Mexico Higher Education Department General Education Core Course Transfer Module Competencies

The following seven tables present the competencies required for courses to be included in the corresponding areas of the New Mexico General Education Core Course Transfer Module.



-----------------------

Central New Mexico Community College

Ursula Waln, Senior Director of Outcomes and Assessment

2016-18 Edition

Assessment You Can Use!

CNM HANDBOOK FOR

OUTCOMES ASSESSMENT

Periods at the Ends?

As a rule of thumb, periods are used at the ends of items in numbered or bulleted lists if each item forms a complete sentence when joined to the introductory statement. SLO statements fit this description.

Figure 3: Bloom’s Taxonomy of the Cognitive Domain

ORIGINAL REVISED

[pic][pic]

Figure 4: Bloom’s Taxonomies of the Affective and Psychomotor Domains

AFFECTIVE PSYCHOMOTOR

[pic][pic]

Assessing for Assessment’s Sake?

Assessment is a tool, not an end in itself. And, like any tool, its purpose is to help you accomplish something.

It is much more fun to produce something worthwhile than to demonstrate mastery of a tool.

Connecting Assessment with Specific Activities

Imagine a program is assessing the SLO “Analyze and evaluate texts written in different literary and non-literary genres, historical periods, and contexts.” The instructor’s closely related course SLO is “Interpret and analyze diverse and unfamiliar texts in thoughtful, well-informed, and original ways.” Having noticed during discussions following written analyses that some students alter their views upon hearing other students’ interpretations, the instructor decides to assess the intentional use of guided class discussion to develop interpretive and analytic skills. To assess the impact, she will have students revise and resubmit their papers following the guided discussion. Then, she will compare performance on the papers pre- and post-discussion, using a rubric, and look at individual gains.

Figure 7: Using Rubrics to Pool Findings from Diverse Assessment Approaches

NCTM Position

Through formative assessment, students develop a clear understanding of learning targets and receive feedback that helps them to improve. In addition, by applying formative strategies such as asking strategic questions, providing students with immediate feedback, and engaging students in self-reflection, teachers receive evidence of students’ reasoning and misconceptions to use in adjusting instruction. By receiving formative feedback, students learn how to assess themselves and how to improve their own learning. At the core of formative assessment is an understanding of the influence that assessment has on student motivation and the need for students to actively monitor and engage in their learning. The use of formative assessment has been shown to result in higher achievement. The National Council of Teachers of Mathematics strongly endorses the integration of formative assessment strategies into daily instruction.

NCTM, July 2013

Review of Student Outcome Data

An institution shall demonstrate that, wherever applicable to its programs, its consideration of outcome data in evaluating the success of its students and its programs includes course completion, job placement, and licensing examination information.

HLC Policy FDCR.A.10.080
