


Evaluation of Students and Its Effects on Motivation

“Human beings have an innate inner drive to be autonomous, self-determined, and connected to one another. And when that drive is liberated, people achieve more.”
~ Daniel Pink

Introduction

Last semester, I designed a writing unit for my English 101 class on the topic of evaluating students’ writing. During this unit, we read “A Proposal to Abolish Grading” by Paul Goodman, we watched Daniel Pink’s video “Drive: The Surprising Truth About What Motivates Us” produced by RSA Animate, and I gave the students grading exercises, one of which was to grade my own writing. After introducing the topic (and before any readings were assigned), I asked the students to take a position: students who thought grades were problematic sat on one side of the class; students who thought grading was the best possible system sat on the other. They could switch sides at any point during the unit. Surprisingly, the class divided in half. Even more interesting was the range of topics the class brought up during the discussions. We discussed enculturation, issues with norming grades, the mysteries of holistic grading, and objective grading, and we questioned the real value of a grade as a currency that exists only in academia. Even the quietest student in the class, Amanda, brought up grade inflation: “Shouldn’t As be harder to get in college?” I knew I would learn from them, but I was impressed by their expertise on the subject and their willingness to challenge me.

I offered the unit plan to encourage students to think about their education. As a college teacher, I feel part of my responsibility is to challenge and motivate students to question what they are doing and why they are doing it as they begin their college-level careers. As Alfie Kohn says in the first chapter of Punished by Rewards, “The time to worry is when an idea is so widely shared that we no longer even notice it, when it … feels like plain common sense” (3). Students would do well to consider their reasons for choosing a major and which classes they should take and why, but they should also take a moment to consider how and why they are graded. Likewise, teachers would do well to ask how and why they are grading. For example, Anthony, a student who took part in the unit on grading, told me during an individual conference, “I’ll tell a friend about an A I got on a paper, but I will never brag about what I learned today.” I laughed at first, but I had a different and deeper reaction a moment later. For Anthony, bragging about the trophy of an A rather than about learning was simply “the way that it is.” He had been conditioned to value a letter grade over the opportunity to learn. As a teacher, I felt deeply dissatisfied with and responsible for Anthony’s response. What Anthony said was so obvious that I had failed to recognize it, even laughing at the truth of the statement. I felt uneasy because bragging about an A – rather than about learning – seems like common sense, although it shouldn’t. Learning should be the goal, not the grade.

Kohn cautions, “at the point when objections are not answered anymore because they are no longer even raised, we are not in control: We do not have the idea, it has us” (3). Although some teachers and theorists are asking questions about how and why we evaluate students, I am not sure the system doesn’t have us. Many – if not all – teachers want their students to be motivated to learn, but our all-too-familiar structure of evaluation may be limiting our efforts to find a solution.
Likewise, we may not be asking the right questions about how we evaluate students’ work. Most importantly, we do not ask the right people – students – often enough. I argue that educators are responsible for considering solutions to this apparent problem.

Before I continue, I want to define several key terms I use through the remainder of this thesis:

Evaluation is an umbrella term that refers to the process of grading, commenting on, and/or responding to students’ writing.

Grades are evaluative indicators that communicate a comparison to an outside standard of success. Grades provide only this limited, comparative information to students.

Comments are any symbol or marginal note that attempts to communicate what is or is not successful in a paper, such as “good” or “vague.” Although they are more informative than grades, comments are mainly evaluative, as they point to specific places of error.

Responses are distinct from comments in that they speak to the writer about the writing and/or the process of writing. Responding focuses less on evaluating and more on communicating to the writer as a reader of the text.

To learn more about how students feel and think about evaluations, I interviewed eight students about their reactions to evaluations over the course of a summer English 101 class at a two-year college. During these interviews, I recorded students’ verbal reactions and administered surveys. My goal was to learn how students were motivated and discouraged by grades, comments, and responses. Going into the study, I expected them to value the letter grade over comments, and I also expected students to be encouraged by high letter grades and praise. These conditions seemed to make common sense, hardly questionable. Why wouldn’t an A grade or praise motivate a student to write more and better? My goal was to sit and listen and to give students a voice in the conversation and debates about evaluations, something that I believe is still somewhat lacking. However, through the process of the study, I learned that students’ responses to evaluations are not nearly as predictable as I anticipated.

Since I began my study, every student or teacher with whom I have discussed the topic of grading or evaluation has been able to recall at least one grade and comment they received on a paper. Not only can we recall grades and comments, but we can do so with crystal clarity. In many cases, we can picture where the comment was located on the paper, we can recite the comment verbatim, and sometimes we can recall what color ink was used by the much loved or hated instructor. Whether the comments and/or the grades were positive or negative, they imprint on our memory, suggesting supreme importance and value. Yet, too often, scholars have ignored this phenomenon and instead have argued about the process of evaluation in a vacuum, far away from the impressionable minds of its audience.

This is not to say that including more student perspectives in research on grading will make the answers to our evaluation problems apparent; evaluating writing is complicated. To put it mildly, grading students’ papers is a conundrum. In fact, as Edward White claims, most students hate to be graded, and most teachers hate to give grades (73). Still, as teachers, we collectively practice grading as an essential part of teaching, and we see “evaluations as a way of helping students learn more effectively” (White iii, iv).
On one hand, we grade students’ writing as a means to compare their work with some sort of marker, benchmark, or standard (Tchudi xv) devised by us – the teacher – and/or by our institutions. We use this administrative sorting device to label and define students, assess standards of the institution, and/or attempt some sort of universal standard (White iii). On the other hand, writing instructors attempt to use “evaluation as a personalized teaching device, to help students learn more effectively” (White iii). However, recent research tells us that we are not sure this instruction is all that effective. In fact, we have very little sense of how this personalized feedback impacts students’ writing (Knoblauch and Brannon) or their emotional and motivational responses to evaluations.

Richard Boyd claims that after over a century of being evaluated, our students have deeply internalized the consequences of the grade. Students like Anthony certainly seem to understand that grades are defining labels in our education system. Through standardized test scores, assessments, and placement tests, we have insisted that scores and grades are the be-all and end-all (Elbow). Since grading is the most widely used measuring tactic in composition classrooms, teachers of composition – if they take the time to consider the process of evaluation – struggle to address students’ writing in the face of these concerns (Connors and Lunsford 201). Not surprisingly, “writing helpful comments is one of the skills most teachers wish to develop” (Connors and Lunsford 203). Although course design and teaching strategies are taught thoroughly in teacher certification and graduate programs, teachers usually do not receive formal training on evaluating student writing (Smith 249). As a consequence, we often imitate the ways we ourselves were evaluated as students, which is not always the best approach. The words we scribble in margins are often the same words our teachers scribbled in ours (Sommers 248).

Despite the abundance of information on the subject of grading and evaluating, “most of the conversation and literature on evaluation is theoretical and centers on the problems of what constitutes good writing […] and how to demonstrate, describe and measure it” (Zak and Weaver xiv). Collectively, the trend in research on grading tends to focus on matters such as the origins of grading, debates over grading process or product, or “advances in efficiency in the production of responses” (Haswell). As a result, the bulk of the rhetoric on evaluation theory centers mainly on the “we” of the grading community instead of on the students who are ultimately affected by the process. Of course, not all theory on evaluation is self-concerning. Theorists such as Straub, Odell, Sommers, and Knoblauch and Brannon have engaged with students about the types of comments they prefer, the types of comments or responses that yield better final drafts, and strategies for validating students’ intentions in writing. However, evaluations have important affective consequences for writers that have not been explored as fully. I argue that the conversation surrounding evaluation should be expanded to consider the direct effects evaluations have on students’ motivational responses. In other words, composition studies should bring more attention to how teachers can support, encourage, motivate, and challenge students to improve their writing and writing processes.
I am suggesting a conversation that begins to see evaluation as a different kind of force, one that Linda Brodkey argues should move away from the aversion to writing that composition evaluation has established in students (220). We should “want to give students reasons to believe that they can do well” in order to motivate them to want to write better (Holaday 37). Focusing on the ways students are motivated and affected by our grading practices may remind us how and why we grade, and whom we are grading. But before we can begin to speculate on how evaluations might better support and motivate students in composition, we need to listen to our students’ points of view.

We also need to know what to listen for. For this reason, I wanted to be able to identify current pedagogical arguments while listening to the students during my study. I researched motivational theory, historical uses of our grading structure, arguments related to grading rubrics (as rubrics were a major part of the evaluation methods used during the study), and theoretical approaches to commenting and responding to students’ writing. With this awareness, I looked closely at the transcripts of the participants’ remarks and answers during my study in order to gain a better understanding of their needs and desires as student writers. The aim was to discover what types of evaluations motivate the participants of the study, to create a conversation between teachers and students, and to identify areas for further study and attention.

Literature Review

What is Motivation?

Since my thesis explores the affective nature of student evaluations, how evaluations motivate students is my primary concern. In a perfect world, all of our evaluation methods would compel our students to react according to our responses. However, motivation is complicated, and instructors of composition are far behind in implementing the research on motivation into evaluation methods. Motivation, as defined by Edward Deci and Richard Ryan, is “to be moved to do something” (“Intrinsic and Extrinsic” 54), which is the ideal response we seek from students when we evaluate papers. Through Deci and Ryan’s thirty years of research, we know that “motivation is generally divided into two separate categories” (Docan 2):

Extrinsic motivation – motivation that is based on a system of rewards and/or punishments. Extrinsic motivation is dependent on external positive or negative reinforcers. Students who are extrinsically motivated are moved to do something only to get a reward or avoid a punishment, for example, a high grade or a low grade. The grade, therefore, is the driving force to perform.

Intrinsic motivation – motivation based on an innate striving for growth. Intrinsic motivation needs no reinforcement; it is the drive to improve for the sake of improvement. Intrinsically motivated students are moved to do something to gain a skill, knowledge, or another form of personal growth. These students might write a paper out of curiosity, or simply because the challenge was put before them.

Deci and Ryan argue that motivation has emerged as an important science for educators. Educators of all types, they argue, “face the perennial task of fostering more versus less motivation in those around them” (“Intrinsic and Extrinsic” 54). In the same way, Daniel Pink argues that the best motivation “promotes greater conceptual understanding, better grades, [and] enhanced persistence at school” (89).
Our job, then, they argue, is to facilitate and not undermine the best type of motivation within our students (“Intrinsic and Extrinsic” 58). Complicating matters, Deci and Ryan have recently re-categorized the types of motivation into autonomous motivation and controlled motivation (“Self-Determination Theory” 182). They argue that autonomous motivation includes both intrinsic motivation and the types of extrinsic motivation that people have integrated into their sense of value and self (“Self-Determination Theory” 182). Students who are autonomously motivated “experience volition, or a self-endorsement of their actions” (“Self-Determination Theory” 182). Yet autonomous motivators may include extrinsic rewards. For example, a student who values learning and personal growth generally sees grades – extrinsic motivators – as informational symbols. This type of student is compelled to grow and improve while recognizing that the reward of the letter grade marks their progress in learning and is not the reason for performing. An A is a symbol of having learned, while a B or C (or lower) is a symbol of the need to improve.

Controlled motivation, on the other hand, consists of external regulation only, in which one’s behavior is a function of reward or punishment. Controlled motivation also includes social factors such as seeking approval and avoiding shame (“Self-Determination Theory” 182). Students who experience grades as controlled motivators work for the grade in order to receive something more external (e.g., approval from teachers, parents, and/or peers). They may also work to avoid the punishment and shame of low letter grades. In the case of controlled motivators, either response gives the grade power to affect self-esteem and self-efficacy. An A to this type of student is seen as a reward, and a lower letter grade is seen as a punishment. The goal, then, is not to improve or learn for the sake of growth; the goal is to get the reward or avoid the punishment. In this case, grades work as motivators, but the work they do, compared to autonomous motivation, “has typically been characterized as pale and impoverished – even if powerful” (“Intrinsic and Extrinsic” 55).

Complications with Motivators

Through the lens of Deci and Ryan’s research, evaluating students’ writing becomes complicated, but this lens may help raise our awareness of potential problems. One problem with facilitating the best motivators in our evaluations may be that many of us act as though we are already teaching autonomously motivated students. We may assume that the evaluative practices we subscribe to are encouraging students in much the same way intrinsic motivators work. In other words, teachers may view evaluation methods as communication with an autonomously motivated audience who sees evaluations as information for the purpose of their growth. However, the reality may be far more complicated. Communication may break down when more extrinsically motivated students see evaluations as judgments, punishments, rewards, or the lack thereof. For example, Jane may be autonomously motivated to succeed as a writer. Jane may then see a comment such as the classic “awk” handwritten in the margin as information, albeit not all that informative. “Awk,” to a student like Jane, may be perceived as advice to revise the sentence. Because Jane has her own drive to improve, the comment is somewhat informative to her, and she will work to improve the sentence and her writing.
However, John may see the academic world through the lens of controlled motivators. To a student like John, the mark of “awk” might be seen as a critique of his capabilities as a writer. Most of what John understands is that he doesn’t like the mark of error. Therefore, the comment does not motivate John to improve or to understand the error in his writing; John just wants to avoid the mark of error in the future. He deletes the awkward sentence instead of improving on it. What this means for teachers is that our efforts to communicate with students may be far more susceptible to misperception than we normally consider. Teachers may think their audience is made up of Janes while the reality may be quite different. The question, then, is: What type of evaluation methods give Jane what she needs and communicate to John that our evaluations are there to help him improve, not to judge him?

Grades as Motivators

Unfortunately, our grading system was designed to work on, and to facilitate, what we now know to be controlled motivators. Connors and Lunsford recognize that grades were designed more as administrative judgments with little concern for student improvement or motivation (137). According to Boyd, grades worked as more of an “exercise to impart social control” as they became a “grading calculus that linked scholarship to character” (7). He argues that as writing courses began to take shape at Harvard in 1869, grades were used to reinforce motives by instilling a sense of fear in the student body through threats of failure and expulsion (8). Harvard’s president called the system a “campaign for character” that “safeguards against sloth, vulgarity and depravity” (Boyd 8). Likewise, in the early 20th century, the University of Illinois’s policy for grading writing suggested that students afflicted with error should be subjected to sanctions and penalties (Boyd 12). Boyd argues that the objective was to “suppress errors through rather draconian methods and secure [the university’s] systemized grading by means of distinctly violent measures” (13). It seems, then, by synthesizing Boyd’s findings with those of Deci and Ryan, that grades were originally used as demeaning evaluations, which undermined autonomous motivation. If this is true, grades were – and are – powerful motivators, albeit through techniques of shame and punishment, thereby influencing students to behave more like John and less like Jane.

Grades as Negative Motivators

If we consider motivation through negative incentives such as punishments and shame in the same way Alfie Kohn has, we know that “punishments are harsher and more overt” than rewards (15, 26). We also know they work. For example, I once witnessed my sister firmly pat the back of my nephew’s seven-year-old hand after he had been hitting his younger sister, Katie. My sister sternly gave the command, “Brian, don’t hit your sister,” and then spanked his hand. Despite the contradictory nature of the lesson, Brian got the message, and Katie was left alone. We don’t hit our students – anymore. However, as Kohn argues, a system of punishment in academia still exists. Although today’s discourse about grading and education skirts around the language of punishment and shame, it nonetheless remains a significant part of our grading system. According to Kohn, instead of giving a punishing grade for poor writing habits as we did in the first half of the 20th century, teachers now see low letter grades as “consequences” for turning in a paper fraught with error.
A “consequence,” then, is simply “a code word for punishment” (21). Although the language of our grading system has changed considerably in the last century, it still runs mainly on the same principles of punishment. We know this system of motivation works; many of us just don’t like how it works. However, we have much more to consider than whether punishments motivate students; we must also consider what they motivate students to do. A recent study found that “[s]tudents who receive negative incentives will exhibit higher levels of motivation than those who receive positive incentives” (Docan 26). However, the problem with negative incentives is that they only “teach what not to do” (Kohn 15). In other words, like John’s reaction to the “awk” comment and Brian’s reaction to the pat on the hand, negative incentives teach little more than a desire to avoid the punishment. John didn’t learn a writing skill, and Brian didn’t learn why he shouldn’t hit his little sister. To make matters worse, some argue that negative incentives cause further harm to motivation in the long run. For example, Kohn found that “punishments may produce resentment rather than responsibility” (21) and, therefore, discourage students from growth. And Brene Brown found that an educational system based on punishment and shame can lead students to “disengage to protect themselves” (185). As a method of self-preservation, students who experience the worst of our grading system will eventually stop tolerating it and will ultimately stop caring about their education (Brown 185). According to Brown, “shame breeds fear. It crushes out tolerance for vulnerability, thereby killing engagement, innovation, creativity, productivity and trust” (188).

Grades as Positive Motivators

Kohn argues that rewards may not be as crushing as punishment and shame, but they are every bit as controlling. Therefore, even an evaluation system based on rewards may not improve students’ motivation levels. Kohn argues that a teaching philosophy of “Do this and you will get that” is not much different from “Don’t do this and you won’t get that” (5). Still, in recent decades, it seems we have preferred to approach motivation through reward. We have attempted, with great intent, to remove the ugly, punishing side of education by replacing it with “stickers, stars, certificates, awards, and trophies,” but we kept the letter grades (Kohn 11). In fact, Kohn cites studies that have shown, even at the university level, “that praise consistently led to impairment in skilled performance” (98). Kohn argues that the erroneous use of praise is universal, as it is “assumed that if you add an inducement (such as money or grades) to do something, an individual’s motivation to do it will automatically increase” (71). Surprisingly to many, rewards limit motivation (Deci and Ryan; Kohn; Pink). According to Kohn, when students “are led to expect a social reward, their interest in a task declines,” even if the reward only promotes a limited higher social standing (101). What is fascinating about these findings is that rewards, contrary to popular belief, “work to buy off one’s intrinsic motivation for an activity” (Deci qtd. in Kohn 70), making a system of rewards unlikely to instill the sought-after goal of autonomous motivation within a student body. According to Kohn, “somehow our intrinsic interests evaporate after rewards [are] introduced” (71). Studies have found that rewards have many other negative consequences as well.
These studies show that, just like punishments, rewards have poor long-term effects. Kohn contends that “if the student doesn’t show sufficient progress in order to get the reward, the entire exercise is likely to lead to further alienation, an even more negative self-image” (61). On the other hand, if the student does get the high letter grade, the reward itself is damaging: “the more rewards are used, the more they seem to be needed” (Kohn 17). The difference between punishment and reward, to Kohn, is that “rewards simply control through seduction rather than by force” (26). In other words, all we may have done in recent decades is move from a system of punishing grades to punishing our students by rewarding grades (Kohn). In terms of our system of evaluating students’ writing, grades may be the least effective form of motivation, yet we continue to place our values on them. Elbow argues that placing importance on tests, scores, and grades leads students to get so hung up on these over-simple quantitative verdicts that they care more about scores than about learning, which, as Deci and Ryan argue, is the problem of controlled motivation. Therefore, instead of valuing autonomous motivators through systems that encourage information sharing, learning, and growth, “we are breeding a need for more control, which then is used to justify the use of control” (Kohn 33). In other words, since we have used a system based on controlled motivators for so long, the idea of having a system without controlled motivators seems improbable at best, even if we understand that the current system does not put learning first.

Yet the research of Deci, Ryan, Kohn, Pink, Elbow, et al. seems to suggest that teachers may only need to redirect students’ attention from grades and toward efforts of communication in order to better facilitate autonomous motivation. If threats, punishments, and shame, along with virtually every type of expected tangible reward made contingent on task performance, undermine autonomous motivation (“Intrinsic and Extrinsic” 59), then what facilitates such motivation? Through research on autonomous versus controlled motivation in a classroom setting, Deci and Ryan discovered that “autonomy-supportive teachers catalyze in their students greater intrinsic motivation, curiosity, and the desire for challenge” (60). Pink also makes this argument. In Drive: The Surprising Truth About What Motivates Us, Pink places the same importance on autonomy if the work to be done is heuristic rather than algorithmic (27). Heuristic work is work or problem solving that is not formulaic but may have many complex or “novel” solutions, which sounds remarkably like the writing process. Pink uses the activity of designing an ad campaign as an example of heuristic work, but many would argue that writing a composition paper also qualifies. In these scenarios, autonomy is one of the essential aspects of motivation (95). According to Pink, the more workers can choose when they work, what techniques they use, and whom they work with, the more productive they become (92). In effect, then, this argument applies to composition students as well. Giving students the opportunity for self-direction and freedom from directive evaluations “appear[s] to enhance intrinsic motivation, as they afford a greater sense of autonomy” (“Intrinsic and Extrinsic” 58, 63).
Since grades and comments such as “awk” do the opposite, as they limit student control, we must look to other forms of evaluation to facilitate autonomous motivation.

Moving Beyond Grades as Motivators:

Another common tool of evaluation is the rubric. Rubrics are well-intended tools of assessment. Initially designed by university professors, they were calibrated scales that could measure “good” composition in schools across the nation (Turley and Gallagher 88), not much differently than the original design of our grading system, as Boyd argues. Rubrics, then, may be another form of external regulation that undermines academic freedom, which may prevent the liberation of “drive” that Pink discusses. Regardless of their original use, writing instructors have brought rubrics into the classroom for several reasons: as an attempt to “norm” grading, as a fair evaluation tool, as shared criteria by which compositions will be evaluated, and as an appropriate way to validate grades. Rubrics, according to Spandel, can also be an “interactive, interpretive process, in which a teacher’s wisdom, insight, experience, and judgment play an important role” (20) in the experience of the writer. However, others have found rubrics to be “menu[s] of generic comments, […] clumsy in practice and in theory” (Wilson 63). Although scholars such as Tchudi have argued that rubrics offer more information than grades, Wilson argues that the information rubrics provide is “prepackaged and processed” (63). A study that focused on cognitive-level writing improvements found that “rating essays according to a rubric does not perform the same function as [evaluating] an essay” (Penner 454), meaning that personalized responses provide a deeper and more informational function when responding to students’ writing. Although many teachers use rubrics to represent a consensus of the values of a community of writers, Turley and Gallagher argue that they may work better as a “launching point for conversation: a place to start, and never a place to end” (91). Rubrics, then, may work better to stimulate classroom discussions about writing but do not achieve much in the way of providing information or motivation as a tool to evaluate a writing project. However, Spandel argues that if students are required to take part in the design of classroom rubrics, rubrics can be part of the conversation and information sharing required in the process of writing (19), which may be a process that facilitates autonomous motivation as students take ownership of the evaluation process.

Comments as Motivators:

Despite our grade-centered culture, some studies have found that students are surprisingly “avid” for commentary (Haswell). As Knoblauch and Brannon argue, “nothing we do as writing teachers is more valuable than our commenting on individual student texts in order to facilitate improvement” (69). In addition, some research shows that students strongly resent the idea that they only look at the grade (Smith 329), while others argue that students actually “care deeply about the comments they receive” (Sommers 251), which may mean not all is lost in our efforts to motivate students the best way we can. Since responses provide, or attempt to provide, the much-needed information that grades do not, as Elbow and Sommers argue, responses on evaluated papers can add value and importance to the – motivationally speaking – impoverished task of writing for a letter grade.
Responses may be of value to students because they affirm them as writers (Sommers 251) and can build the self-confidence and motivation students require to enjoy the process of writing. Despite this encouraging evidence, “it seems that the time-honored effort of inscribing commentary on students’ papers is sometimes honored with a proven pay-off and sometimes not” (Haswell). According to Knoblauch and Brannon, there is “scarcely a shred of empirical evidence” establishing a definite finding that students comprehend or use comments to modify their writing skills (69). The reasons for this phenomenon remain largely unclear. Regardless, the only sometimes successful nature of comments may be due to how comments behave as motivators. A rarely studied pedagogical tool, comments may act as controlled or autonomous motivators or may have no motivational effect at all. In other words, purely negative (critical) comments and purely positive (praising) comments may act as controlled motivators in the same way grades have been established to. For example, many studies have shown that students receiving critical comments display less desirable attitudes toward some facet of writing than do those who receive praising comments; “none, however, has shown differences in the quality of writing” (Hillocks 78), which may indicate that critical comments and praising comments carry the same level of motivational force. When comments are personalized and informative, however, they may act as better facilitators of autonomous motivation (Knoblauch and Brannon 69). As we know, negative comments and positive comments can also be informational. In these cases they are, essentially, negative constructive comments and positive constructive comments. Finally, obscure symbols and shorthand comments may not have any motivational value to students at all.

Positive Comments

Awareness of the harmful effects of negative comments is growing thanks to studies and arguments by Connors and Lunsford, Elbow, and Sommers, among others; however, this message may influence teachers to focus on positive and praising comments as an alternative, as seen in “Learning to Praise” by Donald Daiker. For example, Thomas C. Gee found that “students seem to have more patience in working on the compositions if they think they will be rewarded for what they do well and if they are encouraged along the way” (44). Other studies have shown similar findings but have also suggested that “no necessary connection between higher motivation” and praise exists for these students (Knoblauch and Brannon 70). In addition, as I have already discussed, there are great disadvantages when it comes to using rewards. Positive comments, if used as rewards only, can make one dependent on someone else’s approval (Kohn 96). Moreover, “praising [students] for the work they do may discourage self-directed learning, since it is our verbal [or written] rewards, and not love of what they are doing, that drive them” (Kohn 105). This may be why Frances Zak, unlike Gee, found in her study that “there was no significant difference in performance” between what she calls the “Regular Class,” students who received her usual “corrections, commenting and advice giving,” and the “Positive Only Class,” students who received only positive comments (51). However, Zak did not pay as close attention to students’ reactions in her study as Gee did.
Zak focused her attention on writing improvements as opposed to the writing experience: a common emphasis when scholars discuss responding to students’ writing. Regardless, Zak found that both classes improved after a semester, but neither significantly more or less than the other (51). Haswell argues that “students rightfully are leery of praise when it is general and gives little direction,” which, again, points to the disadvantages of non-informational comments. Although praise can be helpful to writers – some studies have found that students strongly support knowing what they did well on papers, not just what they did incorrectly (Smith 331) – Sommers argues that “when such praise is not paired with constructive criticism, it has the opposite effect” (251). Praise “often stalls writers” because they are not being “asked to do anything differently” (251). Essentially, then, praise on its own does little to communicate growth to the student, which seems essential to autonomous motivation.

Information as Motivators

If information is key to students’ motivation to improve their writing, then many coded or cryptic comments may not convey enough meaningful information to have any effect. Motivation is to be moved to do something; however, Sommers calls these types of marks and squiggles “underwhelming comments” (250). According to Sommers, these comments often go unread and unused (250) and, therefore, neither provide useful information nor stimulate any type of motivation. Likewise, Jean King (qtd. in Knoblauch and Brannon 70) found that “outright correction of errors,” “naming kinds of errors,” and “offering rules” do not improve students’ aptitude for writing. This may be bad news for teachers who often dwell negatively on surface mistakes and word choice while leaving a trail of underwhelming comments in their path. Some may find it discouraging and disappointing that students are less likely to pay attention to circles around grammar, spelling, and documentation errors (Smith 330), but, as many argue, unless these types of marks are paired with responses, they are likely to be inconsequential to students’ writing.

If critical comments along with praising comments promote only a lower form of motivation the same way grades do, then valuable information that promotes awareness, growth, and self-efficacy could be the best facilitator of autonomous motivation. More evidence on this matter comes from Sommers, who found that 90 percent of students urge faculty to give more specific comments (251). Accordingly, Sommers argues that “[t]houghtful responses to writing further validate and enrich the writing efforts and experience with students” (251). Denise Blake also discovered that what her students wanted was a “description of their strengths and weaknesses with specific prescriptions for improvement” (88). Interestingly enough, this request for better, more detailed responses on papers came from students who had received an A (Blake 88), which may add to evidence that responses have more motivational value to students than letter grades or comments. In turn, Blake assigned her students to comment on her responses to their evaluated papers before she recorded the final grade of the assignment. She found herself responding to their responses to her responses (88). She concludes that it was her “purpose to get more active participation in the evaluation process,” and that her students “looked forward to the opportunity to express themselves in the evaluation process” (88, 89).
This finding suggests that students value discussion, collaboration, and “the chance to revisit a paper and to talk with someone about their writing” (Ketter and Hunter 104). This type of discourse through responses often moves students forward as writers “because [responses] resonate with some aspect of their writing that our students are already thinking about” (Sommers 250).

Motivational responses come from listening and communicating:

Sperling and Freedman argue in their article “A Good Girl Writes like a Good Girl” that communicating with students through evaluations may lead to “unavoidable” confusion (127), even with the best students (113). What their article does best is broach the idea of miscommunication in such a way that it becomes apparent that miscommunication can occur, in any form, between any student and any teacher evaluation. However, this evidence should not deter us from communicating but rather encourage us to understand our communication efforts. This raises the question – as Odell asks – how, then, do student writers respond to their readers’ responses? Fortunately, all teachers may need to do is ask. As O’Neill and Fife argue, “students’ voices can teach us about response in the writing classroom” (190).

Many theorists are concentrating on this approach. For example, Straub’s study considers what comments students prefer, Odell looks at how teacher responses “help shape both the form and the content of a final draft” (221), J. Sommers asks “how can we know students’ intentions with their writing,” and Knoblauch and Brannon discuss how facilitative and directive comments work to improve a student writer’s drafts. While Straub, Odell, J. Sommers, N. Sommers (as mentioned above), Knoblauch and Brannon, and others are listening and engaging with students about evaluations in the sense of what teaches writers to improve – all of which is necessary to the field – O’Neill and Fife, while listening to the students in their study, “realized that a primary focus on the written text in teacher response research is inadequate in explaining how the students – the intended audience for those comments – read them because written comments function within a larger contextual framework” (199). In other words, the discourse within evaluations merges with the discourse between student and teacher during class time, which suggests that the door to discussing evaluations with students is already open. This is good news in that “teachers need not work alone if they can find ways to get their students to talk about their values regarding writing” (O’Neill and Fife 128). We must then develop a dialogue with students in our responses and in class discussion. According to Sommers, responses play “a leading role in … writing development when, but only when, students and teachers create a partnership through feedback – a transaction in which teachers engage with their students by treating them as apprentice scholars, offering honest critique paired with instruction” (250). Therefore, listening and communicating with our students about evaluations in the classroom and in conferences may be one of the better approaches to building motivated students, as it both enables us to learn about our students and builds a comfortable and encouraging discourse with them. Mainly, when it comes to motivating our students in the best sense of the word, we may do better to remember that many students value an “opportunity to engage with an instructor through feedback” (Sommers 251) and about feedback.
If we grant them this opportunity, they may even grow to value responses to their writing more than a letter grade. My study is similar to O’Neill and Fife’s in that both set out to emphasize students’ readings of and reactions to teacher comments and responses. Much like O’Neill and Fife, I want to do my part in “expand[ing] our conception of the response situation to encompass all the interchanges about evaluations” (191). Where O’Neill and Fife chose to concentrate on the contexts in which students read responses to writing, such as the context of previous teachers’ comments and the context of classroom discussion, I chose to focus on how evaluations work to motivate students through the writing process. In the next chapters, I discuss how the participants in the study were or were not motivated through the interactions they had with the instructor through grades, comments, and responses. In other words, I look at the communication between instructor and student to see what information moves students to “want to do something,” and I explore in what ways those communicative situations can be improved.

Methods

I interviewed students from a single composition class during an eight-week Summer 2013 composition course at an area two-year college. The purpose of this project was to listen and learn how this group of students responded to the written evaluations they received on their writing. The survey and interview questions were designed to study the possible changes in students’ motivation to write, their emotional reactions to evaluations, and their confidence in their ability to write after receiving the evaluations.

The Environment:

The study was conducted at an accredited, public, two-year college. This college serves a rural community in the Midwest with a student body of roughly 8,000 students per academic year. During the summer, the institution offers a hybrid course – using both face-to-face class time and synchronous online learning – in an eight-week program. It is this hybrid course that I studied; the impact of the class design was not the focus of the study. The composition class met every Tuesday and Thursday, with online course work on Mondays and Wednesdays. I observed every Tuesday and Thursday class. Although I had access to the online forum, the forum was inconsequential in this study. Furthermore, I was not allowed access to any emailed correspondence between the instructor and the students, nor was I included in any one-on-one conferences during or after class meetings.

The Participants:

All names used in this study are pseudonyms chosen by the participants. I included two types of participants in this study. The first is the instructor of the class, who allowed me to interview the participants and gave me access to their evaluated papers. Lan is a faculty instructor with 17 years of teaching experience at the institution where the study took place. Since the study is dependent on his evaluations, his evaluation method is particularly important, so after the semester ended, I interviewed Lan about his grading philosophy. He described himself as an editor marking errors while limiting comments. He predominantly uses his rubric as a form of evaluation (rubric example in Appendix C). I discuss Lan’s evaluation philosophy in greater detail in chapter 3. Lan’s evaluation method was unknown to me prior to the start of the study and played no part in my choice of his class or my research methods.
First-year college composition students make up the second type of participant. These students volunteered for the study on the first class day of the semester. Nineteen students were presented with the option to take part in the study. Nine volunteered, with eight completing the study; the ninth student was withdrawn from the course during the second week. One of the remaining eight students never completed the surveys. Only the data collected from the eight students who completed the course are represented in the results.

I did not systematically record the age or background information of the participants. The demographic information below was compiled through information given by the students either during the interview portion of the study, during observed class discussion, or during brief personal interaction outside of the classroom or study. I use the term traditional student to mean that the student enrolled in college directly after graduating from high school or after serving one term in the military or reserves. Those students listed as high school seniors are students who have not received a diploma of completion from secondary education but have enrolled in college-level composition. Those listed as non-traditional students are students who mentioned that their time away from academia played some sort of role in their academic performance. The demographics of the nine students vary:

Jack: Traditional Student
Jackie: High School Senior
Kimber: High School Senior
Lynn: Non-traditional Student, Mother
Sarah: Non-traditional Student, Military experience, Mother
Sandra: Traditional Student
Sean: Traditional Student, Military experience
Unique: Non-traditional Student (withdrawn from the course after the first interview and survey)
Victoria: Non-traditional Student, Mother

The surveys:

After the second day of class meetings, I surveyed eight of the nine students. One student did not attend this meeting and never completed the survey. Another student was withdrawn from the course after completing the survey. During this meeting, I had the participants choose their own pseudonyms. The survey asked the following questions:

How strong of a writer are you?
How comfortable do you feel writing college-level papers?
How anxious are you beginning this writing class?
What grade do you believe you will receive in this class?
How important are teachers’ comments on your writing?
How important to you is your grade in this class?

(Survey in Appendix A)

The primary goal of this survey was to gain an understanding of the students’ perceived writing ability, values of education, level of writing anxiety, and the perceived importance of the instructor’s role in a composition class before going into the interview process. The secondary goal of the survey was to compare the information to a second survey taken at the end of the semester. This second survey was initially offered through email after the semester was over. Only two surveys were completed through email. Five additional surveys were collected by phone several weeks after the end of the summer semester. Seven of the original nine students completed this survey.
The survey asked the following questions:

How much has your writing improved over the semester?
How did your grades make you feel about your writing?
How helpful were the instructor’s comments in improving your writing?
How strong of a writer do you feel you are now?
How comfortable are you writing at a college level?
What grade do you believe you will receive in your next college-level writing class?

(Survey in Appendix B)

The primary goal of this survey was to record the changes in perceptions and confidence of each participant. The secondary goal was to mark and describe the variation between each student’s predicted overall grade for the course, their actual grade for the course, and their predicted overall grade for their next college-level writing class.

The interviews:

Through this portion of the study, I wanted to match the research structure to the subject I was studying. In other words, I felt that if I intended to focus on the motivational and emotional aspects of students’ evaluations, then I would have to create a study that allowed for that evidence to emerge. I designed this study around a focus group setting, which would allow a comfortable environment for students to communicate with me. Since the premise of my study was to provide a vehicle for students to voice their ideas on the subject of evaluations, I simply wanted to listen to the students talk. Before I held each interview, I collected the evaluated papers from the previous assignment. I used this information to familiarize myself with the results of each participant’s paper. During each interview, I had a list of the students’ names with their letter grades and a sample of comments each had received on their work. This way, I had a point of reference for the comments made during the interview. I went into each interview expecting it to last about 30 minutes, but I did not hold the students to a time limit. If the students were willing to talk, I was willing to listen. Likewise, students were free to come and go during the interview.

I held a total of four interview days. The first, second, and fourth interviews took place the class day after the students received their evaluations. These interviews were with the focus group. The third day of interviews took the form of two one-on-one interviews: the first with Sean, the second with Lynn. During the semester, the students were assigned five papers; however, the class did not meet after the fifth and last paper was evaluated. I used a different set of questions for each interview. I designed my questions around key points drawn from studying the comments and grades the students received. These points were based on topics I wanted the students to consider at that time. For example, in the first interview I wanted to know whether the students understood the instructor’s rubric and comments; in the second interview, I wanted them to consider whether they felt they had learned more about writing or more about how the instructor grades papers. However, the interviews were not confined to these topics. The questions I asked were intended to stimulate discussion in order to tease out the students’ experiences. During the final interview, the students hardly needed me there at all as they discussed their reactions openly.

The first interview:

The first interview was held after the Tuesday class during the 3rd week of the program.
The students had turned in their assignments the prior Thursday and had received their letter grades via the college’s course management system the Monday before the interview. Most students had not read the evaluations or comments until the morning of the interview. This interview lasted approximately 30 minutes. Unique (the withdrawn student), Sarah, Sean, Jack, Kimber, Sandra, Lynn, and Victoria were present for this interview. Jackie was absent.

Preliminary questions for the first interview:

Who feels discouraged?
Who feels motivated?
Who feels challenged?
Who feels under-challenged?
How helpful are the comments for you?
Do you like Lan’s (the instructor’s) grading rubric/comment style? What do you like or not like about it?
As this is your first college composition grade, what are your impressions of college-level writing?
What does a grade mean about you? Does it mean anything?
Does it make a difference if the grade is for math, history, etc.?
What is significant about a grade on writing?

Second Interview:

This interview was conducted after the Tuesday class during the 5th week of the program. The paper was due one full week prior to this interview. As with the first essay, letter grades were distributed electronically on Monday, the day before the interview. However, for this interview, the students did have access to the comments and evaluations the day before the interview was held. This interview lasted 35 minutes. Sarah, Sean, Jackie, Kimber, Sandra, and Victoria were present for this interview. Jack and Lynn were absent (Unique was withdrawn).

Preliminary questions for the second interview:

What were your difficulties with this last paper?
How did you approach this assignment differently from the first?
Did your last grade affect how you wrote this paper?
Did Lan’s (the instructor’s) comments change your approach?
Did anyone think something along the lines of, “I think this is what Lan wants from me”?
What did you learn from the first grade? Did you learn more about your writing, more about who you are as a writer, or did you learn more about Lan?
Where do you go from here?
What is your plan?
What do you have to do to improve?
Does anyone think they can’t improve and just hope for the best?
Is anyone considering dropping this class at this point?

Third interview:

The day the third interview was scheduled, three students were absent, and two students had mentioned they would not be able to attend the interview after class. As a result, I cancelled the group meeting. Since I was not going to be able to record a group meeting within a time frame I deemed suitable for capturing the students’ authentic responses to comments and evaluations, I decided to use the lost time as an opportunity to interview some of the quieter students individually. This interview was not necessarily about the third paper, but more of an opportunity to hear what the students may have wanted to say but didn’t in the focus group setting. I selected four students to interview the next class day. Only two students, Lynn and Sean, agreed to a one-on-one interview format. Each of these interviews lasted 15 minutes.
Sean’s preliminary interview questions:

Do you feel you can improve as a writer?
What makes you feel you are not a good writer?
How did writing tutoring help you?
Will you continue to seek the help of a tutor?
Will you take advantage of the rewrite option for the first three papers?
Does Lan make you feel better or worse about your writing?

Lynn’s preliminary interview questions:

How did you improve so dramatically (from the second-worst grade to the second-best) in this class?
How much has the tutoring helped you?
Do you communicate with Lan in person or through email?
How do the comments of praise make you feel about writing?
Did the first, lower grade of D+ motivate you?
What kind of student were you in high school?

Fourth and Final Interview:

This interview was conducted after the Tuesday class during the 7th week of the program. Like the first paper, this paper was due the Thursday prior to the interview. As with the first and second essays, letter grades were distributed electronically on Monday, the day before the interview. All of the remaining eight participants in the study attended this interview. This interview was based less on my questions, as the focus group took more control of the discussion. Although I did ask a few key questions (listed below), I mainly engaged in the conversation and allowed it to take its own course. The group discussed grading fairness, lagging motivation, teacher responses versus the rubric, and class and grading design, such as a full-semester writing class and different forms of grading. The last topic led me to ask their opinions on pass/fail courses and class structures without grades. This interview lasted 58 minutes.

Preliminary questions for the fourth and final interview:

Has anyone received tutoring for this class?
Would you rather have an A- or a B++/Good Job?
Do grades prohibit you from taking risks?
Describe the feeling of frustration when receiving a less desirable grade.
Would you rather have a pass/fail course or a class like Lan’s?

Interview with the instructor:

A week after the semester ended and the final grades had been turned in, I interviewed Lan. During the study, I felt a need to gain a better understanding of his grading philosophy in order to fully appreciate the conversations I had observed.

Questions for this interview:

What is your grading philosophy?
How has your grading philosophy changed during your years of teaching?
Why has it changed?
Why do you sometimes leave a positive comment after the letter grade?
What is the difference between a B++ and an A-?

Coding the Data:

After transcribing the audio-recorded interviews, I began coding the responses to the questions and the general conversations during the interviews. I broke down the comments into six categories:

Emotional responses, including indifference
Responses on fairness, such as point values and the validity of feedback
Communication with the professor outside of the classroom, including emails or after-class discussions
Responses on confusion, such as questions regarding the rubric or whether a comment was meant to be good or bad
Outside factors, such as influence from previous teachers and tutoring during the summer semester
Responses that pertained to writing for the grade, such as risk taking

As the emotional responses emerged as such a large category, I broke them down into subcategories:

Positive
Negative
Indifference
Motivation/Positive
Motivation/Negative

I also coded by keyword and phrase searches.
I did not code the instructor's interview, as the study did not focus attention on the instructor's reactions. The purpose of this post-semester interview was to gather background information on the evaluation methods the participants were reacting to.

Results
In this results section, I discuss what may best be described as students' complicated responses to an instructor's attempt to find success by simplifying his evaluation methods. To describe my findings in detail, I first discuss how students perceived the grading rubric designed and used by the instructor; second, I discuss and compare two students' experiences as a case study; third, I discuss how grades motivated the students; and last, I describe the differing perspectives on evaluation between the teacher and the students.

One of the most significant findings from this focus group came through the surveys given at the beginning and at the end of the semester. According to the initial survey, every participant strongly agreed that teachers' comments on their papers were extremely important. However, this consensus was short-lived. By the end of the eight-week semester, all but two students, Victoria and Lynn, said that teachers' comments were not nearly as important to them as they had been at the start of the semester. Based on the student responses I received during this study, I believe the greatest factor in this significant drop in the importance of teachers' comments was the lack of personalized comments the participants received. Furthermore, it appears that the instructor's use of a grading rubric during the study gave him a false sense of having provided adequate feedback, while the rubric left his students unsatisfied and confused.

The Rubric
By the second interview, I began to notice a trend in Lan's commenting and responding style on evaluated papers. He depended heavily on the rubric to convey information, underlining sections or descriptors within it. Within the context of the students' writing, his comments were mostly directive, consisting of circles or single words such as "cliché" that directed attention to specific errors. Sometimes he would ask brief rhetorical questions in the margins about word choice or logic. Only four handwritten comments out of 88 – not including circles or symbols – on the first and second essays were positive or praising. Without remarking on the statistics of Lan's comments, I asked the students how they felt about Lan's style of commenting. I received the following four comments in reply:

Jackie: I think it would help a little bit more like to write some more positive stuff. I mean, I know he teaches quite a few classes and grades a lot of papers but, I don't know, I just feel like he could put a little more effort in to helping us out on our papers.

Sandra: It makes me want to get back more praise. Like I can still remember on my past papers exactly what [a previous teacher] wrote and right where it was on the paper. Because that is such a good feeling to be praised by a teacher!

Sarah: Aside from the praise part, if you don't know what your strong points are, if no one is telling "Hey, this is what you are doing good in. Keep that up! Now you need to work on this" then you might fail next time. Well, I didn't know I had a good introduction. I didn't know I had a good topic sentence.
You know what I mean?

Jack: The only [ink] on my paper was on a couple of places about my formatting. I don't know what he is referring to in my paper because he didn't mark anything in my paper really.

These comments suggest a multitude of reactions: feeling unfulfilled, lacking direction, and confusion, to name a few. In Jackie's and Sandra's responses, it is clear that they both desired a sense of reward from positive comments. In other words, much like Kohn argues, both students exhibited a need for more controlled motivation, i.e., praise or punishment, through Lan's comments. Jackie suggested that Lan was negligent in doing his job as she commented on his lack of effort and the possibility that he may be overworked. Sandra seemed to say that the praise she had grown accustomed to receiving from previous writing instructors was missing from her writing experience. What I gathered from these responses is that without praise from a writing instructor, the satisfaction for which they write had been reduced. Jackie missed a sense of caring and helpfulness. Sandra missed the uplifting reward of doing well.

Sarah, too, desires more praise but for different reasons. Sarah argues that the lack of positive comments leaves her questioning what she has done well in her writing. In other words, without positive feedback from Lan she has no idea what her strengths are, and without the acknowledgement of success, she is just as likely to change good writing habits as bad ones. The underlying tone in Sarah's comment is a fear of ignorance. Therefore, what Sarah is seeking is less praise and more information. She greatly values knowing what to do as well as knowing what not to do. Sarah shows a strong sense of autonomous motivation, i.e., a drive to learn and improve, as she wants to improve for the sake of improvement.

Finally, Jack's reaction is similar to Sarah's, as the lack of feedback left him feeling as if he had received no clear direction. His remark about not knowing what Lan was referring to addresses Lan's rubric (see Appendix C), as I will describe later. Overall, Jack was confused by Lan's evaluations from the moment he received his first graded essay. "Is it good or bad? It doesn't say on here," he told me while holding up his first essay. More than a general confusion about the rubric, Jack's comment about "the only ink on here" seems to suggest that he was conditioned to expect something more from a writing instructor's evaluations.

What is interesting is that Lan's grading philosophy, developed over 17 years of teaching composition at a two-year college, was designed to combat these reactions. In describing his grading philosophy during our interview after the end of the semester, Lan had this to say:

My philosophy is as of now more of an editor offering feedback that the student can then use as a spring board to figure out what they do well and what their challenges are in their writing and use it … as a vehicle to … engage the revision process in the future. By that I mean, the use of a grading rubric, which at the end, highlights five major carriers, and strays away from commentary or comments along the essay itself, is an approach to grading where it is almost as if I'm detached from the paper …. It gives me the ability to review their work without dealing with it as if this paper is meant for me.

Despite Lan's good intentions, there is significant miscommunication between Lan and his students.
The first is that although Lan believed he was offering feedback that explains "what they do well," his students seemed blind to his efforts. The second is that although Lan aimed to highlight his five "carriers" (Focus, Organization, Development, Readability, and Mechanics) by placing a letter next to each defined carrier on the grading rubric, his students did not understand his system. The third is that Lan's students wanted responses from an engaged reader while Lan tried to detach from their writing process and respond more as an editor marking errors. Furthermore, as the students and I observed, Lan acknowledged that he "rarely ever make[s] a comment like 'what an excellent paper' or 'this is a really great paper.'" These types of comments do not fit into his grading philosophy. To explain this philosophy further, Lan drew an analogy from the movie All the President's Men:

By and large when Carl Bernstein and Bob Woodward would write a story and give it to their editor, the editor didn't feel compelled, nor were they expecting it, to have him say, "Boy you guys clearly worked really hard [on] this story. I'm really proud of you, I'm glad you are here, I'm glad we are part of a team together, but maybe you should do these things differently." That wasn't the relationship.

In order to assume the role of the editor, as opposed to a friend or family member, it is a conscious effort for Lan to "detach" himself and his comments from the content of students' papers, particularly with praise. By taking on the role of an editor, Lan believed he was teaching an important part of professionalism in the writing process. As far as praise went, to Lan "the grading rubric in of itself is an opportunity to […] pat them on the back if they have done well and steer them in the right direction." His reasoning was as follows:

For each of those five categories, if they see all five categories having, you know, G (good) next to them, it stands to reason that they will be able to figure out, "Ok, this was a good overall paper." […] That's why I don't make a whole lot of global, big picture, comments about the paper.

Although Lan's reasoning behind his rubric is logical – that students should be able to recognize the letter G on the rubric just as they would recognize a "Good Job" handwritten in the margins – this method seemed to make his feedback virtually invisible or meaningless to his students. In terms of motivation, it appeared that the only motivational factors in the evaluations could be found in his system of letter grades. As no information, personal connection, or autonomy was conveyed through the rubric or comments, very few motivational facilitators existed in Lan's evaluation method. Therefore, this philosophy of detaching from the text by using an impersonal grading rubric may be the root of the dramatic decrease in the degree to which Lan's students valued teachers' comments. By detaching from the writer and by writing comments devoid of any connection with the writing itself, it seemed that Lan was undermining the importance of what he was trying to communicate through comments on students' work.
For example, Jack said, "I don't know what he is referring to in my paper because he didn't mark anything in my paper really." The handwritten comments on his paper mostly pointed to format errors such as margin sizes, unnecessary spacing, the word "flow", and "Novels are italicized." However, on the grading rubric attached to Jack's paper, Jack received a "G" for Organization, Development, Readability, and Overall. On Focus and Mechanics, Jack received an A (acceptable). In other words, where Lan believed he had provided Jack with four "good job" comments, Jack believed the only ink on his paper concerned formatting. Lan's feedback consisted of praising comments that failed to register as any form of motivation for Jack. By the end of the semester, Lan's rubric held little value for students other than the overall grade on the bottom of the page.

Sarah's reaction is even more compelling than Jack's. Because Sarah clearly shows autonomous motivation, I found her reaction to have the most to say in this discussion. When Sarah said, "if you don't know what your strong points are … then you might fail next time," she directly contradicted Lan's argument that students should have no problem recognizing a "G" on Mechanics in the same way they might a "good job" handwritten in the margins. Nor does the rubric seem to provide Sarah with enough explicit information regarding the reason for praise. For example, on Sarah's second paper, which prefaced the second interview, she had received the following marks on her rubric: E (excellent) on Focus and Development, G (good) on Mechanics and Readability, and A (acceptable) on Organization. The only other marks were three circles around "I" and "you". But as Sarah expressed her frustration with a lack of positive feedback, she clearly did not see two "Excellent work!" comments and two "Good Job!" comments. In fact, Sarah explicitly said about the rubric, "just a few handwritten notes would be better." "Maybe", she said, "if I look at [the rubric] better," and then she stopped. I can only speculate that she meant that had she looked at the rubric more closely, she might have seen the positive comments hidden in the coded letters E, G, and A. However, as I noted earlier, Sarah is really looking less for praise and more for information, and the chances of her finding it in the rubric are slim. For example, an A in Organization does little to inform Sarah about which particulars of her organization needed work. At the end of the first interview session, Sarah asked me directly, "Do you know how the rubric works?" I suggested she talk with Lan to find the information she was looking for.

Sarah's desire for more information drove her to communicate with Lan outside of the classroom. During the interview portions of the study, Sarah twice responded to complaints about Lan's evaluation style with "if you want to know, then go and ask," which suggests that she had frequent personal exchanges with Lan outside of the classroom. Although I do not know the content of the exchanges between Lan and Sarah during these moments, I can speculate that Sarah received the information she wasn't getting from the rubrics, which could be why she suggested each student should communicate with Lan outside of the classroom. Overall, Sarah, who earned the highest final grade in the class, also dramatically lowered her opinion of the usefulness of the instructor's comments by the end of the semester.
It appears that Sarah found few, if any, facilitators of autonomous motivation within Lan's style of written feedback. Therefore, she lost interest in that genre of evaluation and, instead, sought the assistance she needed through other means of communication.

In addition to these frustrations, the participants in the study expressed other confusion about the rubric. During the second interview, students questioned the meaning behind Lan's marks on the rubric. For example, Sandra said, "I'm so confused about how he picks the [percentage grade]. He has letters [E, G on the rubric] then there is randomly a number. If [the paper is] out of 100, where does he get that number from?" Moreover, other students expressed confusion with other marks on the rubric. Lan defines each of his five carriers on the rubric (see appendix), for example, "Organization: The paper's major points are clearly and logically organized. Topic sentences, well written and concise, are used in a variety of ways…," and as he reads a paper he underlines the aspects that apply to the student's writing performance. For example, on Jack's paper, under the description of Focus, Lan had underlined "The function of each sentence is clear to the reader." However, it was unclear to Victoria, Jack, Sarah, and Kimber whether the underlined phrases noted areas of improvement or areas of success. The phrase "function of each sentence" alone brought confusion. When I asked what the students thought "function of the sentence" meant, there was a prolonged group discussion. The general consensus was, "it means you don't need that sentence."

With all of this said, it may come as no surprise that, as the survey results show, students' sense of the importance of teachers' comments dropped sharply within the short span of the eight-week program. The reasons for this decline might be best summed up in Sandra's description of Lan's evaluations: "His [comments] are just like something you could obviously check off a list," which sounds closely related to Sommers' "rubber stamp" argument. Sandra's remark is both an accurate description of Lan's self-described approach to comments and the general consensus of the participants in the study. As the students became accustomed to underwhelming comments, they valued Lan's remarks less, which is a significant loss of a powerful communication tool in a writing class. At the beginning of the semester, the participants thought instructor comments were important and planned to use this tool as a large part of the learning and writing process. However, most students ultimately found this tool to be an unimportant part of their success. Moreover, as little to no personal communication occurred between Lan and his students through his evaluation efforts, Lan had little opportunity to become privy to the students' perception of his evaluations.

Case study: Sean and Lynn
In order to provide better detail on the ways motivation was found – or wasn't found – in the study, I chose to look closely at two students' writing experiences through the semester and compare them. Sean and Lynn are excellent students to compare because they had many similarities at the start of the semester. On the initial survey, both marked the lowest value for strength of writing, both predicted the same letter grade of C as an overall score for the course, and both marked the highest value for the importance of the grade and teachers' comments.
In addition to survey similarities, both students received a D (Lynn a D+) on the first assignment, and, to my knowledge, they were the only two students who worked with the only writing consultant at the college. However, as the semester unfolded, Sean's and Lynn's writing experiences seemed dramatically different. Although they both started from identical situations (as far as the study observed), Sean and Lynn ended the semester with completely different outcomes. These outcomes were not limited to the final grade for the course but included their overall satisfaction with the writing experience. Sean continued to believe that he was incapable of improving his writing, and Lynn discovered that she was capable of writing successful and praiseworthy compositions.

Sean began the semester with a high level of anxiety. He was quiet in class and during the study's group interviews. However, Sean would often approach me outside of the classroom to discuss what he considered to be his lack of progress. Sean's problematic history with writing became apparent during these interactions. For this reason, I invited Sean to have a one-on-one interview with me. During this interview, I learned that he had a long history of C grades on his writing. This history played a huge role in Sean's poor self-esteem as a writer: "I might be able to increase by maybe 5 percent," he told me near the beginning of the semester. It was the reason he predicted a C grade for the course and the reason for his high-anxiety answer on the initial survey. Despite what seemed like consistently average scores all through high school, Sean continued to value higher grades and had a strong aversion to lower grades. In much the same way that Deci, Ryan, and Kohn argue that controlled motivation works to avoid punishment, when Sean received the D grade on his first paper, it motivated him to work harder. Moreover, Sean never avoided responsibility for his writing performance. For example, about his first grade he told me, "I mean, that's my fault. I didn't proofread it enough." Furthermore, Sean began to seek my attention after class, sometimes even chasing me down in the parking lot, which seemed to suggest a drive – or at least a concern – to succeed.

With the improvement to a C on his second paper, he temporarily believed he could improve. "I would have given myself a lot worse grade," he told me after he looked back on the first paper of the semester. Sean could discover flaws in his own writing, which indicates an aptitude for improvement. With the motivation created by raising his grade from a D to a C, Sean hoped to improve even more with the help of a writing tutor at the college. It seemed he believed he could do better. After seeking writing help and prior to receiving the third evaluated essay, Sean told me, "I worked much harder on this one. I feel pretty good about it. I think I did pretty well. I proofread it and got a tutor." At this point in the semester, Sean seemed enthusiastic about his education and the possibility of receiving a grade higher than a C. Unfortunately, Sean received another C. It was at this point that Sean caught me after class again. He was clearly angry as he spoke quickly while complaining about the outcome of his third paper. He had taken the advice of a fellow student, Sarah, who had convinced him to seek writing help, and I believe he was initially proud that he had done so.
Regardless, Sean's performance resulted in the same letter grade as his second paper. On the fourth paper, Sean admitted to having worked only an hour and a half before turning it in. Again, he received another C. "Since I got a C whether I work hard or not, I'm not even going to try [on the fifth paper]," he told me during the final interview. To clarify, I asked him if he was going to work with the writing tutor again. His response was, "Oh no. Fuck it, I will just turn it in like it is." He received a C on the final paper as well.

I argue, on the basis of this project, that the second C letter grade (the third grade of the semester) caused significant harm to Sean's motivation, whereas the lower D on the first paper propelled it. It is hard to capture his enthusiasm about his work on the third paper, but clearly Sean was proud to have made the effort to seek tutoring. But because the reward on the third paper did not match the effort he believed it deserved, and because the grade reinforced his old beliefs about grades (that he is and will always be a C writer), Sean became indifferent and decided to test the accuracy of college-level grading with the fourth and fifth papers. Because of the negative connotations of an average grade, he gave up. To Sean, a letter grade of C held no motivational facilitators. His own hypothesis that he was a C writer was reinforced, not once but twice, and his belief that he cannot improve his writing was dramatically strengthened. Sean's overall letter grade for the course was a C as well, just as he predicted.

Like Sean, Lynn began the semester with nervousness, probably stemming from high school, where she considered herself "a terrible student." She too was quiet during classroom activities and during the interview sessions for this project. Lynn had a difficult start. "I knew I was going to have a hard time with this class. I was freaking out," she told me during the final interview. After receiving her first letter grade, a D+, Lynn said, "I'm not gonna lie, I cried over this grade." She openly admitted that she was "definitely discouraged" and felt as though she had no business in a college-level writing course. And much like Sean, the low grade motivated her to seek tutoring. Unlike Sean, however, after Lynn received the help of the on-campus writing consultant on her writing assignments, the added commitment paid off. She told me, "After I got back my first paper, when I thought, 'No, this is not going to happen,' I sought tutoring and it jumped me from a D to a B." In fact, after receiving another B (a B++ to be exact) on her third paper, Lynn's grades continued to improve. She received an A on her last two papers, and she finished the course with an overall score of B+.

Like Sean, Lynn seemed to be driven by controlled motivation. Her goals seemed to be based mainly on the letter grades of the class. As I have already mentioned, the two students began on what I consider a level playing field, and the grade seemed to be their biggest motivator. Nevertheless, Lynn's attitude remained determined whereas Sean's motivation disappeared. This determination is best seen in the difference in time and effort each student put into the final papers of the semester. For example, Lynn said:

It takes me five days to complete a paper. You know I have to brainstorm, and brainstorm, and brainstorm.
And so, I'm not like these other students who are like, "gosh, I just wrote my paper like 4 hours ago" and then they get Bs and As.

This statement shows a great difference from Sean's approach, as he admitted to having worked only "an hour and a half" on his fourth paper. Where Lynn expressed that she consistently spent many hours with each paper, Sean had all but given up. Although both students believe writing is a difficult task for them, Lynn's comments during the interviews suggest that she kept at it throughout the semester. However, Lynn's initial efforts were rewarded while Sean's were not, which suggests a strong connection between the letter grade and student commitment and motivation toward writing. The only difference in grades between Sean's and Lynn's writing experiences through the first three papers is that Lynn received two Bs while Sean received two Cs. This difference of only one letter grade seems mild in comparison to the significant differences between the writing experiences of the two students. However, I believe that the experience both students had through evaluations and interaction with Lan – not just the letter grade – helped determine the level of commitment and motivation that ultimately led to Sean's overall score of a C and Lynn's overall score of a B+. In other words, I do not believe that the motivation provided by the letter grades was the only determiner of the class's different effects on the two students.

Looking deeper, beyond the letter grade on the students' papers, another difference emerges in the evaluation process of Sean's and Lynn's work. At the beginning of this results section, I mentioned that only two participants still said that they found teachers' comments important. Lynn was one of them; Sean had dropped his rating the most dramatically. While there is very little evidence that Lan had any dialogue with Sean on a personal level, there is much evidence that Lynn received personalized responses from Lan through paper evaluations, emails, and one-on-one conferencing. For example, when I interviewed Lynn, she said, "I talked with [Lan] a lot about [writing]. When he doesn't have time to go through my whole paper, my tutor does." In addition, Lynn said that she regularly communicated with Lan through email and in person. Lan confirmed Lynn's statement during his interview with me. In fact, he stated that he met with Lynn more than all of the other students from the class combined. From the evidence of this study, I believe that this discrepancy was the most significant teacher-controlled factor in the different outcomes between Sean and Lynn. The personalized feedback Lan provided Lynn gave her a distinct and powerful benefit of continued motivation throughout the semester. In contrast, the lack of personalized feedback Sean received left him at a disadvantage, with no motivational facilitators within his evaluations or through personal contact with Lan.

In Sean's case, Lan followed his grading philosophy closely. If he commented on Sean's work, the comments were limited to writing problems: task-oriented comments and circles around problem areas. Similar to the way Jackie, Sandra, Sarah, and Jack failed to recognize positive feedback on Lan's grading rubric, Sean failed to recognize these types of handwritten comments in the margins. During the last interview, Sean mentioned, "I didn't get any comments on my [fourth] paper. I guess cause I got a C on it." But, in fact, Sean did receive comments on his paper.
The comments he received are as follows:
Strengthen connection between ideas
Function?
Flow?
Is this previewed?
Could be improved to reflect a critique, [Sean].

Since Sean failed to recognize these marks and feedback as comments, it seems his idea of a response is something more personal. One could argue that Sean never read his comments; however, considering the focus of this research, I find that highly improbable. Nonetheless, if we consider the possibility that Sean does not recognize comments like "function" and "flow" as valuable or as having any motivators within them, then from Sean's perspective, nothing was said about his work as a writer. This is especially true if Sean doesn't know what "function" and "flow" mean. In other words, the response Sean perceived from Lan was silence. No feedback at all. In addition, Sean's remark that "I guess cause I got a C on it" suggests that he believes he is not a good enough writer to even deserve a response from his instructor. If my interpretation is true, then the silence that Sean believes he received on his earlier papers could be particularly damaging to his motivation. Not only did Sean receive more C grades, which he predicted, but he didn't receive any responses to his writing. Keeping in mind that Sean made this remark about the fourth paper, a paper he admittedly did not care much about, it raises the question of how he regarded the same lack of response on his earlier work, which he did care about.

Considering the content of Sean's first paper, a paper that focused on the personality of each student writer, the idea of a silent treatment from Lan is even more disturbing. In the first paper, Sean detailed his difficulty with his speech patterns. In his paper he wrote:

I have grown up with a stutter for as long as I was able to talk. I can usually tell which words I'm going to stutter before they're said. So in order to counteract this, I think of a different word to say that I won't stutter on. This has become second nature to me from so much practice with my disability.

Although Sean's tool to avoid stuttering has become second nature for him, his difficulty with speaking remains obvious. Moreover, on an individual conference handout prior to the submission of the first paper, on which Lan requires each student to list at least two questions, Sean's first question was: "The stuttering example seemed to stray. How should I work it in since it is a big part of my personality." However, what Sean sees as a disability is never mentioned in the comments left on his paper. The comments received on this paper are:
Reconsider topic sentence
Save this for the closing. Instead include a preview of your major points
Where does the quote end?
No title attempted
[Sean], Clearly, revising and editing your work for grammar and readability errors is a key area of improvement.

Since Sean may not recognize the feedback he received as comments on his writing, what does this silence say to a student who is describing his disability? Considering Sean said during an interview, "I'm horrible with making words flow. Like, I don't have a big vocabulary," he may consider his stuttering closely linked to his writing. Still, as we know, Sean's motivation to better his writing improved after this paper, which may have stemmed from a desire to receive personal comments and praise from Lan. Instead, Sean gave up as his grades plateaued at a C level and no responses were made about his writing.
In Lynn's case, Lan deviated from his evaluation philosophy by leaving responses that appealed to Lynn's emotions. For example, on the first paper, the same paper on which Lynn received a D+ and Sean a D, Lan had written, "P.S. Don't be discouraged; just learn from it." On the third assignment, Lan had added an extra + to Lynn's B+ letter grade and included the comment, "A very good job, [Lynn]." In contrast, while Sean believed his papers lacked comments, Lynn paid close attention to what was said on her work: "He even wrote at the bottom 'Don't be discouraged. Learn from it,'" Lynn told me during our one-on-one interview. Word for word, she quoted a post-script comment from her first paper three weeks earlier. She continued:

I was like, he understands that obviously [the D+] was going to be a hard hit to me, and that I actually care about this class, or any for that matter. But even that little word of encouragement, even though it was a terrible grade. And when he said, "don't be discouraged learn from it" You know, stuff like that helps. It's like, okay, I'm really upset about it but [Lan] thinks I can do better. So maybe I can, you know.

The greatest and most obvious distinction between what was written on Sean's and Lynn's first papers is that Lynn received a response that spoke to her as a writer and not just about her writing. Lan's words in the P.S. statement acted as a strong motivational facilitator and referenced a previous discussion about tutoring, which added positive value to her discouraging letter grade. On the pivotal third paper, Lynn again received comments that contributed to her motivation as a writer. In addition to the extra + in her B++ grade, Lan had written, "A very good job, [Lynn]." Again, Lynn was able to quote this comment during our one-on-one interview as she corrected my paraphrasing of it. Lynn said the comment alone made her feel "a lot better" and that it gave her a "boost of confidence," something that I argue Sean never received.

In addition to these comments, as I have previously mentioned, Lynn continued to receive personal feedback from Lan outside of the classroom and apart from the written evaluations. During our one-on-one interview, she told me that she frequently sought feedback from Lan through office visits and email. Because of this interaction, Lynn said, "I think he knew I was discouraged, and I was stressing over the class. I think he knew [responses] would boost me up and help me do better. Maybe he just pinpoints two people who need that." Unfortunately for Sean, he did not receive the same boost. He was not the other person. As a result, Lynn received the second-highest overall grade among the participants while Sean received the lowest.

While I concede that there are many variables that affect levels of commitment, motivation, determination, and success when it comes to students' outcomes in a writing class, it is clear to me that Lan's responses and engagement played a significant role in the semester outcomes of Sean and Lynn, two very similar students. At the very least, no argument can be made against the fact that Lynn had access to motivational facilitators that Sean did not. Consider for a moment how Sean might have reacted had Lan directly addressed Sean's negative beliefs with a positive comment in the margins of his third paper. Had Sean read a comment such as, "don't let another C discourage you," accompanied by a mention of a few successes, might Sean have had a different reaction to the second C letter grade found on his third paper?
Is it possible that Sean might have been motivated by a personalized comment? Had Lan been able to foresee Sean's dramatic drop in motivation, he might have taken the time to note the improvements Sean had made with the third paper. Instead, the only comments on Sean's paper are "Logical flow?", "Combine these two sentences," and "Why just athletes, [Sean]?", accompanied by circles around grammatical and mechanical writing concerns. While these comments may signal ways of improvement Sean should consider in his writing, they do nothing to convey the idea that Sean had the ability to actually improve, nor do they lend any facilitators of motivation.

Grades: How they motivated the participants
Most of the students in this study were more often motivated by avoiding a low letter grade, including a C, than by the positive reinforcement found in gaining a higher grade such as an A or a B. Other than Sarah, all of the students in the study seemed to align themselves with Jack when he said, "If I got a C or D, I'd put more effort in [writing]…. Maybe I need a C. I need a C to motivate me." In fact, the reward of an A paper may have hurt motivation levels considerably. In one instance, Jack had received an "A" letter grade but forgot to attach a blank rubric to his final draft, a mistake that caused his grade to drop to a B. Jack said, "I can't believe I had an A! That would have made it worse though. It's probably good I got a B again." In another example, Jackie, the only participant who received an A on the first paper, had a steady decline in scores for the rest of the semester.

One reason grades work in this way may be that students closely associate letter grades with defining labels of self, labels that often evoke feelings of fear and pain. In other words, the negative effects of a low grade are much stronger than the positive effects of a high grade. In the first interview, I asked the students what a grade means to them. All of the responses involved emotions. Lynn said, "I cried about this grade. I'm not gonna lie," and many other students admitted to becoming angry over grades. Jack simply said, "It's horrible." I then followed up by asking the students if our emotional involvement meant that a letter grade was something more than just a letter marking the value of the paper. Lynn, Kimber, and Sandra immediately answered "Yes," with Sandra adding, "it is all our hard work. You put everything in that paper. If it's not good enough, then you're not good enough." What Sandra is describing is a direct connection between a letter grade and the individual who earned it. After all, it is not rare to hear someone describe another person, or even themselves, as a "B student" or an "A student."

Take Victoria for another example: as a non-traditional student, she had her concerns about college-level writing, but she finished the semester with the second-highest grade in the class (shared with Lynn). However, Victoria had her ups and downs. During the second interview, after she had received a C letter grade, she said, "I have to reevaluate me as a student and figure this out… Cause I'm not that person." In fact, four other students closely associated a letter grade with either their person or their abilities as a student of writing. Jackie said, "I've always been a straight A student," Sandra called herself an "85 student," Jack saw himself as a B student, and, as previously mentioned, Sean frequently referred to himself as a C writer.
What these comments tell me is that when composition scholars discuss grades and evaluation, they are discussing far more than a method of measuring the success of a paper. We are, in many ways, measuring the value of the author of that paper. Therefore, in discussing the motivation behind a letter grade, we must also be aware that much of what we are discussing is the identity of students. Since students view grades as synonymous with identity, emotions such as fear, shame, and vulnerability, as well as pride and confidence, are also attached to the subject. In other words, grades act as powerful motivators tied to students' sense of self-efficacy.

In discussing the story of the participants of this study, I cannot ignore the fear aspect of their experience. Fear of grades came up in every one of the five interviews. Sandra said that the first paper "really scared" her, as she initially thought the writing for the semester was going to be "easy" until she saw her first grade. Kimber added, "I was scared through the whole thing." However, the best example of how fear was rooted in the participants can be seen in Jackie. Jackie found the semester troublesome to the point of tears during the last group interview. She said, "I was so scared I was going to fail this class. When I got my 90 percent, I was really surprised." Although Jackie received the A letter grade on her first paper, her fears did not subside. A moment later she said, "I'm scared about the [final] paper." This comment exhibits how, as the semester progressed, Jackie's fears grew as her grades tapered down to a 75 percent. It was with this C letter grade that Jackie began to cry during the interview. She said, "I wanted to be a better writer, but it makes me feel that I'm worse." As her grades slipped, Jackie began to believe that she was becoming less capable of writing well. She described feeling as though she no longer knew how to write, even though she had been a self-described "A student" prior to the semester. By this point in the study, Jackie had received one A, two Bs, and one C, yet these grades were enough to scare her into believing that she no longer knew how to write. Just before she broke down she said, "I don't know how."

What this evidence shows is that grades, to Jackie, were more than extrinsic motivators. An A letter grade is different from a paycheck, a prize, or a high score on the familiar game application Candy Crush. Since students attach fear to receiving grades, grades become more significant. Generally speaking, one does not normally say they are scared to receive a paycheck, fear losing a prize, or are horrified by scoring lower than average on a video game. The results of this study show that grades are a significant factor in branding our students. Therefore, the social connotations of grades that Boyd identifies are still alive and well.

The reason that students feel fear with grades may be that they see grades as a significant part of their identity: grades label their performance on papers that contain a part of themselves. Consider what Jack had to say on this subject:

[I]n a composition class, if you have great information, you are not necessarily gonna get a good grade. I guess if you are just yourself you will get a D or whatever, but still you have to be yourself more than any other class. I beat myself up. I'll beat myself up over an English class more than any other class.

Jack's words show another example of merging self with a letter grade.
In this instance, a sadder image is displayed, as Jack believes his person warrants a low letter grade, which suggests that Jack has to perform a different self through his writing in order to become the "B student" he referred to. His perceived self is a "D student." Yet Jack also believes that composition instructors look for genuineness and authenticity in writing, which creates a "to be or not to be" paradox for the student writer. It seems to suggest that when Jack receives a low grade on a paper, he believes he exposed too much of himself in his writing. A letter grade of D marks Jack; a letter grade of A marks some other self. Therefore, composition grades are not only increments of value used to establish or rate performance levels; they are also an important factor in students' sense of identity and ability, especially with lower letter grades.

Furthermore, students frequently compared lower letter grades to a sense of physical pain. Jack's mention of how he "beats" himself up over composition grades prompted me to look for other statements that were similar. Victoria had mentioned that she was "kicking herself" about performing at a lower-than-desired level. She also mentioned that the grade of a C "kicked [her] down pretty hard," and that it felt like a "kick in the gut." In addition, five students used the phrase "it hurts" to describe the effects of letter grades. With this said, it should come as no surprise that students are motivated more by avoiding lower grades such as an F, a D, or even a C than by receiving an A. The pain associated with lower-level performance is a greater motivator than the reward of a higher letter grade.

Communication: Differing Perspectives of Evaluation
Overall, there was a serious misunderstanding between Lan's intentions with evaluations and the students' perceptions of those evaluations. I have already discussed some of these issues in regard to the rubric; however, there are other communication and perspective issues to consider. For example, during my post-semester interview with Lan, he said, "I find my students pretty much driven by grades and scores." Through his teaching experience he has accumulated much evidence for this conclusion, including his own experience with evaluations as a student. However, as discussed above, the students' drive in my study seems to be much more complicated.

From Lan's perspective, evidence of students' drive to perform for a grade is overwhelming. For example, Lan noted how two weeks before the end of the semester "half the class" emailed him wondering how close they were to "getting an A or a B." In addition, Lan remembered when one student from the study received a B on the first paper and then a low C on the second paper. He said, "this person was almost in tears. I had to make a point to come into the office and talk to her for an hour on the phone, and half that conversation was 'settle down, it is going to be ok.'" Furthermore, Lan's own experience as a student adds to his philosophy. He said comments he had received as a student didn't mean that much to him: "I would just look for a grade. Then I would look back at the comments." If he got a B+ on a paper with comments like "Nice observation" or "Well stated," the comments would only confuse him. The comments "seemed like a joke," which would lead him to ask, "How can I get all this praise and then get a B+?" Lan has also seen this same response from his students.
He said, "Before I started using grading rubrics, by and large, I would take time on a paper to write two or three things about why I felt they did really well. Then I would write major areas of improvement. Then I would give it a score." With this evaluation method, students would tell him, "If I'm doing all of these things so well, why am I getting a C? If you took a paragraph to write about what you like, and a paragraph to write what you didn't like, doesn't that get a B?" This response, compared with his own reaction to evaluations as a student, lends a great deal of support to Lan's evaluation philosophy.

His perspectives on evaluations have led him to incorporate a grading rubric. He mentioned during our interview that he realized that the positive-comment approach "really didn't give [students] a whole lot of insight or direction or specific guidelines on how to improve their writing," implying, perhaps, that the rubric does give insight and direction. Furthermore, since implementing the rubric, Lan has never had "students disagree with their grades," something he said he had spent a lot of time dealing with before. Lan's conclusion, then, was that the rubric was successful. "It feels like it breaks it down for them," he said. From his perspective and accumulated experiences, his philosophy seems logical.

On the other hand, Lan has observed that "there are some disadvantages to the rubric." For example, he described two scenarios: "You could have a student that has really thoughtful ideas and has the ability to manipulate the English language, but has twelve comma splices." In addition, "with introductions especially, I'm looking for it to have certain things, and sometimes it works without those parts listed on the rubric. Then I have to make a decision." Other than those drawbacks, Lan seems confident with the use of the rubric as an evaluation method.

Lastly, Lan mentioned during the interview an aspect of value that I find particularly interesting in regard to writing assignments. When describing the phone call with the student in tears, he said that he let the student know that a paper was "just a snapshot" that doesn't reflect who a student is as a writer.

Comparing Lan's perspective on evaluation and value to the perspectives I gathered during the study, there are apparent differences. First, the student "drive" Lan mentions is far more complicated than a drive for grades alone. Motivational facilitators come in many forms, as seen in Sean, Lynn, Jack, Victoria, and even Sarah. Second, the rubric does not seem to offer insight or direction the way Lan believes, as almost all of the students in the study seemed at best somewhat confused by the rubric. Third, evaluations meant far more to the students than a justification of a grade, as seen in the reactions of all the students mentioned here. Finally, the students rarely, if ever, saw an evaluated paper as "just a snapshot." These final two discrepancies can be observed in the tears, frustration, and physical pain students associated with evaluations of a single paper assignment.

Overall, the most striking finding in this study is that the miscommunication and disagreement over how the evaluation process is supposed to work, or is working, are mostly avoidable. While most of the students in the study seemed to wonder why a certain evaluation method and score happened to them, Lan believed he was providing his class with a significant tool for improvement.
From what I observed during the study, only three students – Lynn, Sarah, and Victoria – made the effort to communicate further about their evaluations in order to gain a better sense of how to use them. Although Lan seemed to greatly appreciate these students' efforts, he made little to no extra effort to communicate with students like Sean or Jack. In fact, his evaluation methods, which aimed at grading rather than responding, may have discouraged other students from making the same efforts Lynn, Sarah, and Victoria made. Moreover, in some ways it seemed that the greater the silence from students, the more successful the evaluation method was deemed by the instructor. The three students who did make the effort to communicate better with Lan earned the top three grades for the semester.

Conclusion
This study found evidence that responding to students' writing plays an integral part in writing development, as it is one of a number of motivational facilitators that teachers have at their disposal that may help determine students' success. Responding to students through the context of their writing is a powerful way to facilitate motivation through evaluations. However, teachers of writing do not always consider the rich and complex purpose of personalized and informative response. Much like Sommers, through this study I found that a lack of thoughtful responses limits students' experiences with writing exercises and reduces the potential for any contribution to motivation, be it controlled or autonomous. Most of the participants of this study showed me that they were missing detailed information, personal engagement with their writing, and a discussion of their authorship in the evaluation process. The instructor's evaluation methods – brief directive comments mainly contained within a rubric – did not facilitate much in the form of information or motivation. Unless they made the initial effort to seek help outside of the classroom, the students in this study were left to meander through writing difficulties with very little personalized direction. The letter grades, both the overall grade marking the value of the paper and the letters marking each category of the rubric, were the main providers of information, which is possibly the lowest form of motivation evaluations can offer.

On a more personal and anecdotal level, there is an even greater lesson to be learned. After my first interview with the students, an interview that took 30 minutes of my time, I felt I had learned a wealth of information that was pivotal to the students' success. I was excited by the information I gained. I was so excited that for a brief moment I wanted to share all that I had learned with Lan in order to help him teach his class about their own writing. I weighed the consequences of undermining my own study in order to help Lan address the participants' concerns. I thought that if Lan knew what I knew, he would have an extraordinary tool of insightful information, which I believed was crucial to the students' education. I decided that there was no ethical way for me to inform Lan about the results of the interview. However, from the information gathered in the first interview, I concluded that listening to students discuss their concerns with evaluations provided information that I did not want to teach without. Overall, I discovered that 30 minutes devoted to listening to students talk about evaluations may be the best investment a teacher can make during a single semester.
Therefore, I argue that we communicate with students about our evaluation methods and not just through them. By communicating with students, teachers will undoubtedly discover helpful information regarding how, and how not, to evaluate students' writing. As each semester brings a new and unique mix of student writers, we must have contingencies for modifying our evaluation methods. Students like Jackie who wanted "more positive stuff" may need more praise and reward, as they have been conditioned to expect it. Students like Sean may need more personalized attention, as they may want to improve from being a "C writer" but have what they take as solid evidence from teachers that they cannot. Students like Sarah may be steeped in autonomous motivation and will do well regardless of how we evaluate. Students like Jack may be oddly aware of how lower grades challenge them to work harder and may begin to slack after receiving an A. These results are likely only the tip of the evaluation iceberg.

This is not to argue that each student requires a completely different style of evaluation – although individualized attention is crucial – but to argue that studies like mine are valuable to the extent that they show what teachers can learn by listening. Motivation comes in all forms, and we may have a plethora of ways to facilitate it. Although autonomously motivated students – like Sarah – may be the ideal, it may not be possible to instill such characteristics in one semester or year. Instead, I argue – like Shaughnessy decades earlier – that we become students of our students, particularly within the conversation about evaluation and motivation.

There are a number of ways teachers can start this conversation in a writing class. The first is to do what I did with this study. The day of or the day after grades are returned to students, teachers can address the topic as a classroom discussion or during individual conferences, preferably near the start of the semester. If a class discussion is preferred, I recommend that teachers switch classes with a colleague to interview the students. Doing so might provide students a more comfortable setting and may curb the potential for awkward confrontations. Interviews with stand-in teachers can be designed to keep student replies anonymous, which may help students open up about the topic of evaluation and thereby provide sincere feedback to teachers.

The second is to survey the class or portions of the class as Kristin Kogel Gedeon discusses in her essay "All I Did Was Ask: Communicating with Students about Their Writing." Gedeon calls this method "teacher-research," which has a nice Shaughnessy-like ring to it. In these surveys, teachers can create their own set of questions that interest them or that apply to issues particular to the class. For example, Gedeon uses a direct approach with questions such as, "What kind of responses do you want from me?" and "Do you think my responses will help you revise?"

The third is to assign short written responses to evaluations. Much like Elbow's idea of assigning students to write a cover letter describing their writing experience during the project, teachers can assign a response to evaluations as an essay. Denice Blake advocates this technique. Blake has her students write a response to her responses, which allows her students to agree, disagree, or "expand" on her remarks (88).
In addition, teachers can direct students to copy one or two responses that they believe are critical to the evaluation and then paraphrase them. Students can also be directed to describe what actions they will take to address the response. The content of these essays will help the teacher gauge how well a student understands the responses, and it may show how well the teacher's evaluation philosophy is being received by the class.

The fourth technique or suggestion is to assign a Reflection on Evaluation essay. Much like a literacy narrative, teachers can assign students to write about their past experiences with grades, comments, and/or responses. This essay might prove to be extremely helpful, especially if it is assigned near the start of the semester. As teachers are still learning about their students in the early stages of a semester, this short essay may provide some eye-opening information. Teachers may learn who each student's previous writing instructors were, what strategies those teachers used, which of those strategies were effective and which should be avoided, and what misconceptions about evaluation might be best addressed through class discussion or one-on-one conferencing. For example, through this essay, a teacher (as I did) may discover that a student in the class has been homeschooled and has some anxiety about what it means to have an essay evaluated by someone other than her mother. With this information, a teacher can explain and assure the student that evaluations are meant to be helpful, not critical.

The fifth is to follow Spandel's advice to have students design grading rubrics. Defining what constitutes good writing is difficult for anyone and makes for a challenging task for students. When students design a rubric, they may become more aware of the questions they have about writing and what a teacher looks for in writing. Students may even take ownership of the standards listed in a rubric if they believe they took part in its design. In any case, students will become more familiar with the theory and genre of the grading rubric.

Another approach is to design a unit on grading and evaluating students' writing. During this unit, students can do any or all of the suggestions above, including grading anonymous papers, or, as in my case, evaluating the teacher's writing. This is where I have found success, as I mentioned in the introduction. By spending a couple of weeks reading, writing, discussing, researching, and engaging through other multimodal avenues, classes can take part in real, knowledge-based debates about what it means to evaluate and to have writing evaluated. The objective of the unit is to write a full essay on the topic of evaluation. For example, I assigned the students to write an essay describing the problems of college-level composition evaluations to a panel of English faculty members. Their goal was to provide an accurate and fair assessment of common evaluation methods they had experienced and ways they thought evaluations could be improved to help them become better writers. After the assignment, the class and I discussed their results, and I implemented many of their suggestions into my evaluation methods. By communicating our goals through evaluations in these ways, and by listening to students' concerns about receiving evaluated papers, we create a more transparent learning environment, we lower the chances of harmful miscommunications within evaluations, and we build a better community of learners within the classroom.
These practices may also reduce students’ desire to contest grades, which, as this study shows, can be a real concern for teachers.

Although part of me wishes that my thesis would change our grade-centered culture into a learning-centered culture, I concede that I am relatively powerless when it comes to redesigning the western world’s addiction to grade point averages. Instead, within English departments, a smaller, more winnable battle can be fought over how we engage in evaluations with student writers. I believe composition teachers already have control over the aspects of evaluation that can make a difference in how students value the evaluation process, the most important being communication. Therefore, without redesigning the wheel, wobbly as it may be, we can make minor changes that may have a large impact on the motivation of student writers.

Through this study, I have found that students have a basic desire to focus their attention on learning, but in many ways we have taught them out of it by placing our own emphasis on the grade. By recognizing how grades, comments, and responses communicate with students, we can begin to refocus students’ attention on learning to write. Although grades maintain a strong grasp on the student psyche, careful consideration and well-written, informative responses, designed through classroom discussions, may condition students to find more of their motivation in feedback. In fact, they may already value our responses to their writing over the letter grade; we would do well to show that we feel the same.

Works Cited

Blake, Denice A. “Have They Read What I Said? The Final Check.” The English Journal 83.4 (1994): 88–89. JSTOR. Web. 1 May 2014.

Boyd, Richard. “The Origins and Evolution of Grading Student Writing.” The Theory and Practice of Grading Writing: Problems and Possibilities. Ed. Frances Zak and Christopher C. Weaver. New York: State University of New York Press, 1998. 3–17. Print.

Brodkey, Linda. “Writing Permitted in Designated Areas Only.” Higher Education Under Fire. Ed. Michael Berube and Cary Nelson. New York: Routledge, 1995. 214–37. Print.

Brown, Brené. Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead. New York: Gotham Books, 2012. Print.

Deci, Edward, and Richard M. Ryan. “Extrinsic Rewards and Intrinsic Motivation in Education: Reconsidered Once Again.” Review of Educational Research 71.1 (2001): 1–27. Academic Search Complete. Web. 1 May 2014.

Deci, Edward, and Richard M. Ryan. “Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions.” Contemporary Educational Psychology 25 (1999): 54–67. University of Rochester. Web. 1 May 2014.

Deci, Edward, and Richard M. Ryan. “Self-Determination Theory: A Macrotheory of Human Motivation, Development, and Health.” Canadian Psychology 49.3 (2008): 182–185. Academic Search Complete. Web. 1 May 2014.

Doncan, Tony N. “Positive and Negative Incentives in the Classroom: An Analysis of Grading Systems and Student Motivation.” Journal of Scholarship of Teaching and Learning 6.2 (2006): 21–40. Academic Search Complete. Web. 1 May 2014.

Elbow, Peter. “Ranking, Evaluating, and Liking: Sorting out Three Forms of Judgment.” College English 55.2 (1993): 187–206. National Council of Teachers of English. Web. 2014.

Gedeon, Kristen K. “All I Did Was Ask: Communicating with Students about Their Writing.” Language Arts Journal of Michigan 25.1 (2009): 50–57. Web. 2014.
Haswell, Richard. “The Complexities of Responding to Student Writing; or, Looking for Shortcuts via the Road of Excess.” Across the Disciplines 3 (2006): n. pag. Web. 1 May 2014.

Ketter, Jean S., and Judith W. Hunter. “Student Attitudes toward Grades and Evaluation on Writing.” Alternatives to Grading Student Writing. Ed. Stephen Tchudi. Urbana, Illinois: National Council of Teachers of English, 1997. 103–121. Print.

Knoblauch, C. H., and Lil Brannon. “Teacher Commentary on Student Writing.” Key Works on Teacher Response. Ed. Richard Straub. Portsmouth, NH: Boynton/Cook, 2006. 69–76. Print.

Kohn, Alfie. Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes. Boston: Houghton Mifflin Company, 1999. Print.

Odell, Lee. “Responding to Responses: Good News, Bad News, and Unanswered Questions.” Encountering Student Texts. Ed. Bruce Lawson, Susan Sterr Ryan, and W. Ross Winterowd. Urbana, Illinois: National Council of Teachers of English, 1989. 221–235. Print.

O’Neill, Peggy, and Jane Mathison Fife. “Listening to Students: Contextualizing Response to Student Writing.” Key Works on Teacher Response. Ed. Richard Straub. Portsmouth, NH: Boynton/Cook, 2006. 190–203. Print.

Pink, Daniel. Drive: The Surprising Truth about What Motivates Us. New York: Penguin Books, 2009. Print.

Smith, Sommer. “The Genre of the End Comment: Conventions in Teacher Responses to Student Writing.” College Composition and Communication 48.2 (1997): 249–268. Web. 1 May 2014.

Sommers, Jeffery. “Enlisting the Writer’s Participation in the Evaluation Process.” Key Works on Teacher Response. Ed. Richard Straub. Portsmouth, NH: Boynton/Cook, 2006. 328–335. Print.

Sommers, Nancy. “Responding to Student Writing.” Key Works on Teacher Response. Ed. Richard Straub. Portsmouth, NH: Boynton/Cook, 2006. 287–295. Print.

Speck, Bruce W., and Tammy R. Jones. “Direction in the Grading of Writing? What Literature on the Grading of Writing Does and Doesn’t Tell Us.” The Theory and Practice of Grading Writing: Problems and Possibilities. Ed. Frances Zak and Christopher C. Weaver. New York: State University of New York Press, 1998. 3–17. Print.

Sperling, Melanie, and Sarah Warshauer Freedman. “A Good Girl Writes Like a Good Girl: Written Response to Student Writing.” Key Works on Teacher Response. Ed. Richard Straub. Portsmouth, NH: Boynton/Cook, 2006. 112–134. Print.

Straub, Richard. “Students’ Reaction to Teacher Comments: An Exploratory Study.” Research in the Teaching of English 31.1 (1997): 91–119. Web. 2014.

Tchudi, Stephen. “Introduction: Degrees of Freedom in Assessment, Evaluation, and Grading.” Alternatives to Grading Student Writing. Urbana, Illinois: National Council of Teachers of English, 1997. Print.

White, Edward M. “Bursting the Bubble Sheet: How to Improve Evaluations of Teaching.” Chronicle of Higher Education 47.11 (2000): n. pag. Academic Search Complete. Web. 2013.

Zak, Frances, and Christopher Weaver, eds. The Theory and Practice of Grading Writing: Problems and Possibilities. New York: State University of New York Press, 1998. Print.

Appendix A

The purpose of this survey is to learn how college-level writers feel about their own performance on writing assignments, what level of importance they place on teachers’ feedback, grades, and evaluations, and whether they are interested in being part of a further, more detailed study. The results of this survey will be compiled into a research article to be submitted as a graduate-level thesis at Eastern Illinois University. The research article may also be submitted for publication. Your participation is completely voluntary, and your identity and organization will be kept confidential. By completing this survey, you are giving your consent to participate in the study. The survey should take roughly 10 minutes to complete.
Beginning Semester Survey

1. I am a strong writer.
   Agree 1 2 3 4 5 6 Disagree
2. I am comfortable with writing college-level papers.
   Agree 1 2 3 4 5 6 Disagree
3. I am anxious about beginning this writing class.
   Agree 1 2 3 4 5 6 Disagree
4. What grade do you believe you will receive in this class?
   F D C B A
5. Teachers’ comments are important to my writing.
   Agree 1 2 3 4 5 6 Disagree
6. How important to you is your grade in this class?
   Not Important 1 2 3 4 5 6 Extremely Important
7. Are you interested in being interviewed periodically during this semester?
   No / Yes

Appendix B

The purpose of this survey is to learn how college-level writers feel about their own performance on writing assignments, what level of importance they place on teachers’ feedback, grades, and evaluations, and whether they are interested in being part of a further, more detailed study. The results of this survey will be compiled into a research article to be submitted as a graduate-level thesis at Eastern Illinois University. The research article may also be submitted for publication. Your participation is completely voluntary, and your identity and organization will be kept confidential. By completing this survey, you are giving your consent to participate in the study. The survey should take roughly 10 minutes to complete.

Ending Semester Survey
Summer 2013

1. How much has your writing improved over the semester?
   No Improvement 1 2 3 4 5 6 Much Improvement
2. How did your grades on each paper make you feel about your writing?
   Bad 1 2 3 4 5 6 Good
3. How helpful were the instructor’s comments in improving your writing?
   Not Helpful 1 2 3 4 5 6 Very Helpful
4. How strong of a writer are you now?
   Weak Writer 1 2 3 4 5 6 Strong Writer
5. How comfortable are you writing college-level papers?
   Not Comfortable At All 1 2 3 4 5 6 Completely Comfortable
6. What grade do you believe you will receive in your next writing class?
   A B C D F
7. How important are teachers’ comments on your writing?
   Not at All 1 2 3 4 5 6 Very Important