About this booklet - AHRECS



Contents

1.0 About this booklet
2.0 Case Study: May I video the class?
3.0 Case Study: Do you like my students' work?
4.0 Case Study: How do you use data on students?
5.0 Case Study: We lied to our students
6.0 Case Study: Yes, Deputy Vice-Chancellor
7.0 Case Study: But what if it didn't work?
8.0 Case Study: How do you like our MOOC?
9.0 Case Study: What's in it for me?
10.0 Case Study: Evading review
11.0 Case Study: Life in the virtual world
12.0 About the contributors
13.0 References
14.0 Acknowledgments
15.0 Glossary

1.0 About this booklet

In this booklet, we offer a collection of case studies of research in the scholarship of learning and teaching (SoTL). For some case studies, we have added expert commentary from experienced researchers; we have left others as works in progress so that, over time, we can develop further examples and commentary.

Case-based approaches encourage ethical practice among researchers. By stimulating moral imaginations, they can help researchers recognise ethical issues, develop analytical skills, respect the points of view of other people, take personal responsibility for decision-making and negotiate with regulatory bodies. Our reason for offering these case studies is to address an apparent lack of experience with research ethics review.

Unlike the other booklets in this collection, you need not read every case study. Instead, we invite you to read the summaries and go into more depth in the areas that interest you, as well as those you are more likely to encounter.

The scholarship of learning and teaching (SoTL) is a relatively young area of study. Like many emerging interdisciplinary fields, there may be gaps in the areas that it has addressed, a restricted sense of collective endeavour, and relatively narrow relationships with cognate fields of study.
In addition, SoTL has to cope with emerging frontiers as scholars grapple with new forms of teaching and learning, changes in the higher education landscape and emerging methodologies, as well as newer challenges faced by any researcher dealing with matters such as big data, internet-mediated research, human-computer interaction, and globalisation and transnational networks (Israel, 2015).

SoTL's limited history of engagement with questions of research ethics has not gone unremarked in some countries. In the United States, Wilson (2008) called for more guidance on ethical issues to help SoTL researchers. In 2004, Burman and Kleinsasser noted that 'There is scant literature directly addressing ethical practice in SoTL' (p. 68). Writing in the Canadian context in 2013, Stockley and Balkwill discussed their encounter with SoTL researchers at a Canadian conference 'who, in our opinion, were completely unaware of research ethics guidelines that clearly applied to their work'. It should be noted that the Canadian approach to research ethics, through national guidelines and local review, closely mirrors the Australian experience.

There may be several reasons for this collective failure to engage. First, this is a field with limited exposure to the more intensive forms of research ethics review. Most significantly, in the United States many forms of educational research have been eligible to be declared exempt from review by full Institutional Review Boards (IRBs) (DeVito, 2010). This has included work relating to regular instruction, comparison of educational methods, educational testing, anonymised surveys, anonymous interviews, and anonymous and public behaviour, as well as studies employing existing and publicly available or anonymous data.
In some countries, such as Australia, the United States and the United Kingdom, where research ethics review is only mandated for institutions that wish to be eligible for research council grants, tertiary institutions that are concerned only with teaching may not have review processes in place. This may pose problems for empirical scholars seeking to publish in journals that require evidence of research ethics review as a precondition for publication.

Of course, limited review structures should not, and have not, eliminated the need for researchers to reflect on the ethics of their work. However, they may have reduced the need to document and justify decisions in order to meet the requirements of established institutional review. Exempting research from full board review has also meant that fewer members of IRBs in the United States have turned their attention to the ethical questions that arise in this particular field.

Even where research ethics review processes have encompassed SoTL, some research ethics reviewers may not be well versed in SoTL practices. Few institutions have centres dedicated to the study of higher education, and so SoTL researchers are less likely to be deliberately recruited to central or discipline-specific research ethics committees. Instead, SoTL researchers may have to contend with reviewers who sometimes erroneously overgeneralise on the basis of generic approaches to research ethics. For example, Kanuka and Anderson claimed that Canadian scholars employing qualitative approaches 'to investigate e-learning have discovered that the application of existing ethical guidelines can sometimes result in confusion and uncertainty among both researchers and ethics review board members' (2007, p.
2).

It is a common complaint among social scientists that research ethics committees' lack of understanding may compromise long-standing methodologies without achieving any additional benefit or protection for participants (Sikes and Piper, 2010). For example, Grayson and Myles (2004) argued that changes imposed by research ethics boards on surveys of Canadian students had had a detrimental impact on response rates, hypothesis testing and research costs. Stockley and Balkwill (2013) suggested Canadian SoTL researchers saw the process of seeking ethics review as 'too confusing, too time-consuming, too onerous and/or not applicable to them' (p. 2).

Secondly, some SoTL researchers enter the field from disciplines with little familiarity with human research:

there are academic staff who have very limited experience or awareness of the role of human ethics in research into learning and teaching, perhaps because their own discipline-based research training did not involve human subjects. (Chang & Gray, 2013, p. 150)

For example, following a review of empirical research by law teachers in the United States, Scott DeVito (2010) concluded that since law 'professors have little experience with empirical research; professors have little background knowledge about the ethics of human research' (p. 292). It may also be true that some teachers enter SoTL without experience of research in their home discipline, or even that the routine practices of teachers' home disciplines might appear methodologically or ethically questionable when applied unreflexively to SoTL (Holcomb, 2002). For example, Gale noted that employing a 'control group' might be common in biology or biomedicine but, in an educational setting, 'might compromise or distort the students' experiences' (2002, p. 39).

Thirdly, it is possible that principlist approaches to research ethics, the basis for most national and international guidelines on research ethics, are particularly unhelpful when dealing with the kinds of power relations that SoTL researchers regularly confront in the context of higher education institutions. One of the added complexities associated with research in SoTL is connected to the relationships that scholars have with their students, colleagues and institutions, and so: '… in researching one's own teaching practice in ways that include students and their work, it is impossible to separate the research from the relationships' (Galguera, 2002, p. 56). Principlist approaches may also run counter to the dispositions of those researchers with commitments to, among other things, participatory, collaborative, critical, postmodern, and feminist research methodologies.

Researchers in SoTL also encounter ethical challenges that are specific to the context within which they operate. They may have to confront these challenges with limited support from the broader SoTL enterprise, with limited expertise on ethics review bodies and with little experience in this form of human research. In her introduction to one of very few collections on this topic, Pat Hutchings (2002) invited SoTL researchers to prepare themselves for the ethical challenges that were bound to arise in their field: 'that ethical issues should arise in the scholarship of teaching and learning is not to suggest that something is amiss. On the contrary, attention to ethics is something we expect as a field of study or practice evolves and matures' (p. 2).

In this booklet, we add to the existing narrow range of SoTL-specific materials by constructing case studies and, in some cases, inviting researchers to provide expert commentary.
Some of these cases do pose difficult problems. Few SoTL researchers will encounter all of them, and many researchers may find that the research ethics review process helps them refine their research plans and anticipate ethical challenges that they may encounter in their research. Indeed, the point of the case studies is not to scare SoTL researchers but to help them identify and work through these matters before data gathering starts, so that they are better prepared to grapple with challenges in the field.

The strategy for creating the cases has been straightforward and follows a pattern we have used in earlier work. In Israel and Hay (2006), for example, the authors explained the nature of the project to a group of social scientists and ethicists from a range of disciplines and jurisdictions and invited them to help construct and comment on one case. Once Israel and Hay had crafted the responses into a set, authors were invited to edit their contributions.

Our requirement for brevity compelled these experienced commentators to identify and focus on key matters … rather than working stolidly and step-by-step through a routine of identifying issues and options, considering consequences, evaluating options in terms of moral principles, deciding and acting with commitment and evaluating oneself and the systems of which one is a part, we reflect the sometimes conflicting or different emphases given by our commentators … in some instances, there may be equally compelling justifications for different responses to identical situations. The cases also point to the value of discussing critically those ethical dilemmas we encounter, to clarify our own thinking and gain insights to alternative moral positions (2006, p. 146).

2.0 Case Study: May I video the class?

The researchers wanted to use participant-generated video to explore the experience of design students at university.
Students would be invited to use a camera to record aspects of their life at university. The researchers were keen to ensure students did not feel coerced to participate, or to falsify or distort data to appease university staff or the researchers.

The researchers agreed that: no student involved in video data gathering would be identified to the faculty; students would be paid for the work that they were doing; the final report would be distributed only to senior staff in the faculty, so that no lecturer could guess the identity of the students; and lecturers would be notified in advance of the study and of the possibility that students might approach them for permission to interview them or film in classes.

Should the identity of those who are filming and those who are being filmed be protected and, if so, how would you do this?
Whose consent might be needed?
Should students be paid for this work and, if so, at what rate?
What might the impact be on students who were unwilling to be video recorded in class? How might you minimise risks to them?

Keywords: participant-generated video; anonymity; consent; compensation; insider research.

This scenario is adapted from: O'Toole, P 2013, 'Capturing undergraduate experience through participant-generated video', The Qualitative Report, vol. 18, no. 66, pp. 1–14.

To what extent do you agree with the following analysis?

There has been renewed interest in videoethnography as a research tool within the social sciences (Heath et al., 2010). It offers an opportunity to give voice to youth (Wood and Kidman, 2013) but raises some interesting ethical issues.
Scholars may need to: identify the potential uses to which images may be put; negotiate whether participants will be offered anonymity; consider whether to obtain consent from incidental people who enter the frame; negotiate access to those parts of the institution controlled by professional groups that are not participants in the study; and maintain data security and control secondary use of data. The 'challenges of doing visual ethnography that is ethical and appropriate should [be seen as] opportunities to work with participants and others to create ways of working ethically and appropriate' (Pink, 2013, p. 69).

Who consents?

The recordings are likely to incorporate three groups: the participants who agree to generate the video; other people whose interactions are relevant to the experiences being documented; and other people who might be caught on camera as an incidental part of the filming. The nature of the consent process for each group is likely to be different and, indeed, consent may not be necessary at all for the last group.

In the study that formed the basis for this hypothetical, O'Toole (2013) used videoethnography to explore the experiences of Australian science students. She decided that 'Uninvolved bystanders – members of this group were those who were captured by the student-researchers on digital media, but who were incidental to the study' would not be part of data analysis. She argued that their consent was not required as 'These people were not recruited nor invited to be part of the research study' (p. 5).
In other observation studies, some researchers have chosen to inform users of a site that filming is occurring, that they might be caught on camera, and that the film is likely to be used in particular ways.

Harms and benefits

Within videoethnography, it is often assumed that participants should not be identifiable, and some research ethics committees have mandated re-anonymisation (Wood and Kidman, 2013). However, this may be neither possible nor desirable. Instead, the question of whether participants might be identified should be seen as an outcome of negotiations around consent. This will be influenced by an understanding of the possible harms and benefits that might flow from identification, as well as an understanding of how these possible harms and benefits might be distributed over the student cohort and among academic staff.

It may make a difference to the class and to lecturers whether the recording lasts for a small part of one lesson or for a whole semester. People who do not want to be recorded may have little issue with moving for 30 minutes to an area of the room that is not being filmed. Quite understandably, they may be more troubled by being shunted into the corner and told not to move for an entire semester. If the recording continues over a longer period and includes more public spaces, there may be a higher probability that people who have not provided consent will enter the frame.

Empowering participants

The researchers should consider discussing all these issues with participants, raising those matters that concern them and seeing what concerns participants raise. This is exactly what O'Toole (2013) did, and the conditions that we have described in the hypothetical scenario largely stem from her discussions with students.

Negotiating consent with students should be about empowering participants and respecting their decisions collectively and as individuals.
Acting as a commentator on this case study, Gary Allen suggested:

To mitigate against peer group pressure, the researcher might reinforce that participation is completely voluntary, that individuals should email her separately to express their wishes with regard to the video, and that she won't disclose what individuals have decided. Then she might help participants set up the room and the 'frame' of the recording so it only includes those who have given their consent to be recorded. Such a discussion could be a powerful modelling of how to approach consent, a good way to demonstrate respect in practice, and should make it clear that the researcher is serious about respecting participants' wishes. Once there is a rough edit of how the excerpts might be used, it might be appropriate to reconfirm the consent of the students and lecturers who actually feature in an excerpt. (Gary Allen)

Gary Allen's comments indicate the possible value of enabling a dynamic and continuous negotiation of consent, including one that might allow participants to withdraw from the study (i.e. so that no further recordings of them would be made from that point).

Disseminating results

The study described in this hypothetical might offer a number of benefits to future students, but the way that it has been set up might limit the possibility of some of these being realised. For example, the recording and report might be a valuable resource for training purposes and a strong candidate for publication. The experiences of these students might turn out to be worth documenting and publishing. If this is a possibility, the methodology, consent and plan for publication may need to be altered. So, it would be sensible to think carefully through these possibilities before setting up research that could never be disseminated beyond a small group of people.

It is important to delineate between the video recording and the report or any other outputs arising from the work.
It may be possible to remove personally identifying information from the report, which might allow a wider distribution. However, researchers ought to consider the potential for identification by inference and especially internal identification (Tolich, 2004).

In addition to consent for the research, if there is a plan to use excerpts of the recording in a conference presentation, resource or other output, participants may need to sign a 'creative release' form. Universities tend to create such forms for the use of images of students in marketing materials, and the form should be available from a university's central marketing or media department.

Paying compensation

Finally, many researchers offer to pay compensation to participants for the time that they spend on a research project. It is important that the amount offered is not so high as to make it difficult for participants to refuse. Dickert and Grady (1999) argued research participants might be offered a just and fair wage comparable to the amount that they would receive for similar work elsewhere. Some students may spend large amounts of time on campus and may also take on part-time jobs to pay for their studies. O'Toole decided to pay an hourly rate to science students, calculated on the basis of the amount of film footage that they submitted to her. However, it took some time to agree the rate with the HREC, and this delayed the start of her project.

3.0 Case Study: Do you like my students' work?

At a scholarship of teaching and learning conference, an educational developer intended to demonstrate a new technique for student assessment.
She wanted to concentrate on the teaching techniques rather than empirical research, but did want to show the audience actual examples of student work (exam scripts, essays and oral presentations) from her institution and how these were handled by her learning management system.

Is this human research?
Is HREC approval required?
Do students need to agree to allow their work to be used for this purpose? If so, how might you avoid the possibility that they might feel coerced into participating?
What risks might need to be mitigated, and how? Does it matter whether the examples are of strong or poor student work?

Keywords: consent; intellectual property; confidentiality; harms.

This scenario is adapted from: Stockley, D & Balkwill, L-L 2013, 'Raising awareness of research ethics in SoTL: The role of educational developers', The Canadian Journal for the Scholarship of Teaching and Learning, vol. 4, no. 1, Article 7.

To what extent do you agree with the following analysis?

This work is human research. The researcher needs to consider carefully the degree to which the students could be identified (if only by their peers and other teachers) and the potential for humiliation or other risks. Consent must be obtained for the use of their work in research, and that consent should anticipate the ways in which the work will be used, the audience and the means of distribution. There is some potential for social and professional risks; these become more serious if the work is of a poor standard and will be used to illustrate mistakes. The presence and severity of the risks is likely to depend upon the identifiability of the student. Not seeking prior consent reveals a lack of good faith by the teacher-researcher and is potentially a breach of both research ethics and research integrity. The consent mechanism should safeguard the voluntary nature of participation.
So, for example, it might be possible to seek consent after the grades for the semester have been issued and to use a third party to seek consent.

4.0 Case Study: How do you use data on students?

Your university has recently upgraded its learning management system, which will enable a larger set of learning analytics to be made available to different parts of the institution. This set might also be linked to other data sets. The Student Association and the academics' union have asked the university to clarify what data might be made available, under what circumstances, in what form and for what purposes.

What would be good practice?
What ethical issues exist in relation to the collection of the data? The interpretation of the data? The storage of the data?

Keywords: learning analytics; data linkage; data mining; data security.

This scenario is adapted from: Slade, S & Prinsloo, P 2013, 'Learning analytics: Ethical issues and dilemmas', American Behavioral Scientist, vol. 57, no. 10, pp. 1510–1529.

5.0 Case Study: We lied to our students

Zhang and Moore (2005), two American psychology academics, used deception in a psychology unit in order to teach research ethics. Students in the unit were each given marks for their exams that were 12 points below those that they had actually earned. The students were asked to fill out an evaluation of teaching immediately after they received these results. Following the completion of the evaluation, students were informed of the deception and the 12 marks were reinstated. The ethics of deception in research was then discussed.
In the view of the researchers, student involvement and interest in research ethics dramatically increased as a result of this deception.

On what basis might you justify deceiving students?
What might you do to achieve a balance of the risks of harm and potential benefits in this case?

Keywords: deception; consent; harms and benefits.

This scenario is adapted from: Zhang, Y & Moore, KE 2005, 'A class demonstration using deception to promote student involvement with research ethics', College Teaching, vol. 53, no. 4, pp. 155–157.

Deception

There is heated debate over the degree to which deliberate manipulation of information – deception by lying, withholding information or misleading exaggeration – might be warranted in research. Does potential benefit to many justify infractions of the rights of an individual participant? Might an act of deception be justified if the balance of expected benefits over expected harms were greater than would be achieved without deception?

James Korn (1997) identified a long history of the use of deception in social psychology in the United States, and concluded that social psychologists had not seen the use of deception by researchers as a serious matter for research participants, but simply as part of the typical experiences of everyday life. For example, 53% of articles published in the Journal of Experimental Social Psychology in 2002 employed deception. However, the use of deception has been criticised on the basis that it harms participants, researchers, research professions and society overall.

The National Statement refers to a need to ensure that deception does not increase the risk of harm, that participants will be debriefed, and that there is 'no known or likely reason for thinking that participants would not have consented if they had been fully aware of what the research involved' (p. 20).
While not relevant in this case, the National Statement explicitly accepts that limited disclosure in order to reveal illegal activity might be justified on the basis of a harm–benefit analysis; as one example in the area of higher education research, this might be used to justify covert investigation of ghostwriting services.

Harms and benefits

In this real example, the researchers used their role as educators to deceive their own students. Moreover, they did so by manipulating the marks students thought that they had received. Apart from the impact on their self-esteem, there are several reasons why students might be particularly sensitive to receiving lower grades than they had hoped: they may depend on higher grades for maintaining a scholarship or visa, for entry into postgraduate courses or for finding a job. The researchers argued that the disappointment felt by the students might be justified by the opportunity to learn about the impact of deception in research. The exercise, the researchers suggested, might enable students to engage with the importance of research ethics and to empathise with participants as a result of a real, albeit short-lived, experience.

Anthony Love is a clinical psychologist and former Chair of the Ethics Committee of the Australian Psychological Society. Despite the long-term use of deception in some parts of experimental psychology, he was concerned that the researchers had abused their relationship with their students:

The researchers are in a multiple relationship with the participants, and they use the information they gather in one relationship (the teacher/student) to manipulate responses in the other (the researcher/participant).
Given the inherent power imbalance in the former relationship, and recognising its fiduciary nature, deliberate deception involving sensitive and potentially damaging information is egregious, and should not even be contemplated by professional educators, whom we assume to be sensitive to the ethical implications of their actions.

Love argued that there might be other, less hazardous, ways of achieving the same goal and that therefore, if conducted in Australia or by Australian researchers, the experiment would have breached the National Statement, as the deception would have increased the risk of harm. The researchers had argued that the students were not in fact distressed. Love countered that: first, the risk had still existed, and the fact that harm to participants had not actually eventuated did not justify the researchers taking the risk by deceiving their students; second, measurements of distress gathered at the time of the study or shortly after did not preclude the development of adverse effects over the medium to longer term, and the researchers appeared oblivious to this possibility; third, the researchers also had responsibilities to groups other than the participants:

… if knowledge of the study method led to greater distrust of teachers by future students, or to deeper cynicism about social research in the wider community, then the harm arising from the deception would make it unacceptable. In this case, I suspect the cumulative damage is more than minimal, and so I cannot help feeling sadder but wiser for having learned about this particular study.

Deception has been entirely rejected by experimental economists because it contaminates the subject pool (Hertwig and Ortmann, 2008a; 2008b).
In his commentary on this case, Andreas Ortmann rejected the experiment as a teaching technique because it might irrevocably disrupt the building of trust between students and teachers, making it difficult to use other, more benign, demonstrations later on:

I am fairly confident that some such deceit would make most students second-guess whatever it is I do from there on. For example, if I were to announce an incentivized classroom demonstration later, why would they believe me? Also, some such antic would surely spill over reputationally to future classes. So, on balance, and given that there is plenty of good material out there that can motivate discussion of deception experiments (e.g. Oliansky, 1991), I don't see a basis to justify some such classroom exercise.

We chose this case because it offers a real example of the use of deception in SoTL. The project would be unlikely to find favour with HRECs, researchers or universities in Australia. One wonders how the researchers might have felt if their own promotions committee had behaved in the same way. However, might there be other circumstances in which some deception might be justifiable in SoTL? In such a situation, how might you make a stronger case in terms of merit and benefits? How might you minimise risks? When and how might you disclose the deception? Would this project gain the support of your head of school?

6.0 Case Study: Yes, Deputy Vice-Chancellor

Three associate deans of learning and teaching, three directors of learning and teaching, a senior member of the university executive and a director of the Learning and Teaching Centre sought to investigate staff perceptions of peer review of teaching. They wanted to use an online survey of all staff at the participating university, followed by focus groups.
They intended to adopt a participatory action research methodology, which would evolve over time, and hoped to invite academics to co-construct narratives through one-on-one semi-structured interviews. This would make it difficult and, in their view, undesirable to disguise the identity of those respondents who became co-authors.

The Human Research Ethics Committee raised concerns about both data confidentiality and the potential coercion of subjects during recruitment, given the seniority of members of the research team. It feared that those who did not want to participate in the project might believe that their professional standing could be jeopardised. The researchers were also aware that some of their colleagues questioned the value of the project's aim of introducing peer review into universities.

How might you seek ethics approval for a project whose methods are likely to evolve over the course of the work?
How might you protect staff from possible harm in this project?
How might you recognise the work of participants who were also co-authors?

Keywords: online research; participatory action research; anonymity; confidentiality; consent; power relations; benefit.

This scenario is adapted from: Parsell, M, Ambler, T & Jacenyik-Trawoger, C 2014, 'Ethics in higher education research', Studies in Higher Education, vol. 39, no. 1, pp. 166–179.

Participatory action research

Researchers and research ethics governance structures have tended to concentrate on the need to avoid harming others, but some ethicists have argued that researchers' obligations extend well beyond this. Some have claimed that, in certain situations, we should also act to benefit others. Researchers engaged in action research aim to generate knowledge that would be of value to 'the well-being of individuals, communities, and for the promotion of large-scale democratic social change' (Brydon-Miller et al., 2003, p. 11). Building on an action research framework, Hugman et al.
(2011) described the possibilities of reciprocal research where ‘research participants are actively involved in all stages and it is they who determine what is to count as a “gain”’ (p. 1279). In Hugman et al.’s investigations of social work:

the way that this research began was through establishing relationships that demonstrate actual benefits for the participants … It then progressed through the direct involvement of participants in all stages of the research, including establishing the agenda and the questions to be asked. Then, following initial stages, the participants continue to be involved in taking action, review and further questioning (p. 1280).

The use of participatory action research raises several ethical considerations for researchers, participants and HRECs. Some HRECs are accustomed to dealing with these, but others may not be.

Harm to participants

In the context of research, harm is most often understood in physical terms. However, in social science research, harm is more likely to involve psychological distress, discomfort, social disadvantage, invasion of privacy or infringement of rights than physical injury. So, the researchers need to consider the possibility of their research causing ‘damage to social networks or relationships with others …’ (National Statement, 2.1).

Karolyn White is Director, Research Ethics and Integrity, at the institution where the researchers in the case described by Parsell et al. (2014) were based. It would be easy to characterise her committee as simply risk-averse or hostile to participatory action research. However, the story is more complex. Indeed, one of the research team was normally a member of this HREC and had to recuse herself from the committee while the project was being discussed.
In her commentary on this case study, White recognises the value of the approach taken by the researchers but wants to encourage discussion about how this benefit might be balanced against the risk to participants:

Participatory action research is inherently political and critically engaged, with the goal of social/institutional change. This raises ethical issues anyway for researchers, the HREC, and, importantly, for participant/co-researchers. However, when the purpose of the research is to ‘investigate staff perceptions of peer review of teaching’ and, at the same time, the ‘researchers were aware that some of their colleagues questioned the value … of introducing peer review into the university’, then named participant/co-researchers who opposed, or were merely indifferent to, the research supported by senior colleagues and the university executive may, potentially, be harmed by participating in the research … It would be remiss of the HREC to ignore this potential risk to participants/co-researchers. (Karolyn White)

Participatory action research often reduces the possibility that research might cause harm by incorporating members of those communities who form the focus of the work in the planning and running of the research. After all, these community members might know more about the dangers represented by the research and the ways of avoiding them. They might also be willing to bear some of the risks, given the potential benefits. More broadly, action research might offer

much more important guarantees for the ethical treatment of human subjects than does conventional research because it: is built on voluntary partnership between a researcher and local stakeholders who form a collaborative team that determines the subject and methods of the work; learns and applies the methods together; analyses the outcomes; designs and implements the actions arising from the process; and together determine representations of that process.
Democratic collaboration, co-generation of knowledge, and a commitment to the democratization of human situations are the major guidelines that [action research] follows and so it stands to reason that the interests of the human subjects involved would be respected with care throughout the process (Brydon-Miller & Greenwood, 2006, p. 120).

In this vein, the researchers proposed offering participants the opportunity to become co-investigators. Yet the HREC questioned whether all participants were in a sufficiently autonomous position to form the ‘voluntary partnership’ advocated by Brydon-Miller.

Coercion

As we have seen, one of the issues facing the project was whether the request for participation might be seen as coercive given the senior position of the researchers. Faden and Beauchamp (1986) depicted consent as a kind of autonomous action: an act committed intentionally, with understanding, and without controlling influences resulting either from coercion or manipulation by others. For Faden and Beauchamp, coercion occurs when someone forces another to act to avoid a threat of harm, and it is unlikely anyone could offer consent in the face of coercion or, in many cases, manipulation. However, researchers may find it difficult to assess whether potential participants do have freedom of action. This problem arises repeatedly when engaging in research on staff in institutions that maintain strong hierarchical structures.
Lower-ranking members of the institution might resent the presence of researchers, question their intentions, and be concerned either that they face retribution if they fail to be involved in the research, or that the research itself poses a threat to their interests. Participatory action research might compound these fears if it not only places the ability to authorise the research in the hands of management but also includes these senior staff within the research team.

The close relationship between researchers and co-researchers/participants, and the eliding of these roles, central to participatory action research, requires a nuanced and appropriate (to method) ethics review. The HREC cannot ignore that this research is being conducted by senior academics and members of the University Executive and involves more junior academics as participant/co-researchers. The confidentiality of data and the potential for coercion must be considered by both researchers and HREC members. Sorting (iteratively) through these issues and resolving them in ways that promote the research and meet the requirements of the National Statement … may take time, patience and good will. (Karolyn White)

Brydon-Miller and Greenwood (2006) had to face exactly this issue in work at the University of Cincinnati. In itself, their approach might well be attractive to researchers, but their lack of specific and identifiable responses may not always find favour with HRECs:

How can we distinguish between caring and coercion in the context of close, ongoing, collaborative relationships? One key to this is to always be cognizant of the power and privilege we carry with us into our interactions with research participants, and at the same time not allow these concerns to immobilize us in working for social change … Another is to develop avenues for reflection in which we are challenged to examine these relationships and the potential for coercion in a critical manner (2006, p.
120).

Following this line of reflection, the researchers published a defence of their approach:

Participants needed to know they would be safe to continue performing their day-to-day roles as lecturers, convenors and administrators in the university without any possible negative impact from the research. Even withdrawing as co-participants, as some members have done throughout the course of the research, must be visibly and explicitly permitted (Parsell et al., 2014, pp. 174–5).

While critical of the attitudes of the HREC at their university in dealing with the project on which this case study is based, Mitch Parsell and his colleagues (2014) did recognise that engagement with the HREC ‘made the research group mindful that not all relationships are equal, and not all dynamics are immediately visible’ (p. 175).

Evolving research

Most guidelines for ethical practice require participants to agree to research before it commences. This can be taken to mean that researchers must have identified the aims and methods to be used within the research before seeking ethics approval and before contacting potential research participants. Unfortunately, such an interpretation privileges a particular hypothetico-deductive model of reasoning, where protocols are developed in advance of any data gathering. However, much qualitative and fieldwork-based research can evolve during the period of data gathering, and may do so quite quickly in the case of participatory action research.

This issue is not novel to HREC members and administrators, and there are processes available to accommodate changes to research. It is true, however, that dynamic methodologies such as participatory action research will necessarily involve change.
Communication between researchers and the HREC Chair (or other appropriate persons) in all stages, but especially the early stages, of the research will assist both researchers and the HREC to promote ethically good research and to ensure that ethics review is proportional. (Karolyn White)

Of course, researchers will find it difficult to return to seek approval from a full HREC each time methods change during the course of research. Apart from the delays that would inevitably ensue, it removes agency from the very community the participatory action research aims to empower. If an HREC refuses to allow a change that a community has chosen, where does this place the academic researcher?

… ethics committees are presently ill equipped to deal with ongoing dynamic engagement. The end result is they often attempt to pass participatory action research methodology through a positivistic ethical frame: an ill-conceived effort that is almost certain to be counter-productive (Parsell et al., 2014, p. 176).

One way of responding might be to seek the appointment by an HREC of a monitor who might swiftly respond to repeated modifications and might even act as a resource rather than just a gatekeeper. Another might be to reconceive the research not in terms of the action research itself, which is guided by the participants, but in terms of the gathering of data about the action research. However, this approach cannot work where the participants themselves are also academics and therefore bound by the requirements of the National Statement.
…these complex ethical issues are not insurmountable but they do require consideration and discussion between researchers and HREC members. (Karolyn White)

In this spirit, Gary Allen, Senior Policy Officer at Griffith University, suggested that an HREC might agree to the following process with the researchers:

- the original ethical review application describes the ongoing collaborative process with the participants/co-researchers;
- any description of surveys/interviews provides sample questions/discussion topics that give a sense of the most ethically sensitive line of questioning, so the researchers only need to come back for prior approval (rather than advising for the file) if changes introduce more sensitive lines of enquiry; and
- the HREC authorises the ethics officer, the HREC Chair, or a small panel to conduct a proportional review of changes depending upon their ethical sensitivity/risks.

Back to contents

7.0 Case Study: But what if it didn’t work?

Chang and Gray were involved in an Australian Learning and Teaching Council-funded multi-university, multi-year study into learning and teaching with emerging technologies. They reported that some journal editors and peer reviewers were not inclined to publish reports of unsuccessful attempts to incorporate Web 2.0 technologies into teaching.
Indeed, ‘several reviewers suggested that, rather than report that staff and students in one subject did not engage with the technology, the authors should “collect and analyse more data” until better engagement could be observed and reported’ (2013, pp. 160–161).

What would you do if you found that an intervention in learning and teaching that you had evaluated had not been successful?
Does it make any difference that your research was supported by Federal learning and teaching funds?
How might you respond to this request from reviewers and editors?

Keywords: conflicts of interest; research integrity; peer review; publication ethics; research funding

This scenario is adapted from: Chang, RL & Gray, K 2013, ‘Ethics of research into learning and teaching with Web 2.0: Reflections on eight case studies’, Journal of Computing in Higher Education, vol. 25, no. 3, pp. 147–165.

Research integrity

Many academics are under increasing pressure to publish as their institutions seek to establish or defend their placing in international research rankings. So, individuals are expected to meet publication targets in order to obtain jobs, grants, research contracts or sponsorship. In Australia, research infrastructure is likely to be funded according to the results of a national research performance evaluation, Excellence in Research for Australia (ERA). As this exercise begins to have more of an effect, pressure from institutions on their staff to publish is likely to intensify, and there is a danger that we may be creating a research environment vulnerable to strong corrupting pressures.

Of course, researchers should be able to rely on the integrity of their colleagues and honesty in the description of their methodology, in recording their analysis, and in reporting their findings. Those who apply or use research outcomes also need to be able to trust the research process.
The Australian Code for the Responsible Conduct of Research (National Health and Medical Research Council, 2007) includes within its definition of research misconduct ‘fabrication, falsification, plagiarism or deception in proposing, carrying out or reporting the results of research …’ (s.10). Facing pressure to publish, to what extent is it appropriate for researchers to select their results in a way that is likely to find favour with reviewers and editors?

For their part, it is perhaps only natural that editors seeking to expand the impact of their journals are more likely to want to publish results that are remarkable, identify things that work, will be found exciting by readers, and will be cited. At what point does this pressure to produce a volume that will be read become an invitation to engage in research misconduct?

Conflicts of interest

Although not so common in SoTL research, one source of bias may come from funding. In the biomedical field, researchers who obtain financial benefit from industry through research funding, consultancies, royalties or by holding shares in companies have been found to be more likely to reach conclusions in their research that favour their corporate sponsor. On some occasions, they have conducted research of lower quality that is less open to peer review.

The absence of negative findings from the published literature has been documented in systematic reviews, especially in fast-moving fields in science and medicine. This unwillingness has also come from academics who may themselves choose not to submit negative findings for publication (the ‘file drawer’ effect (Rosenthal, 1979)) … COPE (Committee on Publication Ethics) notes that journals should publish negative studies. (Virginia Barbour)

Publication ethics

Another source of bias might be editorial practices. There is evidence that some journals favour positive results and that this practice is distorting the literature, at least in some disciplines such as management (Leung, 2011).
We can imagine what might happen in the longer term if we consider 100 studies of similar quality investigating the impact of educational technology on student learning. Let’s imagine that one-third of the studies found educational technology had a negative impact on students’ learning, one-third found it made no difference, and one-third traced a positive impact. If journals then published 25 papers, and the vast majority of these were from the third category, it is not difficult to see how reviewer and editorial bias might be supporting a particular view of educational technology. If authors know this in advance, it is possible some might either design their research to prove positive impact or selectively reveal their data to the same end.

Chang and Gray (2013) found their work on the use of Web 2.0 in teaching had not been universally loved by students. One of our commentators recognised:

… the history of research is littered with good ideas that didn’t pan out … In the case described in Chang and Gray (2013) the ‘good idea’ is Web 2.0, and the harsh reality for the researchers was that at least some of the intended recipients were not interested in engaging with the new technology. (Michael Wise)

Unfortunately, Chang and Gray discovered that some journals did not want to publish these results and encouraged the researchers to look for more favourable data:

a number of journal editors and peer reviewers have been disinclined to publish reports of unsuccessful activities and inclined to dismiss them as the product of inadequate design efforts. For example, several reviewers suggested that, rather than report that staff and students in one subject did not engage with the technology, the authors should ‘collect and analyse more data’ until better engagement could be observed and reported … (Chang & Gray, 2013, pp.
160–161).

Were this sort of decision to be repeated by the key journals in a particular field, it would pose a significant challenge for the development of research there:

Censoring observations to remove unhelpful data-points (examples of non-engagement) is particularly problematic as it distorts downstream interpretations. The same applies to tendentious data gathering, where the hope is that more data will produce the desired observation. Apart from obscuring the complete picture, such censorship denies readers/investigators the possibility of discovering the reasons for the lack of engagement. (Michael Wise)

It may well be that some negative results are not of much value. But as another commentator and experienced journal editor noted, this may not always be the case:

Having read the article, I certainly think that it was worthy of publication and showed interesting ‘negative’ results that inform utilizing a particular technology application in teaching which involved many aspects of ethical approaches to research with human participants in an educational context … Some negative results theoretically and actually may be trivial and, just as there is a responsibility for researchers to publish and disseminate their positive results, there may be a responsibility not to waste time and energy in publishing negative results. That may be true if the results are unimportant but not if they illuminate aspects of the research from which others can learn. (Deborah Poff)

Journal editors may find it difficult to override a reviewer who is critical of negative results in his or her field. While editors may choose to send out papers to additional reviewers, many editors are wary of stepping outside the protection offered by a robust peer review process. Peer review is used by editors to seek advice from experts on the quality of manuscripts submitted for publication. The process confers legitimacy on both the journal and authors.
In addition, as Deborah Poff also recognised, editors in some disciplines may feel that their room for manoeuvre is constrained. They may find their discretion fettered by post-publication reviews from publishers or from anonymous reviewers on blogs; each of these can undermine the peer review process.

Responding to this case study, Virginia Barbour argued that researchers should tackle bias against negative results as part of a broader publication strategy agreed within the research team and between them and the funders of the work:

… there should be agreement among all study authors at the beginning of the research on what the publication plan would be, regardless of results … [and] if encountering reviewers and journals unwilling to publish research I would answer questions about methodology, explain the necessity of the study for the completeness of the research literature and, if they failed to be convinced, take the work to a more enlightened journal! (Virginia Barbour)

Back to contents

8.0 Case Study: How do you like our MOOC?

The researcher considered how he might investigate participants’ experiences of the opportunities and challenges associated with massive open online courses (MOOCs). The research was to generate a virtual ethnography and make recommendations that would inform the design of future courses.

Is consent needed to gather data within a MOOC? If so, how might you negotiate consent?
How would you deal with data from participants who failed to respond to requests to use their data?
What might justify covert observation?
How might you approach the issue of anonymity?

Keywords: online research ethics; covert observation; consent; anonymity; harms and benefits

This scenario is adapted from: Esposito, A 2012, ‘Research ethics in emerging forms of online learning: issues arising from a hypothetical study on a MOOC’, The Electronic Journal of e-Learning, vol. 10, no. 3, pp. 315–325.
Similar issues are discussed in: Kanuka, H & Anderson, T 2007, ‘Ethical issues in qualitative e-learning research’, International Journal of Qualitative Methods, vol. 6, no. 2, Article 2 (accessed 12 January 2015).

Back to contents

9.0 Case Study: What’s in it for me?

During a project that involved making videos of interactions in seminars, some lecturers and students requested access to the videos. The researchers had only gained consent from participants for the videos to be used for research purposes, but some participants claimed that there was no point in gathering the videos if they were of no direct benefit to them. Some students argued that the videos might be of value in supporting their own learning, acting as a record of classes and as reminders of their own contributions. Some teachers were interested in reflecting on their own practices and on exploring how students worked in small groups and solved problems.

Whose consent is needed to show the recordings? How might you negotiate that consent?
Why might you not be able to show the videos?

Keywords: consent; dynamic consent; harms and benefits

From: Tracy, F & Carmichael, P 2010, ‘Research ethics and participatory research in an interdisciplinary technology-enhanced learning project’, International Journal of Research & Method in Education, vol. 33, no. 3, pp. 245–257.

Back to contents

10.0 Case Study: Evading review

During work on academics’ views of research ethics review, researchers encountered several lecturers who required their undergraduate students to conduct interviews but who had deliberately chosen not to engage with the university ethics review process. The lecturers argued that the HREC would simply shut down their well-regarded teaching program. The researchers had made a commitment to participants that they would honour confidentiality.
On the other hand, the researchers were aware that undergraduate students were being taught to evade research ethics review.

If you were the researcher, how would you respond?
Are there any circumstances under which you might breach confidentiality, or advise the institution?
Would it make any difference to your response if you were undertaking the research at your own institution?

Keywords: confidentiality; harms and benefits; research integrity

Confidentiality

One of the doubtful joys of research lies in encountering an ethical dilemma, a situation where both of the obvious courses of action appear to leave the researcher with a sense of doing something wrong. This example appears to place the researchers in an invidious position – maintain confidentiality and possibly fall foul of the Australian Code, or breach confidentiality at the expense of the promise made, thereby undercutting their own credibility as researchers (failure to adhere to confidentiality assurances provided to participants is arguably also a breach of the National Statement and possibly privacy law).

According to the Australian Code, research misconduct ‘includes avoidable failure to follow research proposals as approved by a research ethics committee, particularly where this failure may result in unreasonable risk or harm. It also includes the wilful concealment or facilitation of research misconduct by others’ (s.10.1). On the other hand, the researchers have made a commitment to respect confidentiality.

So, should there be any limits to the information we are prepared to protect? Some researchers have raised what Palys and Lowman (2001) call the problem of ‘heinous discovery’ – what should researchers do if they realise that participants are causing harm?
It might be argued that deliberately evading the requirements of the National Statement might cause harm by:

- putting research participants at risk
- modelling inappropriate behaviour by encouraging students to evade regulatory requirements
- losing an opportunity for the students to have an early introduction to ethics review and related processes
- possibly denying students insurance cover or protection in the event of a participant complaint
- running the risk that participants who have knowledge of the ethics review process might ‘blow the whistle’ on the project, to the detriment of both the students and the institution.

Commenting on this case study, Lisa Wynn argued:

The academics who were avoiding research ethics review for their students might argue that they were building the capacity of their students to engage in ethical conduct by offering them scaffolded opportunities to work through ethics in practice; if they subjected their students to ethics review, the students would not be able to gather research data and this would have serious implications for the students’ academic development and for the future of the disciplines. (Lisa Wynn)

Indeed, there is evidence that some disciplines feel they have to steer students away from primary data collection until they are postgraduates because the demands of research ethics review are too onerous to meet.

Perhaps the most likely response by researchers to this dilemma is described succinctly by another of our commentators, James Arvanitakis:

I would breach confidentiality only in the most difficult circumstances. [In this context] If I felt that the well-being of the students or the research subjects were at risk, then yes, I would breach confidentiality. (James Arvanitakis)

The situation described in this case study is not unique.
Here is an analogous example discussed by Israel (2015), where researchers looked to break through the dilemma by searching for other possible solutions:

Indian anthropologists explored the work of auxiliary nurse midwives in Surat (Coutinho et al., 2000). They found some health professionals were halving the dose of a DPT vaccine administered to children in order to reduce the risk of fevers and ensure mothers returned with their children to the clinic. Unfortunately, the lower dose put the children at risk of not gaining immunity. The researchers had to balance the promise of confidentiality to the nursing staff, and the harm to their professional standing that might flow from disclosure, against the risk to the children posed by non-disclosure. The team chose to warn those staff who admitted to the practice that children were being placed at risk, and in reporting the research identified the district but not the health centres nor the staff (p. 118).

Even in the context of universities, the ‘evading review’ case study is not entirely hypothetical. Colin Thomson and Mark Israel are part of a research team led by Lisa Wynn that encountered precisely this situation. Responding to this case study, Wynn, who was also a member of her university’s HREC, argued:

I would say nothing. First, there’s the issue of confidentiality and the promises that I have made to my research informants. I can’t guarantee confidentiality if I report back to others at their institution that there are people doing research without ethics approval. Institutions can be weirdly small places, and reporting this to administrators could get people in trouble after they trusted me enough to tell me sensitive information. When I asked them to participate in my research project, I made a commitment to protecting my research participants.

Secondly …
In the case of teachers sending out their students to interview people, I wouldn’t feel morally obliged to report this unless I have reason to think their research could harm their research participants – and in that case, my first step would be to inform my research participants, not their ethics committee, that there’s a problem they should rectify.

I’ve done research on ethics review at universities across Australia, where I found that people are doing research without ethics approval at every university I visited. Most ethics administrators were aware of this. They’re not naïve. Most didn’t want to be seen as an audit institution or as the ‘ethics police’, hunting down non-compliers, so they didn’t seek out non-compliers unless someone else complained about a specific researcher.

In such a context, disseminating my research results to all the institutions at the end of my research project is far more valuable than reporting the misconduct of specific researchers, because it will make universities and ethics committees aware of the problem without violating the trust I’ve built with my research participants. (Lisa Wynn)

So, working by analogy with the Indian midwives case, it may be possible to identify a broader range of potential responses when uncovering examples of ‘evading review’. First, as Wynn suggests, we might be able to alert colleagues to the ramifications of evading ethics review for the student projects. Secondly, it might be an opportunity to give staff models of effective arrangements for reviewing student coursework-based research projects at other institutions, so they can advocate for change at their own institution.

My response should not be seen as reifying ethics procedures. Many human ethics procedures have been implemented with the worst-case scenario in mind. Some are so impractical that the ethics procedures themselves encourage you to react unethically.
As such, this would be the ideal opportunity to encourage the academics themselves to review the ethics procedures and open up dialogue with the ethics review panel about the ethics process. (James Arvanitakis)

Finally, again following the strategy advocated by Wynn, as long as participants could not be identified, the researchers might consider discussing with the institution the fact that the research undertaken by some cohorts of students appears not to have been submitted for ethics review, the reasons why this might be occurring, and positive strategies the institution could adopt.

Building a commitment to research integrity

This kind of situation could prompt us to reconsider how we might build the capacity of researchers to engage with issues of ethics and integrity. SoTL has had little impact on research ethics education and training in Australia. Disappointingly, many universities have focused primarily upon risk management, bureaucratic systems and sanctions in order to compel researcher compliance with national standards. Israel et al. (in press) have argued that such approaches may foster an adversarial culture – resistance, ill will and avoidance. This is evident in this case study, where the academics are not only resisting bureaucratic oversight but socialising students into this practice. Instead, in the view of one leading educator, ethics should be seen as

… part of the research process, not something you simply ‘add on’ when it is convenient. In this way, when we work with our undergraduate students to build a research culture, part of that is [to] have a clear and concise discussion about the way ethics is not separate to research, but at its core.
(James Arvanitakis)

Back to contents

11.0 Case Study: Life in the virtual world

Researchers asked students to investigate behaviour in 3-dimensional virtual learning environments (3DVLEs) run by various universities.

If you were one of the researchers, what assurances of confidentiality should the students offer?
How should the students negotiate free and informed consent?
What risks might interviewees incur by talking to the students?

As part of the process of negotiating consent with key informants, the researchers offered to preserve the anonymity of the virtual world, the institution and individuals, although they pointed out that it might be difficult to stop other members of the virtual world identifying particular individuals.

The researchers gave an interview to a national newspaper about student culture, avoiding any details about this particular research project. The newspaper reported the work as covertly digging up the dirt on student relationships. The researchers heard that several research participants were distressed, believing that what they had said to the research project would be identified by other students.

The universities involved in the research: send the researchers a letter claiming that they misrepresented their research interest; complain to the company that hosts and manages the virtual world; block access to their virtual worlds; and instruct the researchers not to publish data from this site.

If you were one of the researchers, what would you do in response to:
the media portrayal of your research?
student distress?
the letter from the universities?
the letter to the host company?

Keywords: confidentiality; harms and benefits; informed consent; online research ethics; dissemination of results

This scenario and the subsequent analysis are adapted from: Wood, D, Bloustien, G, Kerin, R, Kurzel, F, Seifert, J, Spawton, T, Wache, D, Green, L, Snyder, I, Henderson, M, Sim, J, Marsh, J, Maxwell, I, Israel, M, Butler, D & Stupans, I 2013, Facilitating
Flexible, Enquiry-Based Experiential Learning through an Accessible, Three-Dimensional Virtual Learning Environment (3DVLE), Australian Government Office for Learning and Teaching, Sydney.

Research ethics

Contemplating research ethics in virtual worlds entails extending, questioning and sometimes contradicting wisdom developed for real worlds. This poses significant challenges for ethics codes and guidelines that are already struggling to extend their reach throughout the real world:

As social networking, hyperblogging, folksonomies, Wikis etc., continue to change social interaction, research itself, and thus research ethics must change. (Buchanan, 2011, p. 103)

While there is now an emerging literature on online research ethics, including work by the Association of Internet Researchers (Markham & Buchanan, 2012), the research ethics literature in relation to virtual worlds remains underdeveloped. Compared to other internet-based research, there are reasons to believe that virtual worlds may raise different issues for researchers, partly because they have a greater resemblance to a real world environment.

Given the development of new media, with their accompanying shifts in the ways identities are mediated, jurisdictional boundaries can be transcended and social boundaries transgressed, online researchers have had to tread carefully. They have had to learn appropriate social norms from, and negotiate ethical research conduct with, the people who occupy the spaces that are being studied (Grimes et al., 2009). This is particularly appropriate in virtual space, which 'is heterogeneous, and, though the fundamentals of it have been provided by designers, it is equally constituted by the presence of human agents' (Rosenberg, 2010, p.
28).

There are several ways social scientists might locate themselves as researchers of virtual worlds: first, the world might be created specifically to allow external observation; second, the researcher may be engaged as a participant through his or her avatar; third, researchers might access logs and records generated through activity within the virtual world (Reynolds & de Zwart, 2010).

Consent

The principle that researchers should have respect for persons is often interpreted as requiring social scientists to obtain consent from participants. Traditionally, in many jurisdictions, regulators have assumed that consent involves investigators distributing information sheets to prospective participants, explaining the nature of the study and obtaining voluntary and informed consent. Participants' consent is often documented using a form that may then be signed by the participant. In Second Life, participant information forms can be provided via a 'notecard', but there is currently no way for an avatar to sign the 'notecard'. Instead, Boellstorff (2009) asked participants to send him a message indicating consent, and this was a practice adopted by one of our commentators (Melissa de Zwart):

Informed consent will require modification to suit the individual platform, but a written form should be sent through the relevant chat or text channels. In [Second Life] I have taken screen shots of this occurring.

Although use of a written form is a common practice in real world environments, it is not the only way of recording consent. Various researchers have filmed or taped consent or have, like Melissa de Zwart in virtual worlds, used a hybrid approach. In other cases, participants' consent has been assumed from their decision to take part in the research. In many research practices, consent does not take place at just one point but is part of an ongoing negotiation.
There seems little reason to believe that such a continuing approach would be inappropriate in the virtual world as well:

… rather [than] the simple transaction of a form that is signed, there may need to be a careful discussion about confidentiality, to ensure there is a shared understanding of how comments and information will be reported, the measures to ensure confidentiality, and any limitations to those measures. There of course will probably still need to be a form – but probably one with optional clauses that are agreed between the student and potential participant. There might need to be a step to reconfirm consent later. (Gary Allen)

While the principle of consent of course also operates in virtual worlds, as Elizabeth Buchanan notes in her commentary, additional challenges may arise. In most cases, avatars appear to and actually do constitute participants for the purposes of human research:

On a virtual island, obtaining consent presents logistical, regulatory, and disciplinary challenges; the researcher is of course aware that he/she is interacting with an avatar. An avatar meets the traditional regulatory conditions of a 'human subject' if a living person's behavior is being observed through the actions/interactions of the avatar, and information about the person/avatar is being obtained for research purposes, or if identifiable private information about a living person is being obtained through some research activity, and thus, consent is an integral component. This will change if the avatar is controlled by a bot, which may not fall under human subjects criteria, however, so a researcher must be fully aware of the types of interactions he/she is having on a virtual island. (Elizabeth Buchanan)

Throughout the research, the researcher may not know who the real person is. However, this is not an unusual situation for many disciplines:
… this is the inherent methodological risk of many fieldwork disciplines: 'is this person really who they purport to be?' is not a question that arises only in virtual environments! (Ian Maxwell)

In the same way that practices of negotiating consent should differ in the real world depending on the methodology adopted and the nature of participants, so we ought to expect variation in the virtual world. Indeed, Rosenberg (2010) has suggested that people have quite widely varying expectations of negotiation in Second Life:

Three basic standpoints can be identified in relation to where people draw the line for intrusion or exploitation … The first group states that they do not ever want to participate in any type of research without informed consent. The second group holds that researchers may observe and collect data without consent as long as they don't interact or interfere. The third group thinks that it's acceptable to collect information and interact without consent, but they say that researchers must not deceptively develop close relationships with people to gain information. Furthermore, it appears to be a general assumption shared by all three groups that names and quotes are not to be used in research publications without informed consent. (p. 32)

Various researchers have sought to interpret the range of public and private spaces, residents and interactions in virtual worlds in order to assess the degree to which consent needs to be negotiated and what matters these negotiations might cover (McKee & Porter, 2009a; 2009b). Not surprisingly, conclusions about what might be appropriate ethical conduct for research have been context-specific.

Boellstorff (2009) noted that covert observation and breach of privacy are all too easy in the virtual world:

… in the context of the internet there appears to be little remaining expectation of privacy.
Typically residents knew that anything that they said could be recorded by Linden Lab, by residents nearby, or by a scripted object hidden on a piece of land, and that such information could be disseminated via a blog or other form of website. (Boellstorff, 2009, p. 82)

As Boellstorff acknowledged, there are now some opportunities and conventions relating to overt and covert, and public and private, that have been established in the virtual world and that might seem somewhat bizarre in the real world:

For instance, a participant observer on an island will make him/herself known through an array of strategies. Some wear 'hats' or wear shirts which identify themselves as researchers. Others will approach an individual and hand them a card or token, inviting them to a consent room, where they can discuss the research and review an information sheet or consent document. Others will have a recruitment island, where the researcher awaits participants, based on a script or recruitment posted elsewhere. (Elizabeth Buchanan)

Researchers who enter virtual worlds as participants will be expected to comply with the End User License Agreement and/or the Terms of Service Agreement generated by the virtual world provider (Reynolds & de Zwart, 2010).

Given that the hypothetical involves the collection of data on virtual worlds run by various universities, a researcher might also be expected to obtain permission from the proprietor of the platform:

That consent may be implicit in the User Agreement (or similar). The bottom line, however, is that the virtual world is a corporate space, so permission should be sought to conduct research in that environment …
The big problem there would be identifying a responsible authority able to grant that permission. (Ian Maxwell)

In addition, the universities themselves also have a legitimate expectation that they will be consulted:

A researcher seeking to recruit students on a real world campus for such a research project would be required to seek such an endorsement, generally from a Provost/PVC-level officer and, additionally, would be well advised to seek ratification of their own institutional ethics approval by the institution in which the research is to take place. (Ian Maxwell)

Confidentiality

In this hypothetical case, students have been conducting research within a series of small social groups. As part of consent discussions, the researchers indicated they would endeavour to maintain the anonymity of the virtual islands, the institutions and individuals. It seems, however, that this has been accompanied by a noteworthy caveat – there is a good chance that individuals within any one virtual island might recognise or identify one another through the work's outcomes. To some extent, the researchers have promised what Tolich (2004, p. 1) calls external confidentiality (that is, 'traditional confidentiality where the researcher acknowledges they know what the person said but promises not to identify them in the final report') while acknowledging that internal confidentiality ('the ability for research participants … to identify each other in the final publication of the research') is impossible to uphold.

If individuals participate in our research because they believe their confidentiality will be safeguarded, then serious ethical questions are raised if we fail to deliver on our assurances.
In practice, even if pseudonyms are used, quotations or sufficiently rich descriptions may be identifiable by the other denizens of the virtual island. (Gary Allen)

Some research is problematic because a case study design makes it possible to deduce the identity of research participants once you know the location of the study. Involving several virtual worlds, and ensuring that participants know that multiple locations will be used, reduces the ease with which such deduction is possible. Participants ought therefore to be able to negotiate the degree to which they are identified:

It is incumbent upon the researcher to present options for identification for the participants within the informed consent process. Confidentiality, privacy, and anonymity assume different meanings across venues and across islands, depending on the contextual specificity of those spaces. Moreover, an avatar has an identity, which possesses a unique online reputation. That individual should be able to decide if he or she wants that identity presented in the research, or if a pseudonym for the avatar should be used. Thus, old, stock language used in traditional research, which often says, 'We will protect your identity and only the researcher will be able to identify you with your data,' is an antiquated notion in online worlds in particular. (Elizabeth Buchanan)

Boellstorff (2009) decided to maintain confidentiality in his ethnographic study of virtual worlds – 'Even when residents said I could name them, I have employed pseudonyms so as not to inadvertently identify their friends …' (p. 82).
He used pseudonyms for the virtual world identities of participants, paraphrased quotes so that they could not be found using search engines, created synthetic aggregates of groups of people, and altered any identifying details.

Our discussion of the possibilities of covert observation in virtual worlds has already raised the problem that researchers are not in a position to offer full confidentiality. Apart from facing the difficulty of avoiding eavesdropping within the virtual world, any data collected may end up on a remote server within a jurisdiction where it is subject to a variety of legal claims by organisations ranging from law enforcement agencies to the corporation that owns the servers.

So researchers … would certainly not be able to vouch to participants that that data could not be accessed by a third party … This is perhaps the most difficult and, I suspect, intractable ethical problem confronting research in this area: the ethical requirement to protect data – not the publication of that data as findings, but the data itself – from third parties. When that data exists, even momentarily, in a corporate space where that corporation has not accepted an explicit contractual agreement to guarantee its security, then there is a profound ethical risk. (Ian Maxwell)

Researchers are generally expected to minimise risks of physical, psychological, social or economic harm or discomfort to participants, in accordance with the principle of nonmaleficence. In this example, there is no reasonable prospect of direct physical harm to participants in the virtual world. Although there may be a possibility of a researcher being present while a participant suffers emotional harm, this might be quite difficult to detect in the virtual world.

Given the existence of a variety of harmful behaviours in 3DVLEs, the supervisor of the students and his or her institution have a responsibility to take reasonable steps to protect the safety of students.
Ian Maxwell suggested that this

… can be discharged through both the provision of appropriate methodological and professional conduct training, and through the agreement between student and supervisor/teacher of a safety protocol that accurately and frankly assesses potential risk and puts in place processes for reducing those risks.

Research in virtual worlds can also be seriously compromised by the activities of parties external to the research – in this case, media, granting organisations, and owners and managers of the virtual worlds:

… the danger in researching in any virtual community is that the research, no matter how sensitively conducted, can direct too much attention to that community … (Melissa de Zwart)

We deliberately constructed this scenario to allow parallels with a similar hypothetical that one of the team had set in 'real life' (Israel and Hay, 2006). In that exercise, two geographers maintained that researchers should anticipate how results could be made public and ought to take responsibility for the flow of information. Put bluntly, they concluded that it was not acceptable for researchers to lambaste others for causing adverse outcomes while absolving themselves:

Researchers have an ethical obligation to fully disclose relevant information to potential respondents and ensure that their research is not misrepresented. (Winchester and Rofe, 2006)

Melissa de Zwart reached a similar conclusion in this virtual world scenario, noting that media reaction to behaviour in virtual worlds tended to be situated somewhere between the negative and the hysterical:

Most mainstream media only want the bad news, scary stories. So any interview granted to the media is likely, however carefully conducted, to repeat only the horror stories. Good news does not get reported. Therefore, it is very important to anonymise the virtual community as far as possible.
I would suggest (although it won't help in this scenario) that the researcher avoids or minimises any contact with the media or discussion of the project, until it is completed … There are some [virtual world]-focused reporters who could assist in correcting the report … The project web site should correct the misreporting. It could also offer a link for confidential correspondence with project participants with further questions.

The strength of the researcher's ethical position may depend on the degree to which he or she was at fault in failing to anticipate the ways in which research results might be broadcast, failing to minimise the risk, and failing to negotiate appropriate consent.

In this hypothetical, the universities involved in the research reacted angrily to public statements about the research, blocked access to the virtual worlds and demanded that data not be published. Depending on the particular circumstances, the owners might have the legal authority and ability to prohibit a researcher from entering the virtual world, but whether they have ethical or legal domain over the data is another matter. Complaints to the host company may have consequences if misconduct is linked to the researcher's university and the university holds real estate in that host's virtual world. In 2007 and again in 2010, Second Life administrators removed an island owned by Woodbury University in response to 'incidents of grid attacks, racism and intolerance, persistent harassment of other residents, and crashing the Woodbury University region itself while testing their abusive scripts'.
Independent commentators found it difficult to assess the level of the abuse but raised concerns about the lack of transparency in the abuse reporting process.

Melissa de Zwart suggested that the researcher ought to contact the participants, the universities and the provider to defend the research project, countering the media's representation of the research and reiterating that data has been anonymised:

There is a danger that if the research attracts bad publicity your project can be shut down as a misuse of the platform … I would provide the consent form to the provider and explain that the outcomes identified in the media could not be released under the terms of the research project.

However, Ian Maxwell concluded that matters had progressed well beyond the point at which an individual researcher might sensibly respond without the support of his or her institution:

I would urge that researcher to immediately notify the relevant ethics committee about the problems that arose, so as to at the very least garner institutional support and protection. If the research proposal has been properly designed, approved and documented, then there will be resources that will significantly take some of the heat, including access to legal counsel to help deal with the various corporations. I would advise, strenuously, against trying to take on corporations without that institutional support.

Back to contents

12.0 About the contributors

Gary Allen, Senior Manager, Research Ethics and Integrity, Griffith University, Australia
James Arvanitakis, Western Sydney University, Australia
Virginia Barbour, Committee on Publication Ethics
Elizabeth A.
Buchanan, Center for Applied Ethics, University of Wisconsin-Stout, United States
Melissa de Zwart, Adelaide Law School, University of Adelaide, Australia
Anthony Love, College of Arts, Victoria University, Australia
Ian Maxwell, Department of Theatre and Performance Studies, University of Sydney, Australia
Andreas Ortmann, Australian School of Business, University of New South Wales, Australia
Deborah Poff, Brandon University, Canada; Committee on Publication Ethics
Karolyn White, Research Ethics and Integrity, Macquarie University
Michael Wise, University of Western Australia, Australia; Committee on Publication Ethics
Lisa Wynn, Macquarie University, Australia

Back to contents

13.0 References

Boellstorff, T 2009, Coming of Age in Second Life: An Anthropologist Explores the Virtually Human, Princeton University Press, Princeton.
Brydon-Miller, M & Greenwood, D 2006, 'A re-examination of the relationship between action research and human subjects review processes', Action Research, vol. 4, no. 1, pp. 117–128.
Brydon-Miller, M, Greenwood, D & Maguire, P 2003, 'Why action research?', Action Research, vol. 1, no. 1, pp. 9–28.
Buchanan, EA 2011, 'Internet research ethics: Past, present, and future', in M Consalvo and C Ess (eds), The Handbook of Internet Studies, Wiley-Blackwell, pp. 83–108.
Burman, ME & Kleinsasser, AM 2004, 'Ethical guidelines for use of student work: Moving from teaching's invisibility to inquiry's visibility in the scholarship of teaching and learning', The Journal of General Education, vol. 53, no. 1, pp. 59–79.
Chang, RL & Gray, K 2013, 'Ethics of research into learning and teaching with Web 2.0: Reflections on eight case studies', Journal of Computing in Higher Education, vol. 25, no. 3, pp. 147–165.
Committee on Publication Ethics 2011, 'Code of conduct and best practice guidelines for journal editors', Committee on Publication Ethics, London.
Coutinho, L, Raje, G & Bisht, S 2000, 'Numerical narratives and documentary practices: Vaccines, targets and reports of the immunization programme', Economic & Political Weekly, vol. 35, no. 8–9, pp. 656–666.
DeVito, S 2010, 'Experimenting on law students: Why imposing no ethical constraints on educational research using law students is a bad idea and proposed ethical guidelines', Southwestern Law Review, vol. 40, pp. 285–321.
Dickert, N & Grady, C 1999, 'What's the price of a research subject? Approaches to payment for research participation', New England Journal of Medicine, vol. 341, no. 3, pp. 198–203.
Esposito, A 2012, 'Research ethics in emerging forms of online learning: Issues arising from a hypothetical study on a MOOC', The Electronic Journal of e-Learning, vol. 10, no. 3, pp. 315–325.
Faden, RR & Beauchamp, TL 1986, A History and Theory of Informed Consent, Oxford University Press, New York.
Gale, R 2002, 'Commentary on Suzanne Burgoyne's case', in Hutchings, P (ed.), Ethics of inquiry: Issues in the scholarship of teaching and learning, The Carnegie Foundation for the Advancement of Teaching, Menlo Park, CA, pp. 39–40.
Galguera, T 2002, 'Too close for comfort and/or validity case', in Hutchings, P (ed.), Ethics of inquiry: Issues in the scholarship of teaching and learning, The Carnegie Foundation for the Advancement of Teaching, Menlo Park, CA, pp. 55–59.
Grayson, JP & Myles, R 2004, 'How Research Ethics Boards are undermining survey research on Canadian university students', Journal of Academic Ethics, vol. 2, pp. 293–314.
Grimes, JM, Fleischman, KR & Jaeger, PT 2009, 'Virtual guinea pigs: Ethical implications of human subjects research in virtual worlds', International Journal of Internet Research Ethics, vol. 2, no. 1, pp. 38–56.
Heath, C, Hindmarsh, J & Luff, P 2010, Video in Qualitative Research, Sage, London.
Hertwig, R & Ortmann, A 2008a, 'Deception in experiments: Revisiting the arguments in its defense', Ethics & Behavior, vol. 18, no. 1, pp. 59–92.
Hertwig, R & Ortmann, A 2008b, 'Deception in social psychological experiments: Two misconceptions and a research agenda', Social Psychology Quarterly, vol. 71, no. 3, pp. 222–227.
Holcomb Jr, JP 2002, 'The ethics of comparison: A statistician wrestles with the orthodoxy of a control group case', in Hutchings, P (ed.), Ethics of inquiry: Issues in the scholarship of teaching and learning, The Carnegie Foundation for the Advancement of Teaching, Menlo Park, CA, pp. 19–21.
Hugman, R, Pittaway, E & Bartolomei, L 2011, 'When "do no harm" is not enough: The ethics of research with refugees and other vulnerable groups', British Journal of Social Work, vol. 41, no. 7, pp. 1271–1287.
Hutchings, P (ed.) 2002, Ethics of inquiry: Issues in the scholarship of teaching and learning, The Carnegie Foundation for the Advancement of Teaching, Menlo Park, CA.
Israel, M 2015, Research Ethics and Integrity for Social Scientists: Beyond Regulatory Compliance, Sage, London.
Israel, M, Allen, G & Thomson, C 2016, 'Australian research ethics governance: Plotting the demise of the adversarial culture', in van den Hoonaard, W & Hamilton, A (eds), The Ethics Rupture: Exploring Alternatives to Formal Research-Ethics Review, University of Toronto Press, Toronto, pp. 285–316.
Israel, M & Hay, I 2006, Research Ethics for Social Scientists: Between Ethical Conduct and Regulatory Compliance, Sage, London.
Kanuka, H & Anderson, T 2007, 'Ethical issues in qualitative e-learning research', International Journal of Qualitative Methods, vol. 6, no. 2, Article 2.
Korn, JH 1997, Illusions of Reality: A History of Deception in Social Psychology, State University of New York Press, Albany, NY.
Leung, K 2011, 'Presenting post hoc hypotheses as a priori: Ethical and theoretical issues', Management and Organization Review, vol. 7, no. 3, pp. 471–479.
Markham, A & Buchanan, E 2012, Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0).
McKee, HA & Porter, JE 2009a, Ethics of internet research: A rhetorical case-based approach, Peter Lang, New York.
McKee, HA & Porter, JE 2009b, 'Playing a good game: Ethical issues in researching MMOGs and virtual worlds', International Journal of Internet Research Ethics, vol. 2, no. 1, pp. 5–37.
National Health and Medical Research Council (NHMRC) 2007, National Statement on Ethical Conduct in Human Research.
O'Toole, P 2013, 'Capturing undergraduate experience through participant-generated video', The Qualitative Report, vol. 18, no. 66, pp. 1–14.
Oliansky, A 1991, 'A confederate's perspective on deception', Ethics and Behavior, vol. 1, no. 4, pp. 253–258.
Palys, T & Lowman, J 2001, 'Social research with eyes wide shut: The limited confidentiality dilemma', Canadian Journal of Criminology, vol. 43, no. 2, pp. 255–267.
Parsell, M, Ambler, T & Jacenyik-Trawoger, C 2014, 'Ethics in higher education research', Studies in Higher Education, vol. 39, no. 1, pp. 166–179.
Pink, S 2013, Doing Visual Ethnography, Sage, London.
Reynolds, R & de Zwart, M 2010, 'The duty to "play": Ethics, EULAs and MMOs', International Journal of Internet Research Ethics, vol. 3, no. 1, pp. 48–68.
Rosenberg, A 2010, 'Virtual world research ethics and the private/public distinction', International Journal of Internet Research Ethics, vol. 3, no.
1, pp. 23–37.
Rosenthal, R 1979, 'The file drawer problem and tolerance for null results', Psychological Bulletin, vol. 86, no. 3, pp. 638–641.
Sikes, P & Piper, H 2010, 'Ethical research, academic freedom and the role of ethics committees and review procedures in educational research', International Journal of Research & Method in Education, vol. 33, no. 3, pp. 205–213.
Slade, S & Prinsloo, P 2013, 'Learning analytics: Ethical issues and dilemmas', American Behavioral Scientist, vol. 57, no. 10, pp. 1510–1529.
Stockley, D & Balkwill, L-L 2013, 'Raising awareness of research ethics in SoTL: The role of educational developers', The Canadian Journal for the Scholarship of Teaching and Learning, vol. 4, no. 1, Article 7.
Tolich, M 2004, 'Internal confidentiality: When confidentiality assurances fail relational informants', Qualitative Sociology, vol. 27, no. 1, pp. 101–106.
Tracy, F & Carmichael, P 2010, 'Research ethics and participatory research in an interdisciplinary technology-enhanced learning project', International Journal of Research & Method in Education, vol. 33, no. 3, pp. 245–257.
Wilson, JH 2008, 'The value and ethics of the scholarship of teaching and learning', in Meyers, SA & Stowell, JR (eds), Essays from excellence in teaching (Vol. 8), Society for the Teaching of Psychology.
Winchester, HPM & Rofe, MW 2006, 'Ethical research practice: Full disclosure and reporting of results', in Israel, M & Hay, I, Research Ethics for Social Scientists: Between Ethical Conduct and Regulatory Compliance, Sage, London.
Wood, B & Kidman, J 2013, 'Negotiating the ethical borders of visual research with young people', in te Riele, K & Brooks, R (eds), Negotiating Ethical Challenges in Youth Research, Routledge, London, pp.
149–162.
Wood, D, Bloustien, G, Kerin, R, Kurzel, F, Seifert, J, Spawton, T, Wache, D, Green, L, Snyder, I, Henderson, M, Sim, J, Marsh, J, Maxwell, I, Israel, M, Butler, D & Stupans, I 2013, Facilitating Flexible, Enquiry-Based Experiential Learning through an Accessible, Three-Dimensional Virtual Learning Environment (3DVLE), Australian Government Office for Learning and Teaching, Sydney.
Zhang, Y & Moore, KE 2005, 'A class demonstration using deception to promote student involvement with research ethics', College Teaching, vol. 53, no. 4, pp. 155–157.

Back to contents

14.0 Acknowledgements

This resource was commissioned by the Australian Government Office for Learning and Teaching, and produced by Prof. Mark Israel (University of Western Australia), Dr Gary Allen (Griffith University) and Prof. Colin Thomson (University of Wollongong).

The authors would like to acknowledge our various researcher, ethics reviewer and professional staff colleagues who have shared with us the ethical challenges and frustrations they face, and useful strategies to avoid, minimise and otherwise mitigate the problems and delays they can sometimes cause. The case study, 'Life in the virtual world', is adapted from Wood et al. (2013) with the permission of the Australian Government Office for Learning and Teaching.

The matters discussed in this Resource Manual are discussed in more detail in the Griffith University Research Ethics Manual. The GUREM is available for purchase by research institutions.

Back to contents

15.0 Glossary

Back to contents