


Case Study: An Exploration of Distributed Leadership and its Relation to Establishing a Culture of Evidence at a Two-Year Public Institution

A dissertation submitted by

Jacqueline E. Snyder

to Benedictine University in partial fulfillment of the requirements for the degree of

Doctor of Education in Higher Education and Organizational Change

This dissertation has been accepted for the faculty of Benedictine University.

Dr. Sarah Malone, Dissertation Committee Director        Date
Dr. Tamara Korenman, Dissertation Committee Chair        Date
Dr. Laura Saret, Dissertation Committee Reader        Date
Program Director, Faculty        Date
Faculty        Date
Dean, College of Education and Health Services        Date

Abstract

Higher education leaders are responsible and answerable to many constituents. Society and national accreditors have used countless factors to measure levels of institutional success and to provide evidence that meets accountability expectations. This culture of evidence is the expectation, and it encompasses reliable measures that are systemic, data-driven, and comprehensive in nature for understanding the quality of an institution. It is a working assumption that higher education leadership is often organized and executed in a variety of ways depending on the institution. A distributed view of leadership incorporates the activities of multiple groups of individuals in an institution who work at guiding and mobilizing individuals within the institution to implement an initiative or change process. At a theoretical level, distributive leadership is an exploratory framework for understanding leadership practice.
The purpose of the study was to explore how a distributed model of leadership practice influences the organizational development of a culture of evidence at a higher education institution. The relevant theories for this study include Schein's (2010) organizational culture theory and culture-embedding mechanisms, distributive leadership practice as defined by Spillane (2008), and Suskie's leadership and culture of assessment concept. To complete the case study, the following questions were explored through interviews, historical documents, observations, and artifacts: How does this institution apply distributed leadership to advance a culture of evidence? Who are the institutional members involved and engaged as distributed leaders at this institution? What demonstrated behaviors and actions do these distributed leaders use to nurture the current culture of evidence at this institution? How effective is the distributive leadership model in meeting the expectations associated with a culture of evidence? And what distributive leadership themes are identified with promoting a culture of evidence? Through this exploration, best practices and models of leadership practice for meeting contemporary expectations surrounding a culture of evidence were identified, and the effectiveness of a distributive leadership model was evaluated by an external accreditation body. The distributed leadership model was found to be flexible, allowing different groups to lead the culture of evidence. The leaders, regardless of group, identified a shared responsibility to foster the culture of evidence; this resulted in parallel routines being executed by various leaders, and the work was often duplicated across the campus. Thus, no one person or office was identified as responsible for leading the evidence culture. Two committees were identified as promoting and organizing activities to meet culture of evidence expectations.
Primary embedding mechanisms speak to the visible actions of leaders. These include leaders' actions, or role modeling, that are often emulated by people in the organization. Secondary mechanisms identified by Schein (2010) are considered to be methods by which a leader may indirectly change an organization's culture. These secondary mechanisms, the behind-the-scenes engagements that are indirectly altered by leaders' actions and often affect culture, were evident at this institution. The term organic process surfaced as a theme throughout the research. This acknowledgment of owning the culture of evidence and being flexible in how to address stakeholder expectations is central to the success and effectiveness of meeting accreditation standards. Distributed leader themes were identified through observation and interviewees' comments. Collaboration was a consistent theme, one that was observed by the researcher and described as necessary by the interviewees. Results also identified disconnects: although there was consensus on the importance of meeting accreditation standards and fostering a culture of evidence, and a culture of evidence is an understood expectation, interviewees did not clearly understand the importance of the roles they individually play in building that culture of evidence. Finally, after multiple years of follow-up accreditation reports, this institution completed a decennial accreditation visit that resulted in full reaccreditation with minimal follow-up documentation owed to the accrediting body.
Distributive leadership in a small public institution can foster a culture of evidence, one that has the potential to mature and improve the quality of documented evidence to meet stakeholders' expectations.

Table of Contents

Chapter One
    Introduction
    Problem
    Purpose of the Study
    Theoretical Framework
    Leadership and Culture
    Remaining Chapters
    Unique Terms
Chapter Two
    Review of the Literature
        Leadership and organizational culture
            Primary mechanisms
            Secondary mechanisms
        Organizational culture theory
        National culture and organizational culture
        National culture of higher education accountability supports a culture of evidence
        Culture of evidence
        Suskie's concept: Enhancing an institution's culture for evidence
        Distributive leadership
        Conclusion
Chapter Three
    Methodology
        Ethnography
        Case study
        Data collection
        Interviews
        Confidentiality
        Limitations and delimitations
        Documents
        Data analysis: coding
        Verification
        The role of the researcher and personal bias
Chapter Four
    Findings
        Application of distributed leadership to advance a culture of evidence
        The distributed leaders
        Distributed leadership advancing the institution's culture of evidence
        Attributes of distributed leaders who nurture a culture of evidence
        Distributive leadership themes identified with promoting a culture of evidence
        Effectiveness of the distributed leadership model and meeting culture of evidence expectations
        Summary of evidence and findings (Accreditation self-study team, 2016)
            Culture of Evidence
            Distributed Leadership
        Excerpts from the IEP
            Culture of Evidence
            Distributed leaders identified in the IEP
Chapter Five
    Discussion of the Findings
        Application of distributed leadership to advance a culture of evidence
        The distributed leaders
        Distributed leadership advancing the institution's culture of evidence
        Attributes of distributed leaders who nurture a culture of evidence
        Effectiveness of the distributed leadership model and meeting culture of evidence expectations
        Practical Implications
        Theoretical Implications
        Recommendations for Future Research
        Limitation
        Conclusion
References
Appendix A
Appendix B
Appendix C

Chapter One

Introduction

This chapter of the case study exploring distributive leadership and its relationship to establishing a culture of evidence at a two-year public institution identifies the primary issues and framework of the research. A culture of evidence is defined as using direct, valid, and reliable measures in a systemic, data-driven, comprehensive approach to understanding the quality of an institution (Dwyer et al., 2006). This chapter states the problem, the rationale for the study, the purpose of the study, and the guiding research questions, followed by the research methods used in the study. A description and list of the main terms is provided for clarification at the end of the chapter.

Traditionally, the hallmark of visionary leadership and change has been associated with higher education. This type of leadership concentrates on vision and extraordinary performance (Bass, 1985). A counter model is distributed leadership, which concentrates on leadership practice and rejects the hero leader (Spillane, 2006). Unlike the study of leadership that focuses on the individual, distributed leadership examines the construct as interacting individuals (Bennett, Wise, Woods, & Harvey, 2003). A distributed model of leadership focuses on the interactions, rather than the actions, of those in formal and informal leadership roles (Spillane, 2008).
Distributed leadership is primarily concerned with leadership practice and how leadership influences organizational and instructional improvement (Spillane, 2006). Spillane (2006) suggested that most accounts of leadership in higher education focus on people, structures, functions, routines, and roles rather than on leadership practice. This hierarchical model of leadership is abandoned in distributed leadership for a model focused on the goals of the group rather than the actions of one (Copland, 2003). Distributive leadership supports the statement of Bolman and Gallos (2011) that no one person or group controls a higher education institution. Additionally, Wisniewski (2007) stated that "higher education must develop a cadre of academic leaders who can engage the institution and its faculty/staff in change and transformation processes." Higher education institutions truly must create an environment of collaboration and shared decision making in the contemporary world of higher education. Simple, straightforward rules of operation or universalistic principles of leading have been shown to be inadequate in a world of complex, ever-changing conditions (Kezar, Carducci, & Contreras-McGavin, 2006).

As such, the definition of higher education leadership for this qualitative study focused upon Spillane's (2006) distributive leadership theory, which hypothesizes three features as essential for leadership in education:

1. Leadership practice is the central and anchoring concern for improvements in academic leadership.
2. Leadership practice is generated by the interactions of leaders, followers, and their situations; each element is essential for leadership practice.
3. The phenomenon both defines leadership practice and is defined through leadership practice.

A distributed view of leadership incorporates the activities of multiple groups of individuals in an institution who work at guiding and mobilizing individuals within the institution to implement an initiative or change process.
It implies a social distribution of leadership where the leadership function stretches over the work of a number of individuals and the leadership task is accomplished through the interaction of multiple leaders (Spillane et al., 2001). Distributed leadership, at the theoretical level, is an investigative framework for understanding leadership practice. Spillane et al. (2004) argued that the distributed perspective can serve as a tool for education leaders by offering a set of constructs that can be harnessed to frame diagnoses and inform the design process. In this respect, distributed leadership can serve as an exploratory tool that offers a lens on leadership practices within higher education. This tool provides an institution the opportunity to stand back and think about exactly how leadership is distributed and the difference made, or not made, by that distribution (Spillane & Harris, 2008). The purpose of the study was to explore how a distributed model of leadership practice influences the organizational development of a culture of evidence at a higher education institution.

Problem

Higher education now faces an unprecedented period of accelerating change, shifting public attitudes, reduced public support, questions about priorities, and national demands for greater accountability (Wisniewski, 2007). Countless factors have been used by the public and national accreditors to measure levels of institutional success. Accreditation criteria continue to evolve for higher education institutions in the United States, and requirements for accreditation have become more onerous, with required follow-up reports affecting as many as two-thirds of institutions undergoing accreditation review (Suskie, 2014). Accreditation itself has not produced this current climate; these contemporary conditions result from a confluence of forces that include public distrust of social institutions and authority in general (Eaton, 2015).
This shift in evaluation criteria from both public and higher education agencies emphasizes the fundamental view that higher education institutions can no longer focus on quality without embracing a culture of evidence. As highlighted by the New Leadership Alliance (2012), a culture of evidence is defined as evidence-based practices that indicate how effectively institutions are achieving their goals. Culp and Dungy (2012) echoed this definition by describing a culture of evidence as a means of offering protection for higher education institutions by documenting the hard data that identify the significant contributions the college makes toward the institution's mission and goals. A culture of evidence also offers higher education professionals opportunities to examine their work, make it more effective and efficient, and increase the probability that they will design and implement programs, processes, and services that really matter (Culp & Dungy, 2012). Finally, Dwyer et al. (2006) defined a culture of evidence as the demonstrated ability of a higher education organization to use direct, valid, and reliable measures in a systemic, data-driven, comprehensive approach to understanding the quality of its institution. For clarity, this study focused on Dwyer et al.'s (2006) definition of a culture of evidence.

As expectations for institutional performance and accountability in higher education have expanded over the past two decades, campuses have struggled to develop processes and strategies to promote and demonstrate the effectiveness of their institutions (Burke & Associates, 2005). There are three major players in the national accountability movement: government, academia, and the general citizenry (LeMon, 2004). Almost every day there seems to be some new report or statement on the perceived shortcomings and failures of U.S. higher education (Suskie, 2014).
Altbach (2011) spoke of the public's expectation that institutions provide evidence of learning as taxpayers realize the importance of students gaining knowledge for the good of society and the future of the country. Stakeholders, including parents, employers, students, taxpayers, and the government, are questioning and sometimes demanding evidence from higher education.

Understandably, moving an institution toward a culture of evidence, as defined above, is a complex process that requires leadership from a variety of people throughout the institution. Higher education institutions charge their leaders with the creation and execution of programs, services, and resources to promote the institution's success. In an exploration of distributive leadership and the relationship it has in constructing a culture of evidence, it is critical to define leadership.

Astin and Astin (2000) defined leadership in their report for the Kellogg Foundation as a process carried out by institutional leaders that is ultimately concerned with fostering change. As is the case in fostering a culture of evidence, leadership also implies intentionality, in the sense that the implied change is not random but is directed toward some future end or condition that is desired or valued. This results in a purposeful progression that is fundamentally value-based. Consistent with the notion that leadership is concerned with change, Astin and Astin (2000) viewed a leader chiefly as a change agent. Among the distinctive features of a leader, Onoye (2004) found that a higher education leader's ability to work in a collaborative manner with others on campus was essential to solving problems. This supports Spillane's (2006) definition of leadership as a relationship of social influence. Important to note is that leaders are not automatically those individuals on a campus who serve in a formal leadership capacity.
Ingram (1993) shared that leaders are individuals who operate at all levels within a university. These can include boards of trustees, chief academic officers, department heads, discipline coordinators, program chairs, full-time and part-time faculty, unit or non-instructional directors and staff, and administrators. Thus, all people at an institution are potential leaders.

Historically, American higher education has been considered the engine driving national and international development (Hauptman, 2009; Rudolph, 1990). However, in the past 25 years, evidence has suggested an unfavorable shift in the U.S. global standing in higher education. According to a report by the NCAHE (2005):

- The U.S. does not lead the world in college completion rates.
- Four of 10 students fail to graduate from a college or university within 6 years of their initial matriculation.
- The majority of minority students in college do not graduate.
- Large developing economies (India and China in particular) are educating more students in science and math to compete in the global economy.
- Student costs for college have grown faster than the consumer price index, and financial support programs, such as Pell Grants, have become woefully inadequate.
- One of four low-income students in the top quartile of academic ability and preparation fails to enroll in college within two years of high school graduation.

In addition to these disparaging worldwide comparisons, there have been national economic challenges decidedly intensified by the most recent economic downturn that affected the United States and most other nations. Higher education is under a microscope because it is fast becoming a necessity for economic development, rather than "a luxury or a privilege reserved for the elite" (Duncan, 2013).
The escalating costs of attending colleges and universities have resulted in a heightened awareness of the financial value added by attending a postsecondary institution and a call for increased transparency and accountability in higher education (Duderstadt, 2007; NCAHE, 2005). National figures, including former New York City Mayor Michael Bloomberg, have advised high school seniors that a service-related apprenticeship might be more appropriate for some students than going to college. The implication is that the value of a higher education degree has changed as more students take on a larger burden of debt to pay for their own education. With the increasing costs of higher education and the decreased funding support from state and local governments, bottom-line metrics such as graduation, gainful employment, and meaningful salaries and wages must be taken into account (Islam & Crego, 2014).

In late 2014, the federal government released the much-anticipated framework introduced in 2013 by President Obama as a way to systematically rate higher education institutions. The rating was to promote transparency and higher education affordability. However, in the release, the federal Education Department acknowledged how complicated it is to assess more than 7,000 colleges and universities. Released in December 2014, the framework is not an actual system but rather a plan for how one could work. Many of the performance indicators mentioned by Islam and Crego as ideal metrics are the focus of the framework, particularly the number of students completing their degree (Associated Press, 2014). Many states have moved in this direction, with several developing performance-based funding systems in an effort to enhance the strategic alignment of institutional outcomes with state needs and priorities.
According to the National Conference of State Legislatures (NCSL), 25 of the 50 states had some form of performance-based funding in place as of March 2014 (AASCU, 2014).

Given these national concerns, as well as potential funding implications, it becomes all the more important to explore how organizational leaders in higher education create an evidence-based culture that meets accountability expectations raised by stakeholders and the nation. Additionally, a review of the literature on organizational culture completed by Schneider, Ehrhart, and Macey (2014) identified the need for more research clarifying how organizational leaders influence culture. It is imperative to identify best practices and models of leadership practice to meet contemporary expectations surrounding a culture of evidence.

Purpose of the Study

A national focus emphasizing that higher education must provide evidence of its effectiveness has grown over the last several decades, reaching a pinnacle in 2006 with the Secretary of Education's Commission report, A Test of Leadership: Charting the Future of U.S. Higher Education. Within the document, the accreditation process was called upon to provide more robust accountability, to strengthen the rigor and thoroughness of its reviews, to take responsibility for what was characterized as higher education's limitations in serving students, and to serve as a catalyst, not a barrier, to education innovation (Suskie, 2014). Providing evidence related to effectiveness continues with regular compliance requests from the federal government via regional accreditation bodies, as well as increased calls for accountability from the previously mentioned stakeholders.
Operating within a culture of evidence allows higher education professionals to remain in a continuous professional and personal learning loop; ask questions that matter; build on successes; learn from failures; and design and implement programs, processes, and services to meet institutional expectations, both internal and external (Culp & Dungy, 2012). The regional accrediting bodies are but one of the mechanisms through which institutions respond to external expectations. For example, in 2016, all institutions in the United States will be asked to provide evidence for the following: the institutional record of all student complaints; consumer information for students and members of the public as required by the Higher Education Opportunity Act; institutional standing with state licensing and other accrediting agencies; and contractual relationships with third parties who deliver academic content (MSCHE, 2014). These types of federal requests for evidence are filtered to individual institutions by the regional accreditors almost immediately. Institutions that do not provide enough evidence are cited by accrediting bodies for inadequacies. The most common citations focus on a lack of evidence related to student learning and resource allocations, how assessment is used for continual improvement, and documentation demonstrating the existence of assessment processes and systems.
These citations equate to an institution's need to change, creating or improving processes and systems that foster an organizational culture of evidence.

Accreditation bodies expect institutions to develop and foster a culture of evidence by clearly articulating institutional and program-level goals; ensuring that programs and resources are organized and coordinated to achieve institutional and program-level goals; demonstrating that the institution is indeed achieving its mission and goals; and using assessment results to improve student learning and otherwise advance the institution (MSCHE, 2005). Collectively, these expectations directly influence individuals within an institution to be involved in supporting a culture of evidence. Nurturing and supporting such a culture is a complex undertaking, and the external requests for increased evidence and accountability call for the development of an array of new processes and structures. Motivating faculty and staff to engage in institutional transformation by contributing to and supporting institutional research and assessment in their daily practice is the challenge facing many institutions (Morest, 2009). Faculty and staff need motivation so they can strive to attain specific objectives, such as fostering a culture of evidence (Siddique, Aslam, Khan, & Fatima, 2011). According to the MSCHE, the accrediting region of the institution where this case study took place, some institutions still have not developed this culture after several years of the accreditation message that institutions must provide a culture of evidence.

Middaugh, former chair of the MSCHE, explained at his 2012 workshop on compliance evidence that if an institution is able to demonstrate meeting the characteristics of excellence in strategic planning, institutional effectiveness, and student learning, all of which are evidence-based standards, then the additional standards for accreditation will be met.
In the last three to four years, 80% of the two-year public institutions within the case study institution's state higher education system have had a required follow-up report related to assessment processes, a lack of documented evidence, and/or weak systems of collecting evidence. Institutions are expected to solve these shortcomings.

As defined earlier, establishing a culture of evidence is a critical national issue facing all areas of higher education. Institutions are moving toward more innovative and unconventional models, and this rapid evolution of higher education as an industry further emphasizes the need to establish a culture of evidence. The climate of higher education demands continual improvement by fostering a culture that supports accountability and evidence processes so that the organization remains in good standing with its regional accrediting body, has a good reputation among stakeholders, maintains a strong reputation for graduating career-ready students, and continues to be an institution of choice in the very competitive field of higher education.

Higher education experts, such as Michael Middaugh (2012), Linda Suskie (2014), and Trudy Banta and Catherine Palomba (1999), have repeatedly promoted leadership as a critical variable in defining the success or failure of an institution in meeting evidence and accountability expectations. It has become increasingly important to explore how leadership in higher education fosters a culture of evidence that satisfies accountability expectations, yet there is little evidence of a relationship between distributed leadership and institutional achievement (Hartley, 2007).
The purpose of this study was to explore how a distributed model of leadership practice influences the organizational development of a culture of evidence at a higher education institution.

Theoretical Framework

The relevant theories for this study include Schein's (2010) organizational culture theory and culture-embedding mechanisms, distributive leadership practice as defined by Spillane (2008), Suskie's leadership and culture of assessment concept, and Copland's preconditions of successful distributed leadership.

Schein's theory that an organization's culture is a dynamic model that is learned, passed on, and changed was explored within this case study (Schein, 1984). He further spoke to the maintenance of a culture and the importance of the leadership of an organization. According to Schein (2010), leadership and culture management are essential to understanding organizations and making them effective, and we cannot afford to be complacent about either one.

Applying Spillane's (2006) theory of distributive leadership acknowledges how leadership activities are widely shared within an organization when promoting a culture of evidence. Distributive leadership embraces a flatter, lateral model of leadership practice, which is identified and explored in this study. The study of distributed leadership expands what is traditionally a very narrow focus on hierarchical leadership in higher education.

Suskie (2009) created a series of strategies for campus leaders to enhance an institution's culture of evidence. These strategies further inform the investigation of distributive leadership and its role in promoting a culture of evidence.

Finally, Copland (2003) identified three preconditions that must exist in an organization if distributed leadership is to be successful.
Copland's themes were investigated through this case study to further identify and support the distributed leadership model established at the institution.

The overall research questions explored include the following:

1. How does this institution apply distributed leadership to advance a culture of evidence?
2. Who are the institutional members involved and engaged as distributed leaders at this institution?
3. What demonstrated behaviors and actions do these distributed leaders use to nurture the current culture of evidence at this institution?
4. How effective is the distributive leadership model in meeting the expectations associated with a culture of evidence?
5. What distributive leadership themes are identified with promoting a culture of evidence?

Leadership and Culture

The relationship between leadership and culture is multifaceted, and this interplay is a culture-influencing activity (Alvesson, 2012). To explore distributive leadership and its relationship to a culture of evidence within higher education, it is imperative to review and define organizational culture. Because culture is not a physical being, it is difficult to perceive organizational culture through analyzing its definitions and explanations (Deal, 1995). The term organizational culture is an umbrella concept for a way of thinking that takes a serious interest in cultural and symbolic aspects of organizations (Ashkanasy, Wilderom, & Peterson, 2011). As Denison (1990) concluded from a variety of studies, culture plays a significant role in the effectiveness of organizations. Additionally, Evans (1996) addressed the power of culture within the framework of organizational culture from the perspective of both the process itself and the products it yields. Culture is a means of not only transforming people's behaviors and attitudes but also forming learning models.
It is an influential factor in determining the reactions of organizational members to changes in the environment (Flint, 2000).

Remaining Chapters

Throughout this chapter, the discussion has focused on how a national accountability shift has taken place with higher education as the topic of scrutiny. Institutions are tasked with meeting accountability expectations, yet, because of differing campus cultures and leadership structures, there is no clear model or approach that all institutions can utilize. Thus, research through this case study provided insight into how a culture of evidence has been supported by a distributed leadership structure. The second chapter of this dissertation includes a review of relevant literature that distills the research by scholars and assessment leaders on the topic of distributive leadership and a culture of evidence. It identifies essential resources and reflects upon the major categories related to the research questions. Additionally, it reflects briefly upon the relationship between national culture and organizational culture, as well as between leadership and organizational culture. Chapter Three speaks to the methodology of this study and why it was selected to complete the research. In that chapter, the role of the researcher in this study is discussed, as well as any limitations that may affect the results.

Unique Terms

These terms are taken from Suskie's five dimensions of quality (2014) and will be referenced throughout this study.

Academic freedom is the right to engage in research, scholarship, inquiry, and expression without fear of repercussions.

Accountability is demonstrating to one's stakeholders the effectiveness of one's college, program, service, or initiative in meeting its responsibilities.

Accreditation standards, criteria, and requirements: accreditors use different language to describe their expectations.
The generic term requirements and the MSCHE term standards were utilized for this study.

Assessment is associated with many aspects of accreditation, particularly student learning assessment. In this study, the term assessment refers to the actual doing of an evaluation, whereas evidence is what is produced from the process of assessment.

Closing the loop refers to the fourth step in a typical four-step quality improvement cycle: using the evidence to improve quality.

Culture of evidence is the demonstrated ability of a higher education organization to use direct, valid, and reliable measures in a systemic, data-driven, comprehensive approach to understanding the quality of its institution (Dwyer et al., 2006).

Data are a set of numbers whose information is used to make clear the story that the figures are telling.

Distributed leadership is a perspective that focuses on leadership practice; a product of the joint interactions of leaders, followers, and aspects of their situation, such as tools and routines (Spillane, 2006).

Institutional assessment refers to an institution's ability to measure how it achieves its goals.

Institutional effectiveness takes institutional assessment to the next level: not only meeting the institution's purpose and goals, but also meeting stakeholder needs, serving the public good, and deploying resources effectively, prudently, and efficiently.

Institutional governance is a balance of power that leads to collaborative decisions.

Performance indicators, metrics, and performance measures are terms for measures of the quality and effectiveness of an institution.

Quality assurance describes the systems and processes used to ensure higher education quality.

Transparency refers to making evidence clear, concise, easy to locate, and relevant to stakeholders.

Chapter Two

Review of the Literature

This chapter begins with the topic of organizational culture theory, with a focus on Schein's work.
As an organic element, organizational culture is reviewed first as affected by leadership and then through the relationships organizations have with national culture. This includes a description of the changing demands, interests, and needs of higher education institutions to produce evidence that meets national expectations, and how this has led many colleges and universities to become more focused on the heightened accountability movement. The chapter then defines and discusses distributive leadership as identified by Spillane. Finally, there is a discussion of Suskie's principles on how organizations can develop cultures centered on evidence, what responsive organizations do to meet the demand for higher accountability measures, and how leaders can enhance a culture of evidence.

Leadership and organizational culture. In a 2013 review of the empirical literature on leadership and organizational culture by Schneider, Ehrhart, and Macey, Schein's 2010 work was cited as the most common source for organizational assumptions and values. His culture-embedded mechanisms describe what leaders do to articulate their values and reinforce them. Schein argued that these culture-embedding mechanisms, primary and secondary, affect culture to the extent that they have been found useful by the organization in coping with the world in which it functions (Schein, 2010).

Primary mechanisms. Primary embedded mechanisms speak to the visible actions of a leader. Five main areas are identified. What a leader identifies as a priority and shows passion for is the first primary mechanism. Emotion and reaction to crises expose the leader's values and form the second. Third, a leader's actions, or role modeling, are often emulated by people in an organization as the correct behavior. The final two primary mechanisms identified by Schein are the allocation of rewards and the criteria for selection or dismissal.
Having processes that include desired behaviors and consider the personality of the person involved is embedded within these last two criteria (Schein, 2010).

Secondary mechanisms. The secondary mechanisms identified by Schein (2010) are considered methods by which a leader may indirectly change an organization's culture. First, the actual design of the organization's structure has a subtle effect on how the organization operates. Second, procedures and systems of an organization can be aligned to meet desired cultural goals. Third, the layout of a facility, such as office placements, can subconsciously reflect the values and culture of the organization. Fourth, all organizations have stories, myths, and legends about important events and people; these can change an organization's culture and can be powerful when they have grass-roots support and credibility. Finally, the fifth secondary mechanism includes the formal statements of the organization's philosophy, mission, and charter. These are the public face of the organization.

The additional two studies reviewed by Schneider, Ehrhart, and Macey (2013) focused upon leader behaviors, not values. The first was a study completed by Ogbonna and Harris (2000), which focused on the effects of three leadership styles on organizational performance. They found partial support for culture as a mediator, with some leader behaviors having direct effects on performance (Schneider, Ehrhart, & Macey, 2013). The last study reviewed by Schneider, Ehrhart, and Macey (2013) concentrated on the consistency of leadership and the strength of the organization's culture. That study, completed by Tsui et al. (2006), included interviews revealing that some leaders were able to work in the background to build strong systems that resulted in stronger organizational cultures.
Schneider, Ehrhart, and Macey (2013) identified the need for additional research that clarifies how leaders influence culture, especially research focused on the effects of Schein's (2010) culture-embedded mechanisms.

Organizational culture theory. Many have investigated the theory of organizational culture to explore its possible relationship to organizational performance and effectiveness. Coupled with the organizational studies literature, these studies highlight the significance culture has in dictating an organization's ability to survive and succeed (Aarons, 2007; Ferlie et al., 2002; Klein & Sorra, 1996; Kotter, 1995). One such study, The Change Masters (1983) by renowned Harvard Business School professor and author Rosabeth Moss Kanter, examined organizational change in relation to cultures that promoted innovation implementation as opposed to those that did not. The findings demonstrated that positive organizational cultures are linked to increased staff alignment, resulting in enhanced organizational effectiveness, heightened consensus regarding strategic direction, increased employee productivity, and advanced levels of employee commitment (Kanter, 1983). In contrast, research by Barney (1986) showed that negative organizational cultures tend to negate innovation and change initiatives.

In his MIT Sloan Management Review article, Coming to a New Awareness of Organizational Culture, Schein (1984) explained that to thoroughly understand a culture and to truly discover a group's values, it is imperative to delve into the underlying assumptions, which are typically unconscious but which actually determine how group members perceive, think, and feel. In his analysis of the transmission and maintenance of culture, Schein focused on the role of leadership.
Many powerful ways exist in which leaders are able to embed their own assumptions in the ongoing daily life of their organizations: through what they pay attention to and reward, through the role modeling they do, through the manner in which they deal with critical incidents, and through the criteria they use for recruitment, selection, promotion, and excommunication, they communicate both explicitly and implicitly the assumptions they actually hold (Schein, 1984). Schein (1990) concluded that the strength and type of culture are critical to the organization's success and survival, which agrees with Kanter's (1983) findings noted above. According to Schein (1990), institutional leaders should put their energies into developing a strong organizational culture that supports the following activities: (a) managing change; (b) achieving goals; (c) coordinating teamwork; and (d) customer orientation in the organization—activities that he believed would contribute to organizational effectiveness. Schneider, Ehrhart, and Macey (2013) further developed the linkage between leaders and organizational culture by stating that organizational culture concerns the implicit values, beliefs, and assumptions that employees infer, basing their inferences on the stories, myths, and socialization experiences they have and the behaviors they observe on the part of leaders.

National culture and organizational culture. National culture has been studied to examine whether it affects an organization's culture. In general, the results show that when national culture is correlated with the organizational culture of a company, a significant main effect invariably is found (Gelfand et al., 2007). Sagiv et al. (2011) reported that within organizations and nations there is also significant variability in individual values. One proposal by Martin (2002) for reviewing a culture is to do so at different levels.
A macro/micro lens allows a review of organizational cultures and the differences between organizations within a nation. One level deeper, the micro level, would reveal subcultures within an organization. Schneider, Ehrhart, and Macey (2013) expressed the need for additional multilevel research on organizational culture.

National culture of higher education accountability supports a culture of evidence. The escalating emphasis on accountability is related to the national perception that colleges and universities do not plan carefully or assess their effectiveness (Hollowell, Middaugh, & Sibolski, 2006). A review of the nation's regional accrediting bodies, the agencies responsible for higher education quality, reveals common accountability expectations: institutions demonstrate evidence of assessments used for improvement, planning takes place based on these assessment results, effectiveness is documented, and institutional resources are analyzed and allocated as part of the accountability process. It is easy to understand the importance of addressing accountability, as it directly affects an institution's accreditation status, which in turn directly affects the federal funding for and the reputation of institutions.

Accreditation began in the 1950s as a way for higher education institutions to be accountable and to self-regulate the quality of higher education through a peer review process (Brittingham, 2008). The American Council on Education (ACE) released the report of its national Task Force on Institutional Accreditation in June 2012. The goal of this task force was to bring forth recommendations for accrediting organizations to examine, with the end product of implementing changes in each of six themes/categories.
The six themes were: (a) increase the transparency of accreditation and clearly communicate its results; (b) increase the centrality of evidence about student success and educational quality; (c) take prompt, strong, and public action against substandard institutions; (d) adopt a more risk-sensitive approach to regional accreditation; (e) seek common terminology, promote cooperation, and expand participation; and (f) enhance the cost effectiveness of accreditation (ACE, 2012). Each of these themes spoke to how institutions need to foster a culture of evidence that meets accountability expectations. This directly affects higher education leaders across the nation and their perceptions of accountability and of fostering a culture of evidence.

Suskie (2014) highlighted the current national culture of evidence, which pressures institutions to be accountable for spending taxpayer dollars appropriately, along with the need to ensure that families' tuition dollars are being spent well and that students are not left with high debt. Suskie (2014) indicated three general issues that affect accountability and a culture of evidence at the national level: economic development, return on investment, and the changing college student. Accreditation agencies are called upon to ensure that these issues are being addressed because accreditation remains a well-regarded seal of approval on college quality (Suskie, 2014). This is because the accreditation process respects and facilitates the diversity and complexity of U.S. colleges, and accreditation can have a significant effect, forcing necessary institutional improvements (Suskie, 2014). Thus, it serves as a measure of a culture of evidence.

Culture of evidence. This section reviews current literature that supports the Dwyer et al.
(2006) definition that a culture of evidence is the demonstrated ability of a higher education organization to use direct, valid, and reliable measures in a systemic, data-driven, comprehensive approach to understanding the quality of its institution.

A seasoned educator, former accreditation vice president for the MSCHE, and now an international consultant on best practices in accountability and accreditation, Suskie (2014) has captured the development of higher education expectations in several publications. Her most recent contribution to the field of higher education comes in the form of a book entitled Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability (Suskie, 2014). This resource for higher education leaders speaks to the commitment institutions must now make in an era focused on outcomes and on a culture of evidence to meet those outcomes. Suskie (2014) shared the major criticisms of accreditation as an institution in and of itself. Noting that accreditation is perceived as insufficiently rigorous, inconsistent, and unreliable, Suskie observed that many critics believed colleges were too complex for accreditors to ensure the across-the-board quality the national culture was hoping to achieve. Additionally, the lengthy process was seen as slow to remove the accreditation of sub-par colleges and, as a pass/fail exercise, it does not truly recognize innovation and excellence. Even with these criticisms, Suskie identified that accreditation holds incredible value for institutions and, as such, has a significant effect in forcing necessary improvements. According to Suskie (2014), this commitment to meeting accreditation standards results in a quality college, one that is not static and one that can respond to the fast-changing world of higher education in the 21st century.
The focus on a culture of evidence demonstrates publicly and clearly that an institution is a good investment and convinces students, government officials, donors, parents, and accrediting bodies to support it and invest in it (Suskie, 2014). Suskie (2014) identified four pillars that foster a culture of evidence in a quality institution:

Identifying success—A quality institution will continually measure its success as it progresses in meeting campus goals. Suskie noted that the measures must fit the institution's purpose, values, goals, and stakeholder needs. This includes how institutions provide performance indicators that demonstrate effective and efficient deployment of resources, as well as their students' progress toward meeting the stated learning outcomes.

Useful evidence—Suskie asserted that institutions should recognize whether their evidence is of sound quality by reflecting on whether the evidence could be used to inform plans, goals, decisions, and actions. Any measurement is only as good as the goal it is intended to measure. Genuine inquiry will produce evidence that is of genuine interest to the college community and the college's key stakeholders. Evidence needs to be reasonably accurate and truthful, not perfect.

Targets set and justified—Results without benchmarks will cause a culture of evidence to halt. Institutions need a good sense of the kinds and levels of results that are good enough to conclude that the institution is successful in achieving its goals, as well as the kinds and levels that indicate that institutions are not where they should be. Suskie emphasized that comparisons of numbers are essential to developing meaning from the data, and the benchmarks need to be justifiably rigorous. This can occur internally or with external comparisons, where targets are informed by peers.
Important to note is that no one perspective is perfect, and institutions need to choose the perspective that is most meaningful and useful and that informs improvement.

Transparency—Suskie stressed that sharing results and evidence in a clear, easy-to-find, and relevant manner is an aspirational element of a culture of evidence. A strong culture of evidence shares results with various stakeholders, not just accreditors. Different stakeholders want and need different information, at various levels of detail and in different formats.

Suskie's concept: Enhancing an institution's culture of evidence. Suskie's work has supported Schein's (2010) thoughts about how organizational leaders' actions and values influence a culture. She identified key areas to consider about higher education leaders and their effects on a culture of evidence. According to MSCHE (2008), as adopted by Suskie, higher education leaders, including distributive leaders, who facilitate a culture of evidence should reflect and embrace the following actions and values:

Show a personal commitment to assessment and a culture of evidence.
Stimulate interest in assessment and a culture of evidence.
Recruit people who support a culture of evidence.
Reward and incentivize those engaged in assessment and a culture of evidence.
Value professional development to strengthen the culture of evidence.
Allocate resources that support a culture of evidence.
Supply time for faculty and staff to engage in and complete culture-of-evidence initiatives.
Monitor measurable outcomes that relate to a culture of evidence.
Celebrate, reward, and recognize achievements that support a culture of evidence.

A review of one accrediting body's publications, those of the MSCHE, showed that the organization accredits 521 institutions that enroll over five million students annually (MSCHE, 2010). The institutions accredited by MSCHE in 2010 received over $14 billion in federal Title IV funds (student financial aid; MSCHE, 2010).
The agency reported that non-compliance actions taken by MSCHE against its accredited colleges and universities between 2005 and 2010 increased 233% for institutions receiving warnings. Equally, the number of accredited institutions placed on MSCHE accreditation probation increased 233% from 2008 to 2010 (MSCHE, 2010). In recent years, as reported at the 2014 MSCHE annual conference, the number of institutions placed on accreditation warning has dropped since 2010. Follow-up reports remain steady, with a new focus on the quality of excellence related to evidence provided by institutions (Sibolski, 2014).

Mentkowski et al. (1991) contended that the reason assessment has failed to have the effects many had hoped for is that institutional cultures do not allow other ways of knowing to surface in the evaluation process. Banta and Moffet (1987), Bresciani, Gardner, and Hickmott (2009), and Suskie (2009) have noted the difficulty in establishing evidence that documentation of student learning assessment actually improves student learning at any level.

Distributive leadership. Higher education leaders are vital elements in meeting the demands of evidence-based accountability (Maki, 2006; Middaugh, 2012; Suskie, 2014). Spillane (2006) offered what has been identified as an alternate view of leadership in education: a distributed leadership perspective moves beyond the Superman and Wonder Woman view of education leadership. Furthermore, Spillane (2006) found the heroics of leaders as a focus of study problematic for four primary reasons. First, heroic epics typically equate education leadership with education administrators and their courageous actions. Second, most accounts of education leadership pay limited attention to the practice of leadership. Third, when education leadership practice is considered, it is depicted in terms of the actions, great or otherwise, of one or more leaders.
Concentrating on distinct actions fails to capture the importance of interactions. Fourth, in the heroic leadership tradition, “leadership is defined chiefly in terms of its outcome” (Spillane, 2006, p. 4).

Gressick and Derry's (2010) definition of distributed leadership mirrored that of Spillane. In a study examining emergent leadership in small online collaborative learning groups of math and science teachers, they found that leadership was a social process, with all members of all groups taking some part in their group's leadership (p. 260). This perspective on distributed leadership supports the notion that leadership does not reside solely in the offices of the formal leaders of an institution, as traditional definitions of higher education leadership suggest. A distributed understanding of leadership recognizes that leading institutions of higher education requires many leaders. A distributed leadership perspective acknowledges and incorporates the work of all the individuals who play a part in the practice of leadership (Spillane, 2006).

Spillane (2006) suggested that there are three leadership responsibility arrangements in school settings:

Division of labor—leaders in different positions perform various leadership functions with considerable overlap among positions; a neat division of labor, however, is not standard operating procedure in higher education.

Co-performance—two or more leaders perform a leadership function or routine in a collaborative fashion, with evidence of co-performance for various leadership routines.

Parallel performance—leaders work in parallel to execute the same leadership functions or routines; they sometimes duplicate each other's work.

In contrast, Bolden (2011) offered a strong cautionary warning: while leadership may be distributed, power in an institution often is not.
The concept of distributed leadership may be invoked by senior managers to encourage engagement and participation in institutional activities while masking substantial imbalances in access to resources and sources of power (p. 260). Therefore, trust is paramount in utilizing a distributive leadership model.

Copland (2003) identified three preconditions that must exist in an organization if distributed leadership is to be successful:

the development of a culture within the institution that embodies collaboration, trust, professional learning, and reciprocal accountability;

strong consensus regarding the significant problems facing the organization; and

rich expertise in approaches to improving teaching and learning among all those working in the institution.

To be successful and achieve the effects that have been promised, distributed leadership needs to recognize the political nature of leadership within organizations and the imbalances in the distribution of power and influence (Gressick & Derry, 2010).

Conclusion. In conclusion, the literature supports that the need for higher education institutions to foster a culture of evidence has grown in the last few decades. The available evidence highlights the potential for distributed leadership to create a culture of evidence that meets stakeholder expectations. Strong leaders are needed to facilitate, support, and sustain the response to the call for accountability by numerous constituents. The literature indicates that a culture of evidence under a distributive leadership model may be effective.
What is not indicated in the literature is who these leaders are within a higher education institution, how they foster a culture of evidence, and whether the distributive model is effective in meeting the expectations of a culture of evidence.

Chapter Three

Methodology

In this study, a qualitative research design was used to analyze a culture of evidence and to explore the effects of organizational leadership on that culture. Using a qualitative ethnographic method, this exploratory case study was designed to explore distributive leadership and how it affects a culture of evidence through the lens of participants, as well as through a study of archival records related to a culture of evidence at one higher education institution. This method allows for maximum descriptive data during a defined period of time; it considers the voices of relevant stakeholders and the exchanges between them, and it identifies direct evidence of the culture of evidence through institutional records and policies. According to Merriam (1998), a study that focuses on school culture, or on a group of students or behaviors in the classroom, should use an ethnographic case study format. A case study provides the researcher with real-life events in a meaningful and holistic way when the case is not distinctive from its context (Yin, 2003). Anthropologists who explore communities of people and their culture frequently use ethnography. Ethnography as a body of methods analyzes a specific culture from the viewpoints of its members (Hammersley & Atkinson, 1995; Hatch, 2002, p. 21). This study is both a case study, since its scope is limited to one institution, and an ethnographic study, because it strives to describe the particular institution through a cultural context.

Ethnography.
When researching a group that shares a culture, such as faculty and staff at one institution, Creswell (2012) stated that an ethnographic study helps to provide an understanding of a larger or representative activity or process. The researcher's role in this type of study can be as a participant or as an observer; however, the researcher should have long-term access to the group that shares the culture. The work of educational anthropologists falls under the category of ethnography because of the content they research: subcultures within schools and institutions. Creswell (2012) stated that educational anthropologists can research micro-ethnographies of small work groups within classrooms or of whole schools and institutions.

Fieldwork is essential to ethnography. Wolcott (1994) defined fieldwork as a form of inquiry that requires the researcher to be immersed personally in the ongoing social activities of those being studied. Whitehead (2005) identified several attributes associated with ethnography. First, ethnography is a holistic approach to the study of cultural systems that allows for understanding of sociocultural contexts, processes, and meanings. It is a highly flexible and creative process of discovery, making inferences, and continuous inquiry in an attempt to achieve emic validity while researching the culture from both emic and etic perspectives. Emic and etic perspectives are simply defined as research perspectives from within the group (i.e., emic) or from outside the group (etic; Kottak, 2006). Finally, ethnography requires the continual recording of field notes, resulting in an integrative, reflexive, and constructivist process.

Observations in this case study included participant observations that assisted in reaching an emic, or indigenous, sense of the social setting being studied (Whitehead, 2005).
The etic research approach allowed for the focus of an outside observer to interpret the culture and emphasize what was important (Kottak, 2006). As part of the observations, Spradley (2004) identified nine phenomena that might occur in a setting with human interaction. Identifying behaviors in the form of acts, activities, and events or related events is included among the phenomena. Additionally, the researcher should identify the physical space, the objects in that space, and the time the observations are taking place. Finally, the researcher should identify the goals associated with any observed behaviors and whether the behaviors are carried out with any emotion. Whitehead (2005) added to these by stating that the language used by participants, the interactive patterns, the discourse content, and the participant groups in the setting also provide meaning to the study.

Little research has been conducted about academic leadership affecting a culture of evidence. Wood (2006) completed two ethnographic interview studies to understand the efficacy of the faculty development process on assessment. Wood contributed a chapter to the Driscoll, deNoriega, and Ramaley (2006) book on accreditation, in which Wood examined the use of interview studies to understand and document the effects of campus activities and processes. Wood utilized interviews as a tool for probing, for deeper self-study, and for reflection and discovery beyond surface-level assessment. In using this kind of study, the researcher can analyze, understand, critique, and appreciate the benefits and challenges of the activity to promote long-term improvement (Wood, 2006). Wood contended that the use of interviews, versus the utilization of a survey, would allow for a more in-depth exploration of faculty insights, perceptions, and evaluation of the assessment experiences.
The findings from the ethnographic studies yielded common themes between the two interview studies: the benefits and value of assessment work; building consensus on what learning outcomes mean to faculty; concerns about the agenda of external forces; consideration of learning outcomes from a student's perspective; bias; collaboration; and the influence of faculty status on participation (Wood, 2006). These studies have served as a way to gather information about perceptions and experiences that might easily be missed by other types of data collection methods.

Case study. A case study provides the researcher with real-life events in a meaningful and holistic way (Yin, 1993). Using multiple sources of data, this case study has been designed to bring out the details from the vantage point of the participants, or those directly immersed in the organization. Important in a case study design is selecting a research site, in this case an institution, that is representative of other institutions. Selection of this institution included the following criteria:

the institution is regionally accredited,

the institution has engaged in accountability initiatives in response to accreditation actions, and

the institution volunteered to participate in the study.

In a case study, multiple individuals and groups must be considered, as well as the interactions between them. This multi-perspective analysis allows for the voices of many to be included, along with the interactions between all relevant groups of individuals. Case studies tend to be selective, concentrating on one or two issues that are central to understanding the organization being examined.

The site location is a two-year public institution. A regional body has accredited the organization for over five decades; yet, in 2011, 2013, and 2014 the institution was required to complete follow-up reports for the agency, one for each year identified.
These reports have been associated both with accountability and with providing enough evidence to ensure compliance with several accreditation standards.

Data collection. Triangulation can occur with multiple pieces of evidence, including data, theories, methodologies, observations, interviews, and other sources. In this study, three essential elements were identified and used for triangulation: records, interviews, and observations. The need for triangulation arises from the ethical necessity to confirm the validity of the processes. In case studies, this can be accomplished by using multiple sources of data (Yin, 1984). The goal of a case study is to establish meaning and explore a particular topic. Yin's (1984) pattern matching from data sources is applicable to case studies within higher education. Yin (1994) identified six sources of evidence as a basis for collecting data; this case study examined the following sources: documentation, archival records, interviews, direct observation, and participant observation. Not all of these sources need to be used in every case study; however, using multiple sources of data adds reliability to the study (Yin, 1994). Many of these sources of evidence can be used in concert, but no single source has a complete advantage over the others. As such, in this case study, as many sources as were relevant were used.
Table 1 indicates the strengths and weaknesses of each type.

Table 1.

Types of Case Study Evidence

Documentation
Strengths: stable (can be reviewed repeatedly); unobtrusive (exists prior to the case study); exact (names, etc.); broad coverage (extended time span)
Weaknesses: retrievability can be difficult; biased selectivity; reporting bias (reflects author bias); access may be blocked

Archival records
Strengths: same as above; precise and quantitative
Weaknesses: same as above; privacy might inhibit access

Interviews
Strengths: targeted (focuses on the case study topic); insightful (provides perceived causal inferences)
Weaknesses: bias due to poor questions; response bias; incomplete recollection; reflexivity (interviewee expresses what the interviewer wants to hear)

Direct observation
Strengths: reality (covers events in real time); contextual (covers event context)
Weaknesses: time-consuming; selectivity (might miss facts); reflexivity (observer's presence might cause a change); cost (observers need time)

Participant observation
Strengths: same as above; insightful into interpersonal behavior
Weaknesses: same as above; bias because of investigator's actions

Note: Yin (1994)

A brief description of each source of evidence is provided below.

Documents can include letters, memoranda, agendas, study reports, or any items that could add to the database. The validity of the documents was carefully reviewed to avoid incorrect data being included in the database. One of the most important uses of documents is to corroborate evidence gathered from other sources. The potential for overreliance on documents as evidence in case studies has been criticized; there is a danger of this occurring if an inexperienced investigator mistakes some types of documents for unmitigated truth (Yin, 1994).

Interviews are one of the most important sources of case study information. An interview can take one of several forms: open-ended, focused, semi-structured, or structured. Semi-structured interviews were used to take advantage of the flexibility of respondents' responses.
It was expected that stories would be used as a tool during the interview process. Telling stories is one of the oldest traditions in all cultures of the world (Mears, 2008). The use of tape recorders during the interviews was left to the discretion of the interviewees.

Direct observation in a case study occurs when the investigator makes a site visit to gather data. The observations can cover formal or casual activities, but the reliability of the observation is the main concern. Using multiple observers is one way to guard against this problem. However, only one observer was used for this case study, and she was immersed in the culture as a participant, not as a visitor.

Participant observation is a unique approach to observation and was part of this case study. This technique is used in studies of organizations and frequently in educational anthropological studies. The main concern is the potential bias from the researcher as an active participant.

Interviews. Utilizing purposeful sampling, eight individuals from different subgroups across campus were invited to be interviewed, and all eight accepted and participated in the interview process. Subgroups included administration, faculty, both full-time and part-time staff, and non-teaching professional faculty. The selected individuals from the faculty, staff, and administration of the college have worthy and significant viewpoints of the accountability processes. They are qualified individuals who have experience with the leadership practices used by different groups or individuals on the campus. Multiple perspectives offer greater validity to the findings (Hatch, 2002). According to Fraenkel, Wallen, and Hyun (2012), researchers use their judgment to select a sample that they believe, based on prior information, will provide the data they need from a limited pool of candidates.
The following criteria were used to identify the interviewees:

1. They had three or more years with the institution under study.
2. They functioned in a role beyond their normal job description.
3. They were selected from a cross-representation of functions within the institution.

Interviews were completed face-to-face, on campus, in a confidential venue. Blocks of 60 minutes were scheduled, with the expectation that the time frame might be exceeded to capture rich points of discussion as needed. To maintain the essence and accuracy of the interviewees' words, the interviews were audio recorded. The recordings are secured in a locked area of the researcher's office and have a purge date of one year. Semi-structured interviews were employed to develop thick descriptions, and the text from the interviews was analyzed (Creswell, 2012; Hesse-Biber & Leavy, 2011). A semi-structured design permits the exploration of new topics by allowing a conversation to develop around what the interviewee finds most relevant (Hesse-Biber & Leavy, 2011). The data collection and analysis process was interactive, which is referred to as an iterative process (Hesse-Biber & Leavy, 2011). To encourage the participants to characterize their insights about organizational leadership and its influence on creating a culture of evidence, open-ended questions were used for this case study (Appendix A). Under the semi-structured interview model, the questions served as a guide to foster discussion as a potential way to uncover unanticipated themes or topics. This allowed the researcher to move through questions and engage in dialogue that best brought out descriptive data (Gioia & Thomas, 1996).

Using a panel of experts, the semi-structured interview questions were vetted as a way to complete a content analysis of the questions. These experts were not included in the study and had served on the institution's assessment committee at some point in their respective careers.
The content analysis allowed others to assess informally whether the tool accurately measured, or would be effective in obtaining, what was intended (Creswell, 2002). A small pilot of the questions was conducted with two volunteers; neither person was interviewed for the research.

Confidentiality. The following strategies were employed to maintain confidentiality. Each participant received and completed an informed consent form, which included an explanation of the purpose of the study, the procedures, the benefits, the rights of the participant to withdraw at any given time, and the anonymity or confidentiality of the participant's identity. A copy of the informed consent form was provided to the participant. To ensure anonymity, pseudonyms were assigned to each interviewee. Audio recordings allowed for precise transcriptions, and ample notes were taken as a way to expand the data and provide additional researcher perspective and reflection. To maintain the integrity and accuracy of the transcripts, the researcher transcribed the interviews, which provided for the possibility of enhancing the trustworthiness and validity of the data-gathering techniques (Hesse-Biber & Leavy, 2011, p. 304). The transcription was processed within three weeks of each interview; the interviewee was able to verify the transcript and to offer edits, if needed. This step helped guard against potential ethnographic researcher bias. Furthermore, all documented data collected have been stored on a secure computer, and any handwritten notes are maintained in a locked cabinet. Additionally, to protect confidentiality, no direct quotes related to the institution's accreditation are mentioned.

Limitations and delimitations.
The potential disadvantages of the interview process include the following (Creswell, 2002):

1. Data are filtered by a researcher who might be biased.
2. An interviewee might lead a researcher in a false direction.
3. The presence of a researcher might alter the comfort or freedom of an interviewee, thereby skewing the results.

These disadvantages were addressed by the researcher, who explained to each interviewee that there might be a possible bias because of the researcher's intimate knowledge of and association with accountability and accreditation. To address any erroneous directions of meaning taken by the researcher, interviewees were asked to review their transcribed dialogue to offer a direct opinion about the accuracy of the content and to ensure intent was captured adequately. Interviewees were reminded of their rights as human subjects, highlighting the personal nature of the confidential interview. Furthermore, Hesse-Biber and Leavy (2011) tied the moral integrity of the researcher to the validity and trustworthiness of the research. Therefore, throughout the research process, this researcher followed ethical codes of conduct, utilizing the implementation process shown in Table 2.

Table 2.

The Sequence of Activities for the Interview Process

1. Contact the provost of the institution. Purpose: to ask for permission to conduct research and outline the research process.
2. Define potential participants. Purpose: to establish clear expectations of institutional participation.
3. Create semi-structured interview questions and review them with two expert panelists. Purpose: to establish valid questions that will capture and guide the interview process.
4. Obtain participant information and enter it into a database. Purpose: to establish contact information for participants.
5. Establish dates for potential interviews. Purpose: to frame the interview process.
6. Send an initial contact letter requesting participation and outlining IRB requirements.
Purpose: to encourage participation and ensure the anonymity and rights of participants.
7. Contact each participant. Purpose: to build rapport, answer questions, and confirm the interview time.
8. Send a follow-up e-mail clarifying the interview time, purpose, and process. Purpose: to provide information, encourage participation, and illustrate the value.
9. Conduct the interview, transcribe the data, and read all material. Purpose: to gather data and initiate coding.
10. Send a handwritten note of appreciation to each participant after the interview. Purpose: to thank participants.

Documents. To provide supplementary insight into how leadership fosters a culture of evidence, selected internal and external documents were reviewed. Relevant documents provided a supporting text-based source of essential information and contributed to painting a picture of how an institutional community builds a culture of evidence (Creswell, 1998). Depending on relevance, documents included, but were not limited to, the following sources: senate minutes, board of trustees minutes, academic assessment committee minutes, institutional assessment committee minutes, communication from the office of the president, accreditation documents, budget information, and written communication.

All documents were coded for common terms representing concepts, issues, ideas, and other elements associated with a culture of evidence. Document review spanned the time frame from when the institution was first required to respond to accreditation deficiencies to the present.

Data analysis: coding. Ethnographers present the description, themes, and interpretation within the context or setting of the culture-sharing group (Creswell, 2012). According to Hatch (2002), “data analysis is a systematic search for meaning.” Organization of the data took place following the transcription of each interview. In this ethnographic study, the researcher looked for shared patterns of behavior, beliefs, and language to code (Creswell, 2012).
Behavior refers to the action taken by a person or group of people within a cultural setting. How those individuals or groups think about or perceive concepts within a cultural setting is a belief. Finally, the communication exchange between and among the individuals and groups within the cultural setting is the language pattern to be coded.

Coding is an ethnographic tool used to place data from interviews, observations, and documents into like categories (Hatch, 2002). During the coding process, the researcher identified and combined multiple sources of raw data to answer the specific research questions (Wolcott, 1994).

The first step used in the analysis was open coding (Creswell, 1998), in which words or phrases from the transcriptions were placed into categories. Like phrases or concepts emerged, and themes became evident. That information was grouped and labeled (Appendix B and C). New categories surfaced during the second read of the data. The reread allowed the researcher to construct enhanced associations of information and themes. This is known as axial coding (Creswell, 1998). Axial coding allowed the direction for analysis to emerge from the data, as opposed to creating categories from theory or research objectives, because it was completed typologically (Hatch, 2002). Using open and axial coding, the researcher examined causation, outcomes, and effects, enabling meaning to be made from the data (Creswell, 1998). Finally, selective coding was completed, which interrelated the categories and developed the case study story (Creswell, 2012). Within the final story, there are ideas for further research.

Verification. Qualitative studies use verification to determine whether the data and findings from the research are trustworthy and validate a researcher's understanding of the data (Creswell, 1998). The goal of a qualitative study is to provide data from interviewees that are clear, concise, and understandable.
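The open, axial, and selective coding sequence described in the data-analysis section above can be pictured as a progressive grouping of excerpts into categories and then into a core theme. The following is a minimal illustrative sketch only, not the researcher's actual procedure; the excerpts, code labels, and category names are invented for demonstration, since the study's coding was performed manually by the researcher.

```python
# Illustrative sketch of open -> axial -> selective coding.
# All excerpts and labels below are hypothetical examples.
from collections import defaultdict

# Open coding: label raw transcript excerpts with first-pass codes.
open_codes = [
    ("We all share the assessment work", "shared responsibility"),
    ("The committee drives the templates", "committee leadership"),
    ("Everyone owns the evidence process", "shared responsibility"),
]

# Axial coding: relate open codes to broader categories that emerge
# from rereading the data.
axial_map = {
    "shared responsibility": "distributed leadership",
    "committee leadership": "distributed leadership",
}

def axial_group(codes):
    """Collect excerpts under the axial category their open code maps to."""
    grouped = defaultdict(list)
    for excerpt, code in codes:
        grouped[axial_map.get(code, "uncategorized")].append(excerpt)
    return dict(grouped)

# Selective coding: identify the core category that interrelates the
# others and anchors the case study story.
themes = axial_group(open_codes)
core_theme = max(themes, key=lambda t: len(themes[t]))

print(core_theme)               # distributed leadership
print(len(themes[core_theme]))  # 3
```

The sketch makes one design point concrete: the axial map is built from the data during rereading rather than fixed in advance, which mirrors the typological, data-driven direction of analysis described above.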
Member checking is a process in which the researcher verifies the findings with study participants; this can include verifying the accuracy of themes and interpretations (Creswell, 2012). The following verification procedures were used to achieve the desired goal: deep descriptions, triangulation, member checks, and reporting of all potential and existing personal biases.

First, to depict an adequate image of the data using substantive and germane information from the interviews, fine points were provided, including appropriate narratives with individual perspectives. This lent greater credibility to the data interpretation. Second, using multiple data collection documents and interviewees allowed for triangulation of the data, providing differing perspectives for confirming and supporting the findings (Guion, 2002). Triangulation of the data occurred while the coding was in process and transcripts from diverse interviewees were reviewed, lending greater credence to the identified perspectives.

The role of the researcher and personal bias. In an ethnographic case study, the researcher's role is to initiate the exploration, verify that all appropriate safeguards are followed, recognize all personal beliefs and experiences that might influence data interpretation, record the data correctly, and utilize structure to analyze and then present themes (Creswell, 1998; Merriam, 1998). In many ways, personal bias could support and enhance the analysis of the phenomenon studied. The researcher acknowledged her insider relationship to higher education and accreditation as both a strength of and challenge to this study and the research.

As identified, an insider research approach was used to capture the qualitative data for this study. Brannick and Coghlan (2007) equated insider research to self-ethnography and drew on Alvesson's (2003) description.
Alvesson (2003) categorized self-ethnography as a study in which the researcher-author describes a cultural setting to which he or she has “natural access” and in which he or she is an active participant, more or less on equal terms with other participants. The researcher works and/or lives in the setting and uses that experience, knowledge, and access to empirical material for research purposes. Coghlan (2007) stated that insider researchers have pre-understanding and access and want the choice to remain a member when the research is completed. Because insider researchers have a personal stake and substantive investments in the setting, the research is typically perceived as problematic and often disqualified for not conforming to standards of intellectual rigor (Alvesson, 2003; Anderson & Herr, 1999; Anderson, Herr, & Nihlen, 1994). Brannick and Coghlan (2007) challenged this viewpoint, concluding in their case for insider academic research that, in whatever research tradition it is undertaken, insider research is not only valid and useful but also provides valuable knowledge about what organizations are really like, knowledge that traditional approaches may not be able to uncover. In their view, insider research is not problematic in itself and is proper research in whatever paradigm it is undertaken (Brannick & Coghlan, 2007).

Reflexivity in ethnography refers to the researcher being aware of and openly discussing his or her role in the study in a way that honors and respects the site and participants (Creswell, 2012). Being reflexive can lead to conclusions in the study that are tentative or inconclusive, leading to new questions to answer (Creswell, 2012). Fetterman (1998) indicated central ethical codes for ethnographers: do no harm to the people or the community under study, and respect the rights of the people, the integrity of the data, and people's way of life.
The researcher did not impose superiority over the people under study, and the researcher understood that her role was not to judge but to learn. As an insider, the researcher was aware of the strengths and limits of her pre-understandings so that she could use her experience and theoretical knowledge to reframe the understandings of the study. The researcher was cognizant of the demands that both roles, her role in the organization and her role as the researcher, made on her. This supported the researcher's commitment to maintaining ethical research standards.

Chapter 4

Findings

Application of distributed leadership to advance a culture of evidence. The Board of Trustees (BOT) is the means by which authority and responsibility are assigned, delegated, and shared at the institution. Policy and decision-making are vetted in the senate. With these two components, there is a pattern of a distributed collegial governance and leadership structure to carry out the institution's goals and objectives surrounding a culture of evidence.

Figure 1. Leadership organizational chart highlighting individuals leading a culture of evidence. Note: This figure illustrates where the distributed leaders are active within the institution.

Two positions at the college deal directly with fostering a culture of evidence: the coordinator of accreditation and assessment and the director of institutional research. Each position supports the assessment committees in an ex officio role. They also serve as assessment and data resources for the entire institution.

The document review identified two committees, the Academic Assessment Committee (AAC) and the Institutional Assessment Committee (IAC). Each committee reports to the college's senate, an entity that includes membership from the faculty, non-teaching professionals, and staff.
Administration, adjuncts, and students are not voting members of the college senate, but individuals from those groups serve on committees of the senate. The charges of the two committees, as identified by the College Senate Articles of Governance (2014), are articulated below.

AAC:
1. to review and make recommendations on the college's academic outcomes assessment activities,
2. to benchmark other colleges' academic assessment activities in order to adapt and adopt best practices, and
3. to monitor changes in academic assessment standards from accreditation agencies and recommend changes to the college's policies and practices.

IAC:
1. to review and make recommendations on the college's institutional assessment activities,
2. to monitor institutional assessment activities and make recommendations for improvement,
3. to benchmark other colleges' institutional assessment activities in order to adapt and adopt best practices, and
4. to monitor changes in institutional assessment standards from accreditation agencies and recommend changes to the college's policies and practices.

These charge statements noticeably align with establishing a culture of evidence for the institution. Therefore, identifying the membership apportionment of these committees further supports a distributed leadership model.

The distributed leaders.
Table 3, Table 4, and Figure 2 indicate the committee membership by the type of role the individuals serve at the college.

Table 3.

Distribution of Individuals on the AAC and IAC

Role: 2010-2011, 2011-2012, 2012-2013, 2013-2014, 2014-2015
AAC faculty: 8, 8, 8, 8, 8
IAC faculty: 2, 2, 2, 2, 2
AAC NTP: 2, 2, 1, 2, 2
IAC NTP: 2, 2, 2, 2, 2
AAC staff: 0, 0, 0, 0, 0
IAC staff: 1, 1, 1, 1, 1
AAC admin: 1, 2, 2, 1, 2
IAC admin: 3, 3, 3, 3, 3
AAC adjunct: 0, 0, 1, 0, 0
IAC adjunct: 0, 0, 0, 0, 0
AAC student: 0, 2, 2, 2, 0
IAC student: 0, 3, 3, 2, 0
AAC total: 11, 14, 14, 13, 12
IAC total: 8, 11, 11, 10, 8

Non-teaching professionals (NTP) are non-teaching faculty who are identified as faculty, belong to the institution's faculty union, and are tenured or tenure-track employees. Staff are individuals who belong to the clerical/administrative assistant union. Administrators (admin) are individuals who hold contract positions and do not belong to a union; they serve in director or dean roles. Admin does not include members of the President's Leadership Team (cabinet).

Review of the data above indicates that the leaders serving on these committees serve in different capacities on campus and are distributed throughout the campus in various positions. Thus, through their charge as assessment committee members, they nurture a culture of evidence in a distributed leadership model.

Table 4.

Aggregate Distribution by Percent of AAC and IAC Committee Members by Year and Institutional Role

Year: Faculty, NTP, Staff, Admin, Adjunct, Students
2010-2011: 53%, 21%, 5%, 21%, 0, 0
2011-2012: 40%, 16%, 4%, 20%, 0, 20%
2012-2013: 40%, 12%, 4%, 20%, 4%, 20%
2013-2014: 44%, 17%, 5%, 17%, 0, 17%
2014-2015: 50%, 20%, 5%, 25%, 0, 0

Figure 2. Aggregate distribution by percent of AAC and IAC committee members by year and institutional role. Note: This figure demonstrates the distributed leadership groups represented on the assessment committees.

The two assessment committees were identified as leading the culture by each of the different types of interviewees, who acknowledged that the culture of evidence had evolved through a collaborative effort.
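The aggregate percentages reported in Table 4 above can be reproduced from the raw committee counts in Table 3 by pooling AAC and IAC membership for a given year and rounding to whole percentages. The following sketch illustrates this derivation for the 2010-2011 column; the counts are taken directly from Table 3, and the dictionary layout is simply one convenient way to organize them.

```python
# Reproduce the 2010-2011 row of Table 4 from the Table 3 counts.
# Counts per role on each committee, 2010-2011 (from Table 3).
aac = {"faculty": 8, "NTP": 2, "staff": 0, "admin": 1, "adjunct": 0, "student": 0}
iac = {"faculty": 2, "NTP": 2, "staff": 1, "admin": 3, "adjunct": 0, "student": 0}

# Pool both committees: 11 AAC members + 8 IAC members = 19 total.
total = sum(aac.values()) + sum(iac.values())

# Share of the pooled membership held by each role, as whole percentages.
shares = {role: round(100 * (aac[role] + iac[role]) / total) for role in aac}

print(total)              # 19
print(shares["faculty"])  # 53  (matches Table 4: 53%)
print(shares["NTP"])      # 21  (matches Table 4: 21%)
print(shares["admin"])    # 21  (matches Table 4: 21%)
```

Applying the same pooling to the other year columns of Table 3 reproduces the remaining rows of Table 4 to within rounding.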
The following statements from respondents identified strong attributes of the distributed leadership model:

The committees, I actually don’t know when the first assessment committee was established, but certainly helped with the development of the templates and just taking a broader look at an assessment or across the entire institution. So there were many years of progress along the way to address those issues that accreditation had identified as important. So leaders were probably, I see it as collective, I see administrators meeting with faculty members saying ‘Hey, I mean jointly we need to figure this out.’ And I’ve seen the course of the last couple of years as a partnership. I don’t see it as administration saying well these are the ways we are going to assess the learning, and these are the ways we are going to assess the institution at large. I see that it (building a culture of evidence) came out of negotiation and faculty input, administrator input, and to a lesser note some student input (personal communication, Administrator 1, 2015).

So, while I think directives come because they have to, we have the accrediting agency, we have the administration (who) need data and the evidence to do what they have to do. The way in which it’s (building a culture of evidence) done has been led by the campus constituency, whether it’s a committee, whether it’s the senate policy. And it shows it because we have different players and that’s why it (the evidence process) keeps changing (personal communication, Administrator 2, 2015).

When I became division chair, I had to look at the process, and learn the process, and that’s how I really, I think, learned about it.
And then, I joined the assessment committee, and then, of course, you know, I learned a lot more about it (personal communication, Faculty 3, 2015).

From my perspective, it’s (building a culture of evidence) a committee thing, which means it (the directive to form a committee) comes from the leadership, whatever that is, if it’s administration, whether it’s the (accreditation) team, whether it’s the senate (personal communication, NTP 1, 2015).

Review of the institution's document archives supports the interviewees' observations and statements.

Distributed leadership advancing the institution's culture of evidence. Review of the archived AAC and IAC committee minutes and actions provided strong documentation indicating a clear progression toward a culture of evidence. Table 5 indicates notable actions and progression toward a mature culture of evidence as identified through the review of archival documents, 2010-2015.

As noted, Schein argued that culture-embedding mechanisms, primary and secondary, affect culture to the extent that the organization finds them useful in coping with the world in which it functions (Schneider, Ehrhart, & Macey, 2013). To be successful in meeting evidence expectations within higher education, the observations can be categorized into Schein's culture-embedding mechanisms, primary or secondary. Table 7 identifies the researcher's observations from 2014-2015 and their alignment with a primary or secondary mechanism as identified by Schein, as well as with Dwyer's (2006) definition of a culture of evidence.

Table 5.

Documented Actions and Decisions vs.
a Culture of Evidence

Culture of evidence criteria: (1) use direct, valid, and reliable measures; (2) a systemic, data-driven, comprehensive approach; (3) to understand the quality of the institution. An x indicates alignment of an institutional documented action or decision with a criterion.

Develop a documented assessment gathering process: x x
Develop and implement templates to document evidence: x x
Identify evidence collection timeline: x x
Rotation of leaders on committees: x
Provide documents and template to campus community: x
Identify additional data and evidence needed to meet compliance expectations: x x
Assess the work of committee and suggest changes to the charge and membership: x
Examine evidence processes and timelines: x x x
Provide professional development for campus about capturing evidence: x
Administration focuses on accreditation compliance – lack of: x
Provided monthly common meeting time for leaders to work as committee: x x
State also driving the culture and evidence expectations: x
Documented evidence and assessment processes vetted through shared governance: x x
Adjustment of processes to align with accreditation timeline: x x
Utilize a home-grown documentation process – no outside assessment product: x x
Internal professional development about evidence and best practices planned by faculty/staff: x x
Gather feedback from campus about evidence processes: x x
Identify gaps in processes and make adjustments to collection of evidence: x x
Communicate assessment results to Board of Trustees and campus community: x
Identify external needs of attending accreditation conferences/workshops: x x
Targeted evidence processes developed: x x x
Administration focused on professional development and documentation: x
Faculty and staff attend regional training: x x
Further develop processes following training: x x
Share new knowledge with campus via multiple professional development sessions for campus: x
Administration consulted for clarification of accreditation expectations: x
Monitor and peer review of documented evidence: x x
Identify need to further develop a culture of evidence by dedicating resources for a
position: x x x
Establish recognition opportunities related to a culture of evidence: x x
Dedicate campus-wide professional development days to assessment and best practices: x x x

Note: Table from Dwyer et al. (2006).

These documented culture-of-evidence actions also align with the leadership actions that advance a culture of evidence identified by the regional accrediting body MSCHE (2008), as adopted by Suskie.

Table 6.

Documented Actions and Decisions vs. Leadership Actions that Advance a Culture of Evidence (Suskie)

Leadership actions that advance a culture of evidence (MSCHE, Suskie, 2008): (1) show a personal commitment to assessment and a culture of evidence; (2) stimulate interest in assessment and a culture of evidence; (3) recruit people that support a culture of evidence; (4) reward and incentivize those engaged in assessment and a culture of evidence; (5) value professional development to strengthen the culture of evidence; (6) allocate resources that support a culture of evidence; (7) supply time for faculty/staff to engage and complete culture of evidence initiatives; (8) monitor measurable outcomes that relate to a culture of evidence; (9) celebrate, reward, and recognize achievements that support a culture of evidence. An x indicates alignment of an institutional documented action or decision with a leadership action.

Develop a documented assessment gathering process: x
Develop and implement templates to document evidence: x
Identify evidence collection timeline: x
Rotation of leaders on committees: x x x x
Provide documents and template to campus community: x
Identify additional data and evidence needed to meet compliance expectations: x
Assess the work of committee and suggest changes to the charge and membership: x
Examine evidence processes and timelines: x x
Provide professional development for campus about capturing evidence: x x
Administration focuses on accreditation compliance – lack of: x
Provided monthly common meeting time for leaders to work as committee: x x x
State also driving the culture and evidence expectations: x
Documented evidence and assessment processes vetted through shared governance: x x
Adjustment
of processes to align with accreditation timeline: x x
Utilize a home-grown documentation process – no outside assessment product: x x
Internal professional development about evidence and best practices planned by faculty/staff: x x x
Gather feedback from campus about evidence processes: x x x
Identify gaps in processes and make adjustments to collection of evidence: x x
Communicate assessment results to Board of Trustees and campus community: x x
Identify external needs of attending accreditation conferences/workshops: x x
Targeted evidence processes developed: x
Administration focused on professional development and documentation: x x x
Faculty and staff attend regional training: x x x
Further develop processes following training: x
Share new knowledge with campus via multiple professional development sessions for campus: x x x x
Administration consulted for clarification of accreditation expectations: x x
Monitor and peer review of documented evidence: x
Identify need to further develop a culture of evidence by dedicating resources for a position: x x x
Establish recognition opportunities related to a culture of evidence: x x
Dedicate campus-wide professional development days to assessment and best practices: x x x x x x x
Align assessment data to budget requests and allocations: x x x
Total attributes identified: 612517109107

Table 7.

Observations by Researcher v.
Culture Mechanisms (Schein) and Culture of Evidence 2014-2015Culture Mechanisms(Schein, 2010)Culture of EvidenceObservationPrimarySecondaryUse direct, valid, and reliable measuresA systemic, data-driven, comprehensive approachTo understand the quality of its institutionLeaders require documentation of evidence occurring in all areas of campusxxLeaders acknowledge quality of documentation varies across campusxxxCommittee structure and charge support distributed leaders and culture of evidencexxDue to structure, leadership of the evidence processes occurs in various silos and at different levelsxxxxAAC and IAC charged with developing processes to capture evidencexxxxTable 7 ContinuedPeer review of evidence quality is a struggle due to a consistent definition of “evaluation” and contract languagexxPeers are cautious in providing feedback on documentation to colleagues - contractxxDocumentation takes place across the campus (e.g., BOT, leadership, divisions, ILO, PLO, and CLO)xxxFaculty and staff seek guidance from leaders on how to improve own collection of evidencexxLeaders plan and facilitate professional development for best practicesxxLeaders of a culture of evidence are from various areas of the campusxxLeaders are from all institutional role categories; adjunct and student, not involvedxxxDue to part-time nature of adjuncts, it is hard to engage that populationxxAdjuncts are weakest at completing documentation processes, not engaged with campusxxStudents do not fully understand the purpose for a culture of evidence, disengage quickly on AAC/IACxInstitution relatively small, full-time employee serve in many capacities to build processesxxFull-time employees lead and own the culture of evidence processesxxxxLeaders established common time for discussion about evidencexxxTop leaders (VPs, Pres) articulate value of evidence, do not engage in building processes for evidence gatheringxxTop leaders (VPs) follow evidence processes and timelines established by 
AAC/IACxxxxShifts in gathering evidence for accreditation to gathering evidence for improvementsxxxAlignment of evidence is share with campus – annual reports, web page, end of yearxxxOwnership and development of processes developed organicallyxxxThe secondary mechanisms, or behind-the-scenes elements, to support a culture of evidence that meets higher education expectation is a bit more evident as noted in the observations. This could explain why interviewees articulated the belief that the culture of evidence at this institution can be classified as organic, homegrown, or a grass-roots effort. As a secondary element, this result was not part of the initial inquiry.The leadership is stronger here because I think that it’s easier to reach all the people you need to get on board. As opposed to a larger institution where they might have a team, institutionally search team or a person that puts together reports and then maybe they take a small sample of different areas on campus where they need to gather data from. And then everyone else continues on with their everyday lives and may not directly experience or incorporate assessment into the way they do their jobs. I’m not saying that’s the way it is at all large institutions, but here I think that it’s so small, and that has helped with the inclusiveness and leadership I think has been effective because well everybody knows each other. So in my time here it does seem as though it has been organic. Here everyone’s involved, and I think that someone says you have some weakness’ here that are going to affect the entire institution people take that personally, and I want to contribute to solving the problems. Instead of saying, “Oh well, that’s not me.” Because I think people here are invested in being a part of the culture, and if you’re invested in being a part of the culture you’re going to participate in helping to solve whatever problems are identified. 
(personal communication, Administration 1, 2015)

I was kind of doing it (documentation) all in my head, you know, which is how I personally tend to do it. So, there wasn’t (documented) evidence, and nor was I really asked for any for a while. And then things, kind of I think grew, the term organically, you know, I sort of like, I don’t know if I fully agree with organically because (organic is) a natural thing where some of the stuff was not necessarily natural, it still got done, but, it seemed to me to be a growing thing definitely. So in that sense, I think organically. I like the organic way it’s kind of grown here, in a lot of ways because I think it has. I think we have taken ownership over it a little bit better than maybe some other places have. (personal communication, Faculty 2, 2015)

Yeah, we seem to be doing that a lot more. And I think that’s not only because we were told we had to, but it is the way we do it, it is very organic. Everybody is looking at their own departments and divisions. They are looking at their own literature. They’re looking at their students and thinking in terms of closing the loop, of documenting it, of acting on what they found. I think because it’s been a conversation on campus, people think about it more often than not. … I think a lot of what we do is very organic. It seems to be the way our culture works. We don’t have a dictator. I don’t think we have a dictator. I think that it is very clear we have obligations, and they’re made clear. But I think the way in which we do them is very from the bottom up, and because we have more guidance now, it’s more balanced than it’s ever been. (personal communication, NTP 1, 2015)

When we really kind of first started assessment about a year after (the accreditation warning), that’s when we (as an institution) kind of understood what we had to do, that’s when I started doing assessment.
But it was hit or miss because there was really nobody here to help us or guide us or tell us what to do. So it (the expectation) just kind of came; I just learned on my own. You know we had nothing. The forms we use, we developed ourselves, you know, and we just tried to make it as easy as possible for everybody. (personal communication, NTP 2, 2015)

Oh, definitely (organic). Definitely. I think we’ve tried to figure it out as we go along, and I mean that in a good way, but it’s definitely been a homegrown process. Changes have been made over the years for a variety of reasons. You learn more, say from the accreditor. You bring something back from a conference or from a response from the accreditor from something you submitted to them. We find that something was just too cumbersome to work with or maybe it wasn’t bringing in the right summary information or maybe it wasn’t clear to people how to go about doing it, so we’ve made a lot of changes over time, and each one of them has been basically a homegrown alternative to the prior and in that way it’s been very organic, very institution-centric with a very concerted attempt to make something that’s simple and easily repeatable that meets the needs that we have. (personal communication, Administration 3, 2015)

Well, the order (to document evidence), if you will, came from the administration that this is something that we have to do. We have to get in compliance, and I believe it was an accreditation issue, and we were put on probation, so it wasn’t like it was something the administration could ignore because it would cause more problems as an institution if we didn’t address it. So I think that their (top leadership) role was “This is what has to be done. You all can figure out how to do it. But we need to have it done by such and such a period.” So they (top leadership) gave us (campus) a lot of flexibility and latitude in what we wanted to use as our assessment tools, which is good and bad.
Some people here just want it to be . . . wanted to be in the process. Tell me what the process is and I will be happy to comply, where I’m sure there are other constituencies around here that want to build it themselves. . . . From an administration (top leader) standpoint, I feel like it was you know this is what we have to do. We (as a campus) have to do it (provide evidence), so we are compliant with the accreditors, and we’ll let you figure out how you want to assess yourself. (personal communication, Faculty 3, 2015)

The distributed leaders at this organization have addressed and facilitated the building of this culture of evidence as an organic or institution-centric process. These same observations can be aligned with Copland’s themes as noted in Table 8.

Table 8. The Culture of Evidence Observations by Researcher vs. Distributed Leader Themes (Copland), 2014-2015
Distributed leader themes (Copland, 2010): Collaboration; Trust; Professional learning; Reciprocal accountability; Consensus on important problems; Expertise.
Observations:
Leaders require documentation of evidence occurring in all areas of campus: x x
Leaders acknowledge quality of documentation varies across campus: x x
Committee structure and charge support distributed leaders and culture of evidence: x x x x
Due to structure, leadership of the evidence processes occurs in various silos and at different levels: x
AAC and IAC charged with developing processes to capture evidence: x x x x
Peer review of evidence quality is a struggle due to the lack of a consistent definition of “evaluation” and contract language: x
Peers are cautious in providing feedback on documentation to colleagues - contract: x x
Documentation takes place across the campus (e.g., BOT, leadership, divisions, ILO, PLO, CLO): x
Faculty and staff seek guidance from leaders on how to improve own collection of evidence: x x
Leaders plan and facilitate professional development for best practices: x x x
Leaders of a culture of evidence are from various areas of the campus: x x x x
Leaders are from all institutional role categories; adjunct and student, not involved: x x x
Due to part-time nature of adjuncts, it is hard to engage that population: x
Adjuncts are weakest at completing documentation processes, not engaged with campus: x x
Students do not fully understand the purpose of a culture of evidence, disengage quickly on AAC/IAC: x
Institution relatively small, full-time employees serve in many capacities to build processes: x x x x
Full-time employees lead and own the culture of evidence processes: x x x x
Leaders established common time for discussion about evidence: x x x
Top leaders (VPs, Pres) articulate value of evidence, do not engage in building processes for evidence gathering: x x x
Top leaders (VPs) follow evidence processes and timelines established by AAC/IAC: x x x x
Shifts in gathering evidence for accreditation to gathering evidence for improvements: x x
Alignment of evidence is shared with campus – annual reports, web page, end of year: x x x x
Ownership and development of processes developed organically: x x x x
Total: 15, 9, 7, 12, 15, 5

Attributes of distributed leaders who nurture a culture of evidence. Higher education leaders are vital elements in meeting the demands of evidence-based accountability (Maki, 2006; Middaugh, 2012; Suskie, 2014). To promote a culture of evidence, distributed leaders, individuals at all levels of the organization practicing distributed leadership as defined by Suskie, need to “demonstrate the value of assessment, develop learning organizations, build relationships, instill trust, develop an organizational mindset for growth, incentivize the process, show compassion, reverence the whole being.” Several of these attributes were identified by the interviewees as noted below:

I say that the ability to listen, to be open, but also to be able to identify gaps. So we could have a very strong conversation about when to engage but if something is missing or what about this.
So it’s okay to guide and participate in the process but also not be a steamroller, shared decision-making (personal communication, Administrator 1, 2015).

I think that the person who heads it in your area has to be approachable. I’m not sure 100% that the people I chose are 100% approachable, but I think mostly so. And I think because I’m given the assignment that you have to do this they rally around me a little bit more and say okay this is what we have to do. So qualities, I would say just organized, coordinated, approachable, educated about what assessment is I would say is very important (personal communication, Administrator 2, 2015).

It is very driven by personality. We have some very strong personalities in terms of people in charge (of leading the culture of evidence). (A leader) tries to figure out what’s working and what isn’t (personal communication, NTP 1, 2015).

I think they (leaders) are generally people who have been around here long enough to know the history of where we were, especially around the last accreditation visit. So they, I think they understand the importance of it because of where we once were, and the fact that we don’t want to be there again. I think those folks have a sense of urgency for it and have communicated that to the rest of us. So I think there’s history there. I think the folks with history have taken on some leadership, which is good, because the rest of us don’t know, or don’t care. I think that they are generally folks that people tend to look at as leaders overall, in general, because they are leaders in other things too. You know, whether it’s their division, or whether it’s a program or something that they own, or just by virtue of a strong voice. Which is another factor, as I think they’re people who tend to be very vocal. And there are really only a few on this campus, who are the most vocal people, on the campus. Those are the people that also tend to support a (culture of evidence).
And I think most of them that I can think of are good at and are very good in terms of arguing with people in kind of a constructive way. You know, not telling other people what we want to hear, but being able to, you know, and at the same time disagree, and present an unpopular opinion. I think those are the kinds of people, and I see that on the committee, you know, we have a lot of debates, you know, should we do this? Should we do that? And a lot of them are, you know, about little things. But I think . . . . but it’s good because I think we are really treating it critically. You know . . . So, I think they have the ability to be critical without completely alienating people. Although, I guess there is some of that too, sometimes. But, I think that’s the most important people skill so that they can, you know, they can intellectually disagree in a, you know, sort of reasonable way (personal communication, Faculty 2, 2015).I think somebody has to have a full understanding of all the working parts, not just the academic side or not just the (non-instructional) side, but how it all works together. And you know, for being such a small college, I don’t know why we have such a communication problem here. We don’t know what the other hand is doing sometimes, so I think a leader would kind of pull it all together for us and you know give us some direction. I think the leaders need to show us the benefits (of engaging in a culture of evidence). (leaders) tell them we’re doing good you know because you got to keep encouraging or else people are going to lose faith in what we’re doing (personal communication, NTP 2, 2015).Well, I think the one characteristic that you have to have is you need to believe in it (culture of evidence). There is no way you can fake it. There is no way you can get up there and “rah rah rah” when in reality you don’t believe in the process. 
So, in my mind, that is absolutely critical (personal communication, Faculty 3, 2015).(Leaders are) very passionate about this (building a culture of evidence). They see the value and/or a need for it, and they want to be involved. They want it to work well, and they really buy into the fact that it also improves what we do, and it’s not so hard to do. It makes a lot of sense, and they really buy into the idea. They’re very passionate about wanting it to succeed. Oh, definitely, I think communication skills. I think if you ask twenty people who’ve been involved in assessment on campus to describe the process, what’s needed, why it’s done, you’re going to get twenty somewhat different answers. Part of that is because it’s been a foreign language to us and so we’re all trying to learn the foreign language. We’re all in a different spot of our own competency and our own desire to know it and all the rest of it and so you really need a good teacher, a good communicator, someone who can communicate what is it we’re supposed to be doing. How in practical ways can it be effective and help the person who’s doing it rather than being yet one more task I have to do and it takes more of my time? I think you also need someone just who is very organized. You need that logical mind that organizational structure in their mind of knowing how things should flow, knowing what sort of format works well and is easy to work with and what isn’t. Knowing when you read a report just from the first thirty, sixty seconds of reading it, does it logically make sense? Is it hitting the mark or is it really missing things? 
You know, you have to have all those organizational pieces sorted out in your mind and be able to be that sort of person when you format something or read something or communicate something, because you need that logical mind, that organizational structure in your mind of knowing how things should flow, and it should make sense pretty immediately (personal communication, Administrator 3, 2015).

From the interview excerpts above, Table 9 shows the attributes identified for the campus’ distributed leaders engaged in fostering a culture of evidence.

Table 9. The Culture of Evidence-Distributed Leader Attributes as Identified from Interviews
Culture of evidence-distributed leader attributes:
ability to listen
to be open
able to identify gaps
shared decision-making
be approachable
organized
coordinated
approachable
educated about assessment
tries to figure out what is working and what is not
know the history
understand the importance
buy into the fact that it also improves what we do
are very good in terms of arguing with people in kind of a constructive way
passionate about this (building a culture of evidence)
intellectually disagree
full understanding of all the working parts
communication
believe in the process
see the value and/or a need for it
want to be involved
communication skills
be a good teacher
very organized
tend to be very vocal
support a (culture of evidence)
present an unpopular opinion
sense of urgency
ability to be critical without completely alienating people

Distributive leadership themes identified with promoting a culture of evidence. Utilizing the attributes collected in the previous section (i.e., Table 9), the data were reviewed for alignment to identify distributive leadership themes that promoted a culture of evidence. As noted, Copland (2003) identified themes of leaders and organizations that create and implement a successful distributed leadership model.
They include:
the development of a culture within the institution that embodies collaboration, trust, professional learning, and reciprocal accountability;
strong consensus regarding the significant problems facing the organization; and
a need for rich expertise with approaches to improving the institution.

Table 10. Distributed Leader Attributes Identified by Interviews v. Distributed Leadership Themes (Copland)
Themes (Copland, 2010): Collaboration; Trust; Professional learning; Reciprocal accountability; Consensus on important problems; Expertise.
Distributed leader attributes:
ability to listen: x
to be open: x x
able to identify gaps: x
shared decision-making: x
be approachable: x
organized: x
coordinated: x
approachable: x x
educated about assessment: x
tries to figure out what’s working and what isn’t: x
know the history: x x
understand the importance: x x
sense of urgency: x x
tend to be very vocal: x
support a (culture of evidence): x
are very good in terms of arguing with people in kind of a constructive way: x
present an unpopular opinion: x x x
ability to be critical without completely alienating people: x x x
intellectually disagree: x
full understanding of all the working parts: x
communication: x
believe in the process: x
passionate about (building a culture of evidence): x x x
see the value and/or a need: x x
want to be involved: x
buy into the fact that it also improves what we do: x
communication skills: x
be a good teacher: x x
very organized: x
logical mind that organizational structure in their mind of knowing how things should flow: x
Total: 13, 5, 6, 4, 5, 10
Note: Copland (2010)

As evinced in Table 10, the interviewees captured and articulated many of these attributes in conversations centered on the characteristics of the institution’s leaders and their role in fostering a culture of evidence. This alignment with the themes further supports the conclusion that this campus has a successful distributed leadership model that supports a culture of evidence.

Effectiveness of the distributed leadership model and meeting culture of evidence expectations.
The effectiveness of a distributed leadership model in meeting culture of evidence expectations was measured indirectly through a review of archival accreditation documents covering the period 2006-2015. The following accreditation highlights indicate the improvement and nurturing of a culture of evidence:

2011: A monitoring report was requested to include documented evidence that the institutional effectiveness plan had linkages between institutional assessment and academic assessment, that assessment results were shared with stakeholders, and that assessment results were used for continuous improvement.

2012: The monitoring report was accepted; however, it provided limited responses to the requested information. A second monitoring report was requested that documented further evidence of the institutional effectiveness plan linking institutional assessment and academic assessment, of assessment results being shared with stakeholders, and of assessment results being used for continuous improvement.

2013: A monitoring report was requested to include further documentation and implementation of an institutional effectiveness process, and to provide evidence of a process for program-level student learning outcomes.

2014: The decennial report due in 2015-16 needed to provide documentation of unit-level assessments linked to the strategic plan, and to show how institutional learning outcomes were assessed across all programs.

2016: Accreditation was reaffirmed. Notable is the institution’s progress to date on meeting a culture of evidence.
A progress report was requested that documents the assessment of the new strategic planning initiatives and institutional effectiveness, and student learning outcomes for student service programs.

An analysis of the accrediting body’s reports yields the following observations:
broadly stated recommendations in 2011 and 2012;
recommendations narrowed in focus in 2013 as the AAC and IAC refined processes and fostered an emerging culture of evidence system;
2014 recommendations specified pointed topics, as the AAC and IAC established a stronger culture of evidence; and
reaffirmed accreditation in 2016, yet with recommendations drilling into specific plans and departments.

Figure 3 and Figure 4 provide visual analyses of the culture of evidence developing as the accreditation recommendations become more specific in nature.

Figure 3. Change in the college’s culture of evidence progression.
Note: This figure illustrates how the culture of evidence evolved at the institution from 2012-2015.

Figure 4. Change in the accreditor’s recommendations foci.
Note: This figure illustrates how the accreditation body requested culture of evidence reports from 2011-2016.

While the relationship between the two, a developing culture of evidence and accreditation recommendations, can only be indirectly linked, it is fair to state that the distributed leadership model has been effective in elevating a culture of evidence to meet accreditation expectations. This supports Astin and Astin’s (2000) definition of leadership as a process carried out by institutional leaders that is ultimately concerned with fostering change.

General observations by the researcher through direct work on the self-study process for this institution in 2015-2016 also identified that the distributed leadership model has changed and further refined the culture of evidence on this campus.
Self-study statements to support this observation include the following:

Institutional assessment and the assessment of student learning were the responsibility of all employees of the college.

The AAC/IAC were responsible for coordinating assessment activities related to a culture of evidence.

The AAC/IAC met monthly and included representation from administration, faculty, staff, and students, demonstrating a collaborative effort on campus to support a culture of evidence.

The AAC/IAC guided and communicated the institution’s work surrounding a culture of evidence, offering professional development on the evidence-based processes and assessing the effectiveness of the culture of evidence.

Each of these observations from the self-study supported an effective culture of evidence as defined by Dwyer et al. (2006): the use of direct, valid, and reliable measures that were systematically collected and used to understand the quality of the institution. Additionally, these statements articulated that a distributed leadership structure was utilized to foster a culture of evidence. Again, one can see in this case study that the distributive leadership structure supports the statement of Bolman and Gallos (2011) that no one person or group controls a higher education institution.

The final regional accreditation report, May 2016, validated these statements and observations related to a distributed leadership model and its relation to a culture of evidence. The self-study team declared the institution as meeting all of the accreditation standards.
The following excerpts from the external reviewers’ final report further supported a distributive leadership structure as a way to develop and promote a culture of evidence that met external accountability expectations.

Summary of evidence and findings (Accreditation self-study team, 2016):

Culture of Evidence

There was a process in place for periodic review of the mission and strategic plan by internal and external stakeholders.

The goals outlined in the strategic plan were consistent with the mission. Clear and measurable objectives had been identified in all of the sub-plans to guide the planning and budgeting process at the college.

College goals focused on student learning and excellence in teaching, civility, and integrity. Institutional improvement was addressed through ongoing assessments.

The college was committed to ongoing assessment, ensuring that priorities and initiatives were aligned with the mission, goals, and core values.

The planning framework was expressed in an overarching strategic plan. This document provided the framework for all assessment activities, both for institutional effectiveness and for learning outcomes.

Documented sub-plans were explicitly linked to the goals of the strategic plan and mapped activities and objectives that addressed the goals of the strategic plan.
Goals were established annually, and progress toward meeting each goal or objective likewise was reported. Communication about the progress in achieving these goals and objectives was delivered to the campus in the president’s End of Year Memo, which lists each strategic goal and documents the activities that have moved the campus toward meeting each goal.

The college should be commended for its participation in Assessment in Action: Academic Libraries and Student Success and its coordinated efforts with the Office of Retention, Residential Life Services, and Office of Institutional Research to understand the effects of personalized library services to at-risk students.

The strategic planning process provided the essential administrative infrastructure and process required to determine, assess, and measure the effective use of resources to realize the institution’s strategic goals.

Individual department budgets were determined through an annual request and review process and might include assessment-item budget requests.

The institution had systems in place to assess college readiness.

The college had developed an organized and sustained process for gathering and assessing data related to the admission and retention of students. Each year, student affairs units chose one or more topics related to departmental goals and studied the effectiveness of their process. The results of these assessments were shared with the division, and the findings were used to improve processes and guide resource allocations.

The college had made significant progress in establishing a culture of assessment in the student services area.

Both the board and senate conducted annual assessments to ensure that the two bodies were effective in supporting the mission of the college.

Extensive data detailed the institution’s schedule of assessment of CLOs, PLOs, and ILOs.
There was clear evidence that the institution was making an effort to determine if it was achieving the outcomes identified as the critical elements of its stated general education efforts.

The college was committed to ongoing improvements and assessments to ensure the proper placement of students.

A culture of assessment was clearly and persuasively evident at the college.

The strength of the IEP rested in the fact that it included the most important aspects of the institutional assessment process, such as assessment measures (i.e., direct and indirect), person(s) responsible, schedule of the assessment cycle, analysis, action, closing-the-loop, link to the FM budget, and sustaining the assessment process.

There was documented evidence that unit assessment data had been used in finance and administration to improve the processes and services provided by the college, including the annual budget process. It appeared that assessment data collected in Student Services were focused on improving the operational services of the units rather than on aspects of student learning in this non-academic area.

The college had clearly attended to the mandated responsibility of designing and implementing an IEP in June 2015, which was comprehensive enough to link academic and non-academic assessment plans to the college’s mission and to the strategic plan.

Distributed Leadership

The planning process allowed for input from all campus stakeholders.

There was a well-defined system of collegial shared governance, including written policies that outlined the governance responsibilities of both the administration and the faculty and staff.

The work of administrative leaders was supported by a structure that allowed for adequate information and decision-making systems.

The college faculty provided service and leadership in many areas of the institution. Faculty from each division were represented on the Curriculum Committee, which approved as well as reviewed the curriculum and related academic standards.
Faculty also served on many college-wide committees and led numerous student enrichment experiences such as community service, experiential learning, clubs, and study abroad.

The college’s Assessment Committee had the responsibility to guide and communicate the institution’s assessment work, to offer professional development on the college’s processes, and to assess the effectiveness of the institutional assessment process outlined in the IEP.

The college’s Assessment Committee maintained and updated the evidence collection process and templates.

Faculty had been genuinely interested in implementing evidence-based assessment processes and gathering viable data.

Division chairs had been assigned the responsibility to coordinate the collection of general education assessment data from individual faculty and to disseminate the results.

It had been a routine practice in the academic area to have faculty collect, present, and use data evidence; division chairs provided the direction for this effort.

The faculty had been responsible for the courses in the respective programs and took leading roles in gathering and discussing assessment data to make necessary modifications to the programs. The director of the library and two additional library faculty who were liaisons to academic divisions had led the assessment of information literacy.

The strength of the IEP rested in the fact that it included the main aspects of the institutional assessment process, such as assessment measures (both direct and indirect), person(s) responsible, schedule of the assessment cycle, analysis, action, closing-the-loop, link to the FM budget, and sustaining the assessment process.

The IEP, referenced throughout the self-study reviewer’s comments, clearly identified the leaders on the campus who were responsible for implementing and fostering a culture of evidence. Returning to the culture of evidence definition by Dwyer et al.
(2006), who stated it as the demonstrated ability of a higher education organization to use direct, valid, and reliable measures in a systemic, data-driven, comprehensive approach to understand the quality of its institution, one finds this definition explicitly articulated in the IEP. The distributed leaders responsible for meeting the culture of evidence expectations are also identified in the IEP.

Excerpts from the IEP

Culture of Evidence

This IEP was designed and utilized by the institution to maintain a culture of quality. In doing so, the plan provided detailed guidelines and a timetable that ensured:
- Clear and important goals were established at all levels (i.e., institution, department, program, course), and goals were integrated with each other across the levels.
- Assessment used evidence that was readily available, was of reasonable quality, and was multi-dimensional.
- Active participation occurred for those with a stake in decisions stemming from the results.
- Assessment was used for the betterment of programs and services.
- Assessment results were available for timely decision making on a recurring basis.
- Results were communicated to stakeholders.
- The process was supported by the appropriate investment of institutional resources.
- The process was sufficiently simple to be sustainable.
- The process was periodically evaluated for both effectiveness and comprehensiveness.

Distributed leaders identified in the IEP included:
- the assessment committee;
- the coordinator of institutional assessment and accreditation;
- program faculty;
- the academic affairs dean;
- all faculty (full-time, adjuncts, and concurrent enrollment high school teachers);
- general education faculty;
- the board of trustees;
- the college president;
- the VP/provost, VP of administration, and VP of student affairs;
- the senate chair; and
- all units/departments in non-instructional areas.

As noted by the self-study reviewers, the IEP captured and outlined the clear processes and systems in place to document a culture of evidence.
It also identified the systematic approach to the collection of that evidence and who from the campus community would lead each of the evidence-gathering processes.

Chapter 5

Discussion of the Findings

This case study explored distributed leadership and its relation to establishing a culture of evidence at a two-year public institution. The study also sought to explore how organizational leaders in higher education create an evidence-based culture that meets the accountability expectations raised by stakeholders and the nation. Through this exploration, best practices and models of leadership practice for meeting contemporary expectations surrounding a culture of evidence were identified, and the effectiveness of a distributive leadership model was evaluated by an external accreditation body. The following research questions framed the exploration of distributive leadership and its role in promoting a culture of evidence:
1. How does this institution apply distributed leadership to advance a culture of evidence?
2. Who are the institutional members involved and engaged as distributed leaders at this institution?
3. What demonstrated behaviors and actions do these distributed leaders use to nurture the current culture of evidence at this institution?
4. How effective is the distributive leadership model in meeting the expectations associated with a culture of evidence?
5. What distributive leadership themes are identified with promoting a culture of evidence?

Application of distributed leadership to advance a culture of evidence. Distributed leadership is clearly supported at this institution. The distributed leaders are established through the structure of the institutional positions and the framework of the senate. The assessment committees, by virtue of their six years of existence, have advanced the culture of evidence the most.
The senate bylaws set the foundation of distributed leadership by identifying the membership and charges of the committees. The BOT has identified and strategically created institutional positions that support distributed leadership and foster a culture of evidence. These positions support the entire institution, yet they are categorically different: the coordinator of accreditation and assessment is a faculty (non-teaching) position, and the director of institutional research is an administrative position. This was intentional, as the newly created coordinator position has access to faculty meetings that administrators do not. This arrangement allows for clearer communication and for the culture of evidence to be supported in various venues.

Spillane's (2006) three leadership responsibility arrangements were evident at this institution. First, the distributed leaders shared the responsibility of growing a culture of evidence and often overlapped in function. Second, several co-leaders performed similar leadership functions centered on a culture of evidence. Finally, parallel routines were executed by various leaders, and the work was often duplicated across the campus.

The distributed leaders. The data provided in Chapter Four established the flexibility of a distributed leadership model, as the percentages ebb and flow over the course of five years. As was apparent from their lack of mention during the interviews, adjuncts are notably absent from the practice of nurturing a culture of evidence on this campus. Students have some degree of involvement, and staff, while small in number, are consistently represented across the years.
Prominently identified in the data is how richly the faculty are involved in leading the processes; review of the archival documents showed that this group mostly served as the chairs of the Academic Assessment Committee (AAC) and the Institutional Assessment Committee (IAC).

As denoted by Copland (2003), the hierarchical model of leadership is abandoned in distributed leadership for a model that is focused on the goals of the group rather than the actions of one. No one person controls the institution's efforts to meet evidence-based expectations. Additionally, as noted by Wisniewski (2007), "higher education must develop a cadre of academic leaders who can engage the institution and its faculty/staff in change and transformation processes," and the evidence shows that this institution has created a distributed leadership model that is collaborative and embraces shared decision making to create a culture of evidence.

Interviewees identified various people and committees from various roles on campus as leaders of the assessment and evidence movement. Among the interviewees, no one person or office was identified as responsible for leading the evidence culture. Instead, the work of organizing and promoting a culture of evidence was identified as resting clearly on the shoulders of different units and the two assessment committees, the AAC and the IAC. The two assessment committees were identified as leading the culture by each of the different types of interviewees, who acknowledged that the culture of evidence evolved through a collaborative effort.

Distributed leadership advancing the institution's culture of evidence. The distributed leadership model at this institution has intentionally fostered a culture of evidence through a purposeful progression, as evidenced by the repeated accreditation citations, 2011-2014, which moved from a broadly focused citation to a more narrowly focused one (see Figures 2 and 3).
This purposeful progression is consistent with the notion that leadership is concerned with change, and it supports Astin and Astin's (2000) view of a leader chiefly as a change agent, that is, one who fosters change. As is the case in fostering a culture of evidence, leadership also implies intentionality, in the sense that the implied change is not random change for change's sake but is directed toward some future end or condition that is desired or valued (Astin & Astin, 2000).

Notable actions from Table 5 include the following. Both committees developed and implemented professional development days focused on strategies and best practices associated with a culture of evidence. For example, led by a faculty member, all academic programs met in a session to complete academic program curriculum mapping that identified the alignment of program learning outcomes to institutional learning outcomes. Concurrently, led by an NTP, non-instructional areas met to develop ways in which those areas could assess institutional learning outcomes. Finally, general education faculty came together to discuss and review assessment data at the course level and to develop action plans for improving the teaching and learning of the general education learning outcomes.

From the data in Table 6, it is noted that at this institution a culture of evidence is not supported by incentives or rewards. The only link to incentives or rewards can be found in the alignment of assessment data to budget requests and allocations. Thus, the culture of assessment in a distributed leadership format could be identified as an intrinsic action by the faculty and staff, or it could indicate that faculty and staff at this institution view actions related to the culture of evidence as part of their job descriptions rather than as an additional responsibility that warrants incentives.
Further research is recommended to identify how a culture of evidence can evolve and strengthen without incentivizing individuals to complete the documentation and processes. It is important to note that faculty and staff are recognized in other ways, such as being asked to share best practices with their peers and having their assessments highlighted in the report to the board of trustees and the campus community.

Primary embedding mechanisms speak to the visible actions of the leaders; these include leaders' role modeling, which is often emulated by people in the organization. Secondary mechanisms identified by Schein (2010) are methods by which a leader may indirectly change an organization's culture. These include organizational structure, procedures, and systems. Additionally, secondary mechanisms can include grass-roots support that increases credibility and value. Finally, secondary mechanisms include the formal statements of the organization's philosophy, mission, and charter, which are the public face of the organization. Documented observations provide additional support for the distributed leadership model advancing the culture of evidence.

As indicated in the statements from the interviewees, the secondary mechanisms, those behind-the-scenes engagements that are indirectly altered by actions and often affect culture, were evident at this institution. The term organic process surfaced as a theme throughout the research. This acknowledgment of owning the culture of evidence and being flexible in addressing stakeholder expectations is central to the success and effectiveness of meeting accreditation standards.

The themes identified by Copland (2003; Table 6), collaboration, trust, professional learning, reciprocal accountability, consensus on the significant problems, and expertise, are each found at this institution.
A distributed structure of leadership focuses on the interactions, rather than the actions, of those in formal and informal leadership roles (Spillane, 2008). Distributed leadership is primarily concerned with leadership practice and how leadership influences organizational and instructional improvement (Spillane, 2006). Spillane (2006) suggested that most accounts of leadership in higher education focus on people, structures, functions, routines, and roles rather than on leadership practice. The hierarchical model of leadership is abandoned in distributed leadership for an organizational structure that is focused on the goals of the group rather than the actions of one (Copland, 2003). Distributive leadership supports the statement of Bolman and Gallos (2011) that no one person or group controls a higher education institution. That is the case at this institution.

Attributes of distributed leaders who nurture a culture of evidence. The interviewees touched upon several characteristics and attributes identified by Onoye and Spillane. Among the distinctive features of a leader, Onoye (2004) found that a higher education leader's ability to work in a collaborative manner with others on campus was essential to solving problems. This supported Spillane's (2006) definition of leadership as a relationship of social influence. Additionally, interviewee comments aligned with Copland's (2003) successful distributed leadership themes (Table 10). Overwhelmingly, the interviewees looked for a distributed leader to be a strong collaborator and to have expertise in fostering a culture of evidence. Less important to the interviewees were the attributes of trust, professional learning, reciprocal accountability, and consensus on significant problems; each of these was about half as important as collaboration and expertise.
The expectation appears to favor culture of evidence leaders who will collaborate with others and have a strong understanding of evidence-based documentation, processes, and the college systems. The distributed leader theme results of the interviewees and the researcher are compared to note any differences.

Table 11. Distributed Leader Themes (Copland, 2003): Observations by Researcher vs. Attributes Identified by Interviewees

Distributed Leader Theme           Researcher   Interviewees
Collaboration                          15            15
Trust                                   9             5
Professional learning                   7             6
Reciprocal accountability              12             4
Consensus on important problems        15             5
Expertise                               5            10

In this comparison (Table 11), collaboration remains a consistent theme, one that was observed by the researcher and described as necessary by the interviewees. The largest separation occurred in the themes of consensus on the significant problem and expertise related to a culture of evidence. Several reasons could explain these differences. First, the interviewees were searching for expertise to develop the culture of evidence further; that was one of the rationales for creating a coordinator of assessment and accreditation. Collectively, they expressed the need for an expert opinion on how to improve what they were doing and where to go next as a way to improve the quality of the culture of assessment. The researcher also observed a desire for feedback and improved quality; however, the observations indicated that the institution was on the cusp of turning the corner from documenting for documentation's sake to documenting for continuous improvement.

The second theme showing a disconnect was consensus on the importance of meeting accreditation standards and fostering a culture of evidence.
The researcher had access to all levels of the institution: she was present at BOT meetings to discuss accreditation issues surrounding a culture of evidence, presented documented evidence to all campus constituents (e.g., president, faculty, staff, and virtual), and worked with the faculty and staff who were on the front line of the documentation processes. The interviewees clearly understood the documentation and evidence needs of their own specific areas or departments. They all understood that a culture of evidence is the expectation; however, they did not clearly understand the importance of the roles they individually played in building that culture of evidence.

Effectiveness of the distributed leadership model and meeting culture of evidence expectations. Nearly every year since 2010, this institution has been required to submit a report to its regional accreditation body, and each report has focused on an element of a culture of evidence. The self-study completed by this institution in 2015-16 was a comprehensive document that required the institution to reflect, articulate, and provide evidence demonstrating how the college had embraced a culture of evidence. The institution was commended in the final report for fostering a culture of evidence, for utilizing a distributed leadership model, and for the documentation of systems and processes to support a culture of evidence. The institution was asked to complete a progress report to the accreditation body in 2017 that will focus further on documenting evidence related to the strategic plan and to student learning in the area of student affairs. Both of these follow-up areas have documented processes for gathering this evidence; yet both are new documentation collection areas for the institution.
It is expected that the progress report will provide the documentation requested by the accreditation body and that a distributed leadership model will be utilized to meet the culture of evidence expectation.

Practical Implications

The research from this case study identified several practical implications for higher education institutions. First, distributed leadership was a sufficient approach to fostering a culture of evidence at an institution. It allowed multiple layers of people at the institution to be vested in the process, and it permitted individuals from all campus stakeholder categories to be involved in building evidence-based processes to meet accreditation expectations. Additionally, a distributed leadership model empowers individuals to solve a common problem or to concentrate on a focus area of concern. This allows for ownership of the successes and failures related to creating a process, thus producing a natural and unforced solution. Unique to this case study is the clear indication of an organic, unforced process that utilized a distributive leadership model to create and implement a culture of evidence that met stakeholder and national interests. The emergent process was not a conscious one; individuals applied initiatives naturally and collaboratively, following a symbiotic and reciprocal relationship in building a community of distributive leaders working together to create a culture of evidence.

Secondary mechanisms may be the reason interviewees believed the culture of evidence was organic in nature. Several behind-the-scenes activities aided in supporting a distributed leadership model and fostered a culture of evidence. For example, the committee structures were designed to include all constituents of the college as a way to share the governance of the institution.
Similarly, each silo of the college developed its own way of documenting the evidence, another element of an organic process. This flexibility in building the evidence processes legitimized the reasons for creating an in-house system that was responsive to the needs of the institution and of those who were responsible for the documented evidence. Because the institution is small, the secondary mechanisms overlapped in many areas, as many individuals served on various committees and/or regularly attended meetings of departments to which they did not belong. The richest element of the secondary mechanisms is the movement toward a culture of quality in which the faculty and staff are interested in improving the quality of their evidence and documentation. This is where expertise is needed and desired.

Having a dedicated position that focuses on evidence-based documentation is essential to improving the culture of evidence. Unique to this institution is that the position is a faculty position, yet not a teaching faculty position. The position is afforded access to meetings such as those of the collective bargaining unit, as well as invitations to higher administration meetings such as those of the BOT. Many institutions create positions dedicated to accreditation and assessment that report to the institutional research office or the president. That is not the case with the position in this case study. The expert at this institution is a tenure-track faculty member who has an office among the faculty in a classroom building. This permits faculty and staff to have immediate access to the expert at any time. Finally, it is important to mention that the person in this position was a direct hire from outside of the institution.
Moving or promoting a person from within the institution to be the expert may not be as successful.

Theoretical Implications

This study supported Denison's (1990) conclusion, drawn from a variety of studies, that culture plays a significant role in the effectiveness of organizations. Distributive leaders, as identified in this study and as defined by Suskie (2008) and Schein (2010) as influencing a culture of evidence, embrace personal commitment, stimulate interest, recruit people, value professional development, identify the time to engage, monitor progress, and celebrate achievements. The distributive leadership exhibited in this case study further supported Gressick and Derry's (2010) and Spillane's (2006) definition of distributed leadership: leadership does not reside solely in the offices of the formal leaders of the institution, as traditional definitions of higher education leadership suggest. A distributed understanding of leadership recognizes that leading institutions of higher education requires many leaders. A distributed leadership perspective acknowledges and incorporates the work of all the individuals who play a part in the practice of leadership (Spillane, 2006). The distributive leaders identified in this case study expand the understanding of who actually promotes and fosters change toward a culture of evidence within a higher education institution.

As indicated in the case study, Copland's (2003) three preconditions were apparent at this institution: an institution that embodied collaboration, trust, and reciprocal accountability; a strong consensus regarding the significant problems facing the institution; and the need for rich expertise to improve teaching and learning among all who work at the institution.
This case study also demonstrated how a distributive leadership model was effective in meeting the expectations of a culture of evidence set by an outside accreditation body.

Recommendations for Future Research

Two suggestions are offered for future research. First, the regional accreditation body fully affirmed the culture of evidence at this institution. This was accomplished through a distributed leadership model that was described as an organic process for solving a major problem for the institution. The top administrators (e.g., president, VPs, and BOT) participated in building the culture of evidence, but the actual processes and systems were identified and implemented by various leaders (e.g., faculty, staff, and committees) of the institution. The development of the evidence-related processes was described as an organic occurrence. Thus, one suggestion is to research what elements need to be in place for an organic process to shape a healthy culture of evidence. Other institutions have healthy cultures of evidence; however, those cultures are not always formed through an internal, organic process.

The second suggestion for further research is to identify the intrinsic values of institution members who complete documentation and processes that support a culture of evidence without incentives or remuneration. At this institution, importance was placed on beginning the culture of evidence where individuals were most comfortable, at the course or department level. This could be why faculty and staff felt invested in the early days of developing a culture of assessment. Monitoring the culture of evidence and the distributive leadership model of this institution, now that the accreditation process has reaffirmed the institution as having an adequate culture of evidence, has the potential to explore these intrinsic values further.

Limitation

This case study is narrow in focus, which is a limitation.
A small, two-year institution may have adequate personnel, structures, and communication styles to support a distributed leadership structure. However, a larger institution with a different organizational structure may not be able to implement a distributive leadership concept, either intentionally or organically. Similar case studies should be conducted to determine how the size and type of the institution (e.g., public, private, or for-profit) would support, or fail to support, a distributed leadership structure that fosters a culture of evidence.

Conclusion

Institutions continue to struggle to meet stakeholder expectations related to a culture of evidence. This case study identified how the distributive leaders of an institution organically fostered a culture of evidence that was effective in meeting stakeholder and national expectations. Flexibility, critical thinking, and responsive planning on the part of the distributed leaders brought about a systematic evidence process that was created in-house. By employing such a system, the stakeholders who documented the evidence were well versed in the systematic processes and were able to collaborate with the distributed leaders to further improve the quality of the culture of evidence. Notably, this distributed leadership model was functional in a small public two-year institution that was accountable to several external and internal constituents, most notably its accrediting body. As the culture of evidence has matured, the individuals involved in the documentation process have been eager to improve the quality of their own evidence and have sought feedback from the distributed leaders. This has resulted in very healthy discussions centered on the term "peer evaluation" and on whether the distributed leaders of the culture of evidence have the right or authority to evaluate a colleague's work.
This is yet another indication of a maturing culture of evidence, one that has been built via distributed leaders.

References

Aarons, G. A., & Palinkas, L. A. (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34, 411-419.

AASCU. (2014). Fiscal and state policy issues affecting postsecondary education: US economic forecast.

Organization. (2011, 2012, 2013, 2014, 2016). Name withheld to maintain anonymity of the research site.

Adler, P. A., & Adler, P. (1987). Membership roles in field research. Thousand Oaks, CA: Sage.

Altbach, P. G., Gumport, P. J., & Berdahl, R. O. (2011). American higher education in the twenty-first century: Social, political, and economic challenges (3rd ed.). Baltimore, MD: Johns Hopkins University Press.

Alvesson, M. (2003). Methodology for close-up studies—Struggling with closeness and closure. Higher Education, 46, 167-193.

Alvesson, M. (2012). Understanding organizational culture (2nd ed.). Thousand Oaks, CA: Sage.

Anderson, G. L., & Herr, K. (1999). The new paradigm wars: Is there room for rigorous practitioner knowledge in schools and universities? Educational Researcher, 28, 12-21, 40.

Anderson, G. L., Herr, K., & Nihlen, A. (1994). Studying your own school. Thousand Oaks, CA: Corwin.

Ashkanasy, N., Wilderom, C., & Peterson, M. (2011). Handbook of organizational culture and climate (2nd ed.). Thousand Oaks, CA: Sage.

Associated Press. (2014, December 19). Feds release "framework" to rate colleges instead of actual system. USA Today. Retrieved from feds-release-framework-to-rate-colleges-instead-of-actual-system/

Astin, A., & Astin, H. (2000). Leadership reconsidered: Engaging higher education in social change. Battle Creek, MI: Kellogg Foundation.

Banta, T., & Moffett, M. S. (1987). Performance funding in Tennessee: Stimulus for program improvement. In D. F. Halpern (Ed.), Student outcomes assessment: What institutions stand to gain. San Francisco, CA: Jossey-Bass.

Banta, T., & Palomba, C. (1999). Assessment essentials: Planning, implementing and improving assessment in higher education. San Francisco, CA: Jossey-Bass.

Barney, J. (1986). Organizational culture. Academy of Management Review, 11(3), 656-665.

Bass, B. M. (1985). Leadership and performance beyond expectations. New York, NY: The Free Press.

Bennett, N., Wise, C., Woods, P., & Harvey, J. (2003). Distributed leadership. Oxford, UK: National College for School Leadership.

Bolden, R. (2011). Distributed leadership in organizations: A review of theory and research. International Journal of Management Reviews, 13(3), 251-269.

Bolman, L., & Gallos, J. V. (2011). Reframing academic leadership (1st ed.). San Francisco, CA: Jossey-Bass.

Brittingham, B. (2008). An uneasy partnership: Accreditation and the federal government. Change, 32-39.

Burke, J. C., & Associates. (2005). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco, CA: Jossey-Bass.

Cerqueira, M. (2008). A literature review on the benefits, challenges, and trends in accreditation as a quality assurance system. Ministry of Children and Family Development.

Copland, M. A. (2003). Leadership of inquiry: Building and sustaining capacity for school improvement. Educational Evaluation and Policy Analysis, 25(4), 375-395.

Creswell, J. (1998). Qualitative inquiry and research design: Choosing among the five traditions. Thousand Oaks, CA: Sage.

Creswell, J. (2002). Educational research: Planning, conducting, and evaluating quantitative and qualitative approaches to research. Upper Saddle River, NJ: Merrill/Pearson Education.

Creswell, J. (2012). Educational research: Planning, conducting and evaluating quantitative and qualitative research (4th ed.). New York, NY: Pearson.

Culp, M., & Dungy, G. (2012). Building a culture of evidence in student affairs: A guide for leaders and practitioners. Washington, DC: National Association of Student Personnel Administrators.

Deal, T. E. (1995). Schools as cultural arenas: Symbols and symbolic activity. In S. B. Bacharach & B. Mundell (Eds.), Images of schools: Structures and roles in organizational behavior. Thousand Oaks, CA: Sage.

Denison, D. (1990). Corporate culture and organizational effectiveness. New York, NY: Wiley.

Driscoll, A., de Noriega, D. C., & Ramaley, J. (2006). Taking ownership of accreditation: Assessment processes that promote institutional improvement and faculty engagement. Sterling, VA: Stylus.

Duderstadt, J. (2007). A view from the helm: Leading the American university during an era of change. Ann Arbor, MI: University of Michigan Press.

Duncan, A. (2013). The coming crossroads in higher education. Remarks to the State Higher Education Executive Officers Association Annual Meeting.

Dwyer, C. A., Millett, C., & Payne, D. (2006). A culture of evidence: Postsecondary assessment and learning outcomes. Recommendations to policymakers and the higher education community.

Eaton, J. (2015). Accreditation: What it does and what it should do. Change, 37(3), 35-41.

Evans, R. (1996). The human side of school change. San Francisco, CA: Jossey-Bass.

Ferlie, E. B., & Shortell, S. M. (2001). Improving the quality of health care in the United Kingdom and the United States: A framework for change. Milbank Quarterly, 79(2), 281-315.

Fetterman, D. M. (1998). Ethnography: Step by step (2nd ed.). Newbury Park, CA: Sage.

Flint, N. (2000, December). Culture club: An investigation of organizational culture. Paper presented at the Annual Meeting of the Australian Association for Research in Education, Sydney, Australia.

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education (8th ed.). New York, NY: McGraw-Hill.

Gelfand, M. J., Erez, M., & Aycan, Z. (2007). Cross-cultural organizational behavior. Annual Review of Psychology, 58, 479-514.

Gioia, D., & Thomas, J. (1996). Identity, image, and issue interpretation: Sensemaking during strategic change in academia. Administrative Science Quarterly, 41(3), 370-403.

Gressick, J., & Derry, S. J. (2010). Distributed leadership in online groups. International Journal of Computer-Supported Collaborative Learning, 5(2), 211-236.

Guion, L. (2002). Triangulation: Establishing the validity of qualitative studies.

Hammersley, M., & Atkinson, P. (1995). Ethnography: Practices and principles (2nd ed.). New York, NY: Routledge.

Hartley, D. (2007). The emergence of distributed leadership in education: Why now? British Journal of Educational Studies, 55, 202-214.

Hatch, J. A. (2002). Doing qualitative research in education settings. Albany, NY: State University of New York Press.

Hauptman, A. (2009). Participation, persistence, and attainment rates: The U.S. standing. Boston College Center for International Higher Education. Retrieved from Number52/p19_Hauptman.htm

Hesse-Biber, S., & Leavy, P. (2011). The practice of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

Hollowell, D., Middaugh, M., & Sibolski, E. (2006). Integrating higher education planning and assessment: A practical guide. Society for College and University Planning.

Ingram, R. T. (1993). Governing public colleges and universities: A handbook for trustees, chief executives, and other campus leaders. San Francisco, CA: Jossey-Bass.

Islam, F., & Crego, E. (2014). The compelling need to improve the higher education value equation. Huffington Post.

Kanter, R. M. (1983). The change masters: Innovations for productivity in the American corporation. New York, NY: Simon and Schuster.

Kerr, C., & Gade, M. (1989). The guardians: Boards of trustees of American colleges and universities: What they do and how well they do it. Washington, DC: Association of Governing Boards of Universities and Colleges.

Kezar, A. J., Carducci, R., & Contreras-McGavin, M. (2006). A world anew: The latest theories of leadership. ASHE Higher Education Report, 31(6), 31-70.

Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation implementation. Academy of Management Review, 21, 1055-1080.

Kottak, C. (2006). Mirror for humanity. New York, NY: McGraw-Hill.

Kotter, J. P. (1995). Winning at change. Leader to Leader, 10, 27-33.

LeMon, R. E. (2004). The changed social compact for public higher education: What do the public and lawmakers need? In Proceedings of a National Symposium: A new compact for higher education: Accountability, deregulation, and institutional improvement. Austin, TX: The University of Texas System.

Lumby, J. (2013). Distributed leadership: The uses and abuses of power. Educational Management Administration & Leadership, 41(5), 581-597.

Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus and American Association for Higher Education.

Martin, J. (2002). Organizational culture: Mapping the terrain. Thousand Oaks, CA: Sage.

Mentkowski, M. (1991). Creating a context where institutional assessment yields educational improvement. In J. S. Stark & A. M. Thomas (Eds.), Assessment and program evaluation. Needham Heights, MA: Simon & Schuster.

Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.

Middaugh, M. F. (2012). Compliance workshop. MSCHE Annual Conference, Philadelphia, PA.

Middle States Commission on Higher Education (MSCHE). (2005). Assessing student learning and institutional effectiveness.

Middle States Commission on Higher Education (MSCHE). (2007). Student learning assessment: Options and resources (2nd ed.).

Middle States Commission on Higher Education (MSCHE). (2014, December). Newsletter.

Morest, V. S. (2009). Accountability, accreditation, and continuous improvement: Building a culture of evidence. New Directions for Institutional Research, 143, 17-27.

National Commission on Accountability in Higher Education. (2005). Accountability for better results: A national imperative for higher education. Boulder, CO: State Higher Education Executive Officers (SHEEO).

New Leadership Alliance. (2012). Committing to quality: Guidelines for assessment and accountability in higher education. Washington, DC: New Leadership Alliance for Student Learning and Accountability.

Ogbonna, E., & Harris, L. C. (2000). Leadership style, organizational culture, and performance: Empirical evidence from UK companies. International Journal of Human Resource Management, 11, 766-788.

Onoye, K. J. (2004). A case study of a successful urban school: Climate, culture and leadership factors that impact student achievement. Unpublished doctoral dissertation, University of Southern California.

Peterson, M. W., & Einarson, M. K. (2001). What are colleges doing about student assessment? Does it make a difference? Journal of Higher Education, 629-699.

Rudolph, F. (1990). The American college and university: A history. Athens, GA: The University of Georgia Press.

Sagiv, L., Schwartz, S. H., & Arieli, S. (2011). Personal values, national culture, and organizations: Insights applying the Schwartz value framework.

Schein, E. H. (1984). Coming to a new awareness of organizational culture. Sloan Management Review, 25(2), 3.

Schein, E. (1990). Organizational culture and leadership. San Francisco, CA: Jossey-Bass.

Schein, E. (2010). Organizational culture and leadership (4th ed.). San Francisco, CA: Jossey-Bass.

Schneider, B., Ehrhart, M., & Macey, W. (2013). Organizational climate and culture: An introduction to theory, research, and practice. New York, NY: Taylor & Francis.

Sibolski, E. (2014). Accreditation liaison officer meeting. MSCHE Annual Conference, Washington, DC.

Siddique, A., Aslam, H. D., Khan, M., & Fatima, U. (2011). Impact of academic leadership on faculty's motivation, and organizational effectiveness in higher education system. International Journal of Business and Social Science, 2(8).

Spillane, J. P. (2006).
Distributed leadership (1st ed.). San Francisco, CA: John Wiley & Sons.Spillane, J. P., & Harris, A. (2008). Distributed leadership through the looking glass. Management in Education. Thousand Oaks, CA: Sage.Spillane, J. P., Halverson, R., & Diamond, J. B. (2001). Towards a theory of leadership practice: A distributed perspective. Institute for Policy Research Working Article. Northwestern University.Spillane, J. P., Halverson, R., & Diamond, J. B. (2004). Towards a theory of leadership practice: A distributed perspective. Journal of Curriculum Studies, 36(1), 3-34. Spradley, J. P. (2004). Participant observation. New York, NY: Holt, Rinehart, and Winston.Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.Suskie, L. (2014). Five dimensions of quality: A common sense guide to accreditation and accountability. [Kindle Desktop Edition]. Jossey-Bass.Suskie, L. (2013). Leadership, learning, service, and research. Cardinal Stritch Summer Institutes. Retrieved from , L. (2012). Sources of resistance: Leadership and fear. [Blog posting]. Retrieved from , A. S., Zhand, Z-X, Wang, H., Xin, K. R., & Wu, J. B. (2006). Unpacking the relationship between CEO leadership behavior and organizational culture. Leadership Quarterly, 17, 113-137.Whitehead, T. L. (2005). Basic classical ethnographic research methods. Ethnographically informed community and cultural assessment research systems (EICCARS) working papers series. College Park, MD: University of Maryland. Retrieved from , M. A. (2007). Leadership in higher education. Academic Leadership: The Online Journal, 2(1), 13.Wolcott, H. F. (1994). Transforming qualitative data. Thousand Oaks, CA: Sage.Wood, S. (2006). Faculty interviews: A strategy for deepening engagement in inquiry. In A. Driscoll & D. Cordero de Noriega (Eds.), Taking ownership of accreditation: Assessment processes that promote institutional improvement and faculty engagement. Sterling, VA: Stylus.Yin, R. 
K. (1984). Case study research: Design and methods (1st ed.). Beverly Hills, CA: Sage.Yin, R. K. (1993). Applications of case study research. Applied Social Research Methods, 34. Thousand Oaks, CA: Sage.Yin, R. K. (1994). Case study research: Design and methods (2nd ed.). Beverly Hills, CA: Sage.Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.Appendix AInterview GuideDescribe your experience with assessment, accountability, and developing a culture of evidence at this institution.Describe how administrators, faculty and staff are involved in building a culture of evidence at this campus.Describe who you believe are the leaders who foster a culture of evidence at this institution. What is his/her/their role within this culture? Describe how these leaders have impacted the culture of evidence.Describe the leadership qualities that you believe will bring forth the best work and efforts needed to foster a culture of evidence.Describe suggestions for leaders associated with fostering a culture of assessment. 
What are three to five expectations you have for leaders involved with building a culture of evidence?
Describe how a culture of evidence has been valued on this campus.
Describe how fostering a culture of evidence has become a priority for this institution.
Is there anything else you would like to share?

Appendix B
Word Frequency

Table A.1 Common words from all interviewees.

Term                          Frequency
assessment (documentation)    247
people                        159
evidence                      95
committee                     90
faculty                       86
everyone                      67
different                     61
division                      56
organic                       55
templates                     53
leader                        51
course                        48
process                       48
reports                       43
working                       43
probably                      42
meetings                      38
academic                      37
student                       37
culture                       35
involved                      34
better                        33
program                       33
trying                        33
anything                      32
person                        32
campus                        31
learning                      31
college                       29
important                     29
started                       26
definitely                    25
wanted                        25
believe                       24
change                        24
communicate                   24
president                     24
departments                   23
members                       21
understand                    21
institution                   20
necessarily                   19
outcomes                      18
administration                16
administrator                 14
answer                        14
anybody                       14
experience                    14
perfect                       14
quality                       14
review                        14
together                      14
improve                       13
willing                       13
driven                        12
strong                        12
conversations                 10
decision                      10
institutional                 10
training                      10

Appendix C

Figure A.1 Visual representation of the word frequency.
Note: This figure illustrates the dominant terms documented during the interview process.