CONTENT ANALYSIS OF TROPICAL CONSERVATION EDUCATION PROGRAMS: ELEMENTS OF SUCCESS


ABSTRACT: Evidence of success is needed to justify the use of educational approaches as a tool to achieve conservation goals. A content analysis of 56 reports on tropical conservation education programs published between 1975 and 1990 revealed that fewer than half of the programs were successful in achieving their objectives. The use of either formative or long-term evaluations in the program design was correlated with significantly higher rates of program success. Program longevity was also associated with program success, suggesting a need for long-term data collection in assessing the value of conservation education programs. Other program attributes, such as location, sponsorship, and form of publication used for information dissemination, were not correlated with success.

International interest in sustaining natural tropical ecosystems has increased over past decades as the global consequences of tropical forest loss have become more apparent (Pearl, 1989; Wilson, 1989). Nonsustainable human land-use patterns have been cited as the primary cause of tropical habitat degradation (IUCN/UNEP/WWF, 1980, 1991). Industrialized nations have targeted large capital investments to equatorial regions in an effort to improve human interactions with the tropical environment (Feder & Noronha, 1987). The 1992 Earth Summit held in Rio de Janeiro further encouraged developed nations to direct large sums of money to tropical resource conservation projects (United Nations Conference on Environment and Development, 1993). How and where funds earmarked for tropical conservation projects are distributed depends on how funding agency decision makers and the public perceive a program's anticipated success in meeting conservation goals.

Many approaches to altering land-use patterns exist, and debate continues over which methods yield the best results, offer long-lasting solutions, and cost less for investors. Conservation education has been recommended by many education researchers, instructors, and administrators as a labor-intensive but cost-effective means of effecting behavioral change (e.g., Dietz & Nagata, 1986; Jacobson, 1987a, 1991). Supporters of education as a critical conservation tool have asserted that financial or material incentives in lieu of education measures typically require an expensive initial investment and provide only short-term solutions unless subsequent costly incentive packages are provided (Western et al., 1989). McNeely (1988) suggested that, in the short term, economic incentives outperform education efforts in changing human land-use patterns. This implies that education programs require time investments of several years to achieve conservation goals (McNeely, 1988; McNeely & Miller, 1992). If true, such assertions should limit the appeal of educational approaches for conservation-oriented funding organizations, which must often demonstrate program success at relatively short, fiscal-year intervals to justify continued funding.

Proponents of tropical conservation education programs lack documentation supporting the benefits of such programs (Byers, 1995). Tropical programs tend to occur in remote locations, inhibiting outside access to direct information on program needs and achievements. Secondary sources of information, such as program reports, seldom appear in established education or conservation publications, further limiting the program information available to potential sponsors (Wood, Wood, & McCrea, 1985). The literature often lacks information essential to making informed decisions on effective funding allocation and appropriate program improvements. Program evaluations are rare or unreported (Jacobson, 1990). Yet program evaluation is necessary for making informed and objective decisions about education program needs and progress. Evaluation data can be used to determine how to improve existing programs, to develop appropriate new strategies, and to allocate limited capital resources effectively. It is therefore important to determine the contribution of evaluation approaches, as well as other program elements, to the success of programs.

To better understand the effectiveness of tropical conservation education programs as a conservation tool, we performed a content analysis of program reports published in mainstream literature between 1975 and 1990. Content analysis is a method for testing hypotheses about the contents of reports or other literature through categorization and quantitative analysis of statements obtained directly from the literature (Burns-Bammel, Bammel, & Kopitsky, 1988). Our objectives in the analysis were twofold: (a) to estimate the rates of program success reported and (b) to investigate potential correlates of reported program success, including geographic location, sponsor, duration, type of publication used for dissemination, and evaluation method (formative, summative, or long-term).

Method

Content Analysis

Content analysis is a technique for data collection, description, and interpretation, accomplished through the use of objective language, categorization, and systematic surveys (Burns-Bammel et al., 1988). Content analysis offers several advantages over more direct methods of data collection for an analysis of field-based education programs. First, it is time- and cost-efficient: first-hand field data would have to be collected from many different locations to yield a sufficient sample size for analysis, requiring expensive and time-consuming travel as well as substantial periods of time at each site to collect program data before, during, and after program initiation. In contrast, a content analysis can provide the same quantity of data in several weeks or months of library, telephone, and mail surveys. In addition, data collector bias is reduced because the collected data are reported (a) after program completion, (b) in the presence of an external program evaluator, and (c) without prior knowledge of the report's use in an analysis. Finally, content analysis is well suited to analyses of trends, because the catalogued items can span a large temporal or spatial field: In this study, we analyzed program results published over more than a decade and drawn from tropical areas throughout the world.

Data Collection and Analysis

Reports selected for the study included all available articles describing conservation education programs in tropical countries published between 1975 and 1990. References used in the study were located through the Educational Resources Information Center (ERIC) database, which includes proceedings from workshops and symposia, journal articles, book sections, clearinghouse reports, and foreign documents. Key words and phrases related to conservation education and tropical regions of the world were used to locate items in the database.

Data collected from each report were recorded and categorized on the basis of predetermined attribute descriptions to address the two objectives: estimation of program success rates and identification of program attributes correlated with program success. The attributes considered in the analysis and the categories used for each attribute are described in Table 1.

Objective 1: Estimation of Program Success

Two criteria were used to define program success: (a) reported achievement of a program's goals and objectives and (b) additional positive results reported to have occurred. All reported objectives were compared with all listed outcomes for each program. Programs in which at least half (50%) of the stated objectives were met were described as having achieved their objectives. This liberal standard was used to avoid underestimating program success. Reports were then searched for other indications of positive outcomes regardless of whether they were stated as objectives. Programs were considered successful if they met either of these two criteria.
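The two criteria reduce to a simple decision rule: a program counts as successful if at least half of its stated objectives were reported met, or if any additional positive outcomes were reported. The sketch below is our own illustration of that rule (the function and argument names are ours, not the study's):

```python
def is_successful(objectives_stated, objectives_met, other_positive_outcomes):
    """Classify a program report under the study's two success criteria:
    (a) at least half (50%) of stated objectives were reported achieved, or
    (b) additional positive outcomes were reported to have occurred.
    A program is successful if EITHER criterion holds."""
    achieved = objectives_stated > 0 and (objectives_met / objectives_stated) >= 0.5
    return achieved or other_positive_outcomes

# A program meeting only 2 of 5 objectives fails criterion (a), but still
# counts as successful if unanticipated positive outcomes were reported.
print(is_successful(5, 2, other_positive_outcomes=True))   # True
print(is_successful(5, 2, other_positive_outcomes=False))  # False
```

The deliberately liberal 50% threshold matches the study's stated aim of avoiding underestimates of program success.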

Objective 2: Factors Correlated With Program Success

Seven possible correlates of success were considered (Table 1): geographic region in which the program took place, program sponsor, duration of the program at the time of the report, type of publication in which results were reported, and three evaluation methods (formative, summative, and long-term).

Program locations were categorized by continent. Programs taking place in island nations were placed in the category of the nearest neighboring continent. A regional category, world, was created to represent programs designed to address a global audience. Sponsorship was divided into five categories. If the sponsor was a single agency, it was categorized as an international, national, or private group. Multiple-agency sponsorship required the involvement of several agencies at different levels, such as a coordinated effort by a national group and an international nonprofit organization. Articles reviewed were categorized by the type of publication in which they were located: book, proceedings from a conference or workshop, or journal. Program duration was identified as shorter than 3 years, 3 years or longer, or unreported if no information was given.

Three types of program evaluation were identified for reports. Evaluation occurring while a program was underway was defined as formative evaluation (Passineau, 1975). Immediate postprogram evaluation was termed summative evaluation. Evaluation reported to have occurred at least 6 months after program completion was defined as follow-up or long-term evaluation. Analyses of potential correlates to program success were conducted using chi-square statistics (Zar, 1984).
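For 2 x 2 comparisons of this kind (df = 1), the chi-square statistic is conventionally computed with the Yates continuity correction, and applying that correction to the counts in Table 2 appears to reproduce the statistics reported in the Results. The following pure-Python sketch is our own illustration, not code from the study:

```python
def chi_square_2x2(a, b, c, d):
    """Yates-corrected chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]] (cf. Zar, 1984), appropriate for df = 1 comparisons
    such as evaluation used (yes/no) vs. objectives achieved (yes/no)."""
    n = a + b + c + d
    numerator = n * (abs(a * d - b * c) - n / 2) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Formative evaluation vs. objectives achieved, counts from Table 2:
# 13 yes/achieved, 5 yes/not achieved, 12 no/achieved, 26 no/not achieved
print(round(chi_square_2x2(13, 5, 12, 26), 2))  # 6.6

# Long-term evaluation vs. objectives achieved, counts from Table 2
print(round(chi_square_2x2(13, 1, 12, 30), 2))  # 15.05
```

Both values match those reported in the Results section, which suggests the continuity-corrected form was the one used.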

Results

The search generated 56 published conservation education reports suitable for content analysis (Table 2). Half of all program reports were located in books; the other half were divided nearly equally between published proceedings and journals. The largest percentage of programs, 38%, occurred in Africa; 23% occurred in Asia, 21% in Latin America, and 13% in Australia, and 5% addressed a global audience.

Sponsorship of education programs was balanced among international (25%), national (29%), and multiple-agency (23%) projects. Nine programs, or 16%, were privately sponsored, and four reports did not mention a sponsor. Sponsorship of programs was unequally distributed among regions. For example, more than half of all Asian programs (53%) listed single-agency, national-level sponsors, whereas the largest share of Latin American programs (42%) reported multiple-agency sponsors. No Latin American case studies listed a private sponsor, whereas nearly half of all Australian programs (43%) listed private sponsors.

About half of all reports (54%) indicated program longevity. Two thirds (66%) of the programs reporting duration had been in progress for more than 3 years. Programs in Latin America reported the highest rate of programs lasting 3 years or more (88%). All multiple-agency-sponsored programs reporting duration had been in existence for at least 3 years, whereas every other sponsoring group reported a greater proportion (33% or more) of short-term programs.

Success Rates Reported

Fewer than half of all programs (45%) reported achievement of at least half of their specified objectives; 68% of these also cited unanticipated positive outcomes resulting from the program. Six additional programs also recorded unanticipated positive results. Programs that achieved their objectives reported significantly higher rates of additional positive outcomes than programs in which objectives were not achieved, χ²(1, N = 56) = 11.60, p < .01.

Attributes Correlating With Program Success

No significant differences were found when we compared program location, publication type, program sponsor, or program longevity with the indexes of success (Table 3). However, programs with a duration of more than 3 years had twice the success rate of briefer programs, suggesting a trend, χ²(1, N = 56) = 5.39, p = .068.

The only parameters showing clear relationships with success rates were the evaluation techniques used during the program. Some form of evaluation was used in fewer than half of all reports (45.6%). Evidence of formative evaluation was most common (32.1% of all reports). Formative evaluation procedures included periodic or spot checks to determine progress toward specified objectives through interviews with educators, program staff, and members of the target audience (e.g., Dyasi, 1982; Jacobson, 1987b; Mosha & Thorsell, 1984).

Summative evaluation was less commonly noted (12.5%). Summative evaluations were based on written or oral postcourse tests to determine changes in knowledge or attitudes. For behavioral objectives, summative evaluation was performed by tabulating the anticipated results of actions. For example, in one case a primary objective was to have people plant a given number of seedlings. Part of the summative evaluation process was to count the number of seedlings planted in comparison with the stated objective (Earnshaw & Skilbeck, 1982).

Long-term evaluation occurred in one quarter of all program reports. An example of long-term evaluation was return visits to the site of an education program 6 months after its conclusion to measure longer term behaviors resulting from the project. This method was applied to a course targeting a change in land-use practices on rapidly eroding soils in the Philippines (Fujisaka, 1989). Farmers were observed months later to determine if methods taught were still being implemented.

Articles reporting the use of formative evaluation were significantly more likely to report that the program had achieved objectives and positive outcomes, χ²(1, N = 56) = 6.60, p = .01, and χ²(1, N = 56) = 8.82, p = .003, respectively. Likewise, reports of long-term evaluation were associated with higher rates of program success on both indexes (objectives achieved, χ²(1, N = 56) = 15.05, p < .001; other positive outcomes, χ²(1, N = 56) = 9.98, p = .002). Surprisingly, program success was not correlated with summative evaluation procedures (Table 3). Articles documenting the use of any two forms of evaluation, however, did report significantly higher rates of program success.

Discussion

Evidence of success is needed to justify the use of education programs as a tool for achieving conservation goals. Many tropical conservation programs are funded by agencies and institutions located in temperate industrial nations, far from the targeted audiences. These institutions depend on development "experts" for guidance and on members of the general public for donations; both groups, in turn, rely on published materials for information. For programs occurring in areas distant from potential donors, published reports of progress may be the primary source used to form opinions regarding the status, progress, and success and failure rates of various conservation strategies. In this content analysis we surveyed the types of information contained in published reports of tropical conservation education programs, the degree to which various types of success are reported, and possible correlates of success.

Our results show that rates of program success were low, even using rather lenient criteria to describe program success. These low success rates indicate that present programs are ineffective or weak mechanisms of change for meeting conservation objectives. This finding is reflected in a recent report to the Biodiversity Support Program (1994) funded by the U.S. Agency for International Development. Byers (1995) concluded that traditional environmental education has failed.

Most program reports contained no information regarding evaluation techniques implemented before, during, or after the program. Yet many forms of evaluation, both qualitative and quantitative, can help improve education programs. In this study we considered three broad types of evaluation, each with distinct advantages. Formative evaluation assists with immediate modifications of program design and implementation but does not describe a program's overall achievements. Summative evaluation, which occurs after a program is completed, provides information about program success or failure but offers little information about the causes of these results. Follow-up evaluation offers the opportunity to monitor the long-term effects of program strategies. Although the use of summative evaluation had no significant relationship with program success, the use of either formative or long-term evaluation was associated with dramatically higher success rates. It appears from this study that conducting formative evaluations while a program is underway, together with follow-up evaluations to monitor program impact, maximizes a program's success. Yet studies indicate that long-term evaluation is the least likely form of evaluation to be conducted because of the time and resources required to implement it (Jacobson, 1987a).

Many of the evaluation techniques used in cases we reviewed for this study consisted of qualitative, periodic interviews with teachers, students, or other project participants. The informal nature of evaluations conducted in these successful programs implies that time-, labor-, and capital-intensive techniques are not essential to obtain useful information for program improvements, although systematic program assessment may well improve programs further.

Program length correlated positively with program success. It was not clear, however, whether duration itself or the potential for multiple assessment opportunities caused this relationship. In fact, our data support the latter: 75% (n = 21) of the programs in operation for 3 or more years described the use of follow-up evaluation techniques. Continued support of education programs despite initial low success rates is defensible if time is a critical factor in obtaining long-lasting results (Jacobson, 1995).

This analysis reveals that published reports describing tropical conservation education programs between 1975 and 1990 show low success rates at achieving objectives and positive outcomes and correspondingly low rates of program evaluation. The lack of program evaluation remains a problem. For example, of 14 ongoing conservation education programs in Africa documented by the Biodiversity Support Program in 1994, only 3 were implementing evaluation or monitoring approaches.

The inclusion of formative and follow-up assessment procedures in existing and future tropical conservation education programs, and a more complete description of programs in published reports, should reveal the advantages of conservation education strategies more clearly to the public and to sponsors of conservation programs. Advocates of educational approaches to conservation claim that these methods provide less costly and more lasting results than incentive-based programs. Results of program evaluations can provide the hard data to support such claims.

ACKNOWLEDGMENTS

We thank M. McDuff and J. Norris for insightful reviews of this manuscript. This article is Florida Agricultural Experiment Station Journal Series No. R-05270.

TABLE 1. Description of Attributes for Content Analysis

Attribute / Category: Content analysis criteria for categorization

Program objectives achieved
  Yes: At least 50% of stated objectives were reported to have been successfully achieved.
  No: Fewer than 50% of stated objectives were reported to have been achieved.

Other positive outcomes occurred
  Yes: Positive products, not included as specific objectives, were reported to have occurred.
  No: No additional, unanticipated positive outcomes were reported to have occurred.

Publication type
  Book: Article obtained from an edited book that was not written solely as the result of a conference.
  Journal: Article obtained from a peer-reviewed journal (e.g., The Journal of Environmental Education).
  Proceedings: Article obtained from conference proceedings.

Sponsorship
  Multiagency: Sponsors include more than one agency type (e.g., NGO and government agency).
  International: Primary sponsor is an international organization (e.g., World Bank, UNEP).
  National: Primary sponsor is a national-level agency (e.g., Kenya Parks and Wildlife Department).
  Private: Primary sponsor is a private donor (e.g., private school or individual contributor).
  Not reported: No information about the sponsor is included.

Region
  Africa: Project occurs in tropical Africa, including Madagascar.
  Asia: Project occurs in tropical India or east of India.
  Australia: Project occurs in Australia or New Zealand.
  Latin America: Project occurs in tropical South or Central America or the Caribbean.
  World: Project oriented to a tropical global audience.

Program length
  At least 3 years: Program existed for at least 3 years at the time the article was written.
  Fewer than 3 years: Program existed for fewer than 3 years at the time the article was written.
  Not reported: No indication of the program length was mentioned in the article.

Summative evaluation
  Yes: Article included indications of summative evaluation procedures used in the program.
  No: Article did not include indications of summative evaluation procedures used in the program.

Formative evaluation
  Yes: Article included indications of formative evaluation procedures used in the program.
  No: Article did not include indications of formative evaluation procedures used in the program.

Long-term evaluation
  Yes: Article included indications of follow-up (long-term) evaluation procedures used in the program.
  No: Article did not include indications of follow-up evaluation procedures used in the program.

2 or more evaluation types
  Yes: Article included indications of two or more evaluation procedures used in the program.
  No: Article did not include indications of two or more evaluation procedures used in the program.

TABLE 2. Actual Number of Articles Meeting Success Criteria for Each Attribute Category

Legend for table:

A - Objectives achieved (n = 25)

B - Objectives not achieved (n = 31)

C - Positive outcomes occurred (n = 23)

D - No positive outcomes occurred (n = 33)

Attribute n A B C D

Publication type

Book 29 11 18 11 18

Journal 14 9 5 8 6

Proceedings 13 5 8 4 9

Region

Africa 21 8 13 7 14

Asia 13 5 8 7 6

Australia 7 4 3 1 6

Latin America 12 7 5 7 5

World 3 1 2 1 2

Sponsorship

Multiagency 13 8 5 8 5

International 14 6 8 7 7

National 16 9 7 7 9

Private 9 1 8 1 8

Not reported 4 1 3 0 4

Program length

At least 3 years 21 15 6 14 7

Fewer than 3 years 9 3 6 4 5

Not reported 26 7 19 5 21

Summative evaluation

Yes 7 5 2 4 3

No 49 21 28 19 30

Formative evaluation

Yes 18 13 5 13 5

No 38 12 26 10 28

Long-term evaluation

Yes 14 13 1 11 3

No 42 12 30 11 31

2 or more evaluation types

Yes 12 11 1 8 4

No 44 15 29 14 30

TABLE 3. Results of Chi-Square Analysis

Legend for table:

A - df

B - χ²

C - p

                      Objectives achieved    Other positive outcomes

Attribute          A  B      C               B      C

Publication type   2  2.48   …
