At-a-Glance Test Review: Oral and Written Language Scales: Listening Comprehension and Oral Expression (OWLS)

Name of Test: Oral and Written Language Scales: Listening Comprehension and Oral Expression (OWLS)

Author(s): Elizabeth Carrow-Woolfolk

Publisher/Year: American Guidance Service, Inc., 1995

Forms: One

Age Range: 3 years through 21 years

Norming Sample: The sample was collected in 1992-1993, with tryout testing in 1991.

Total Number: 1,985. Number and Age: Students ranged in age from 3 to 21 years, in 13 age groups: 6-month intervals for ages 3 years, 0 months through 4 years, 11 months; 1-year intervals for ages 5 years, 0 months through 11 years, 11 months; and then age groups 12-13, 14-15, 16-18, and 19-21. All age groups had N=100 or more persons. The youngest children, ages 3 and 4, were given only the Listening Comprehension and Oral Expression Scales. Location: 74 sites. Demographics: Reported by gender, geographical region, race/ethnicity, and socioeconomic status (maternal education). Rural/Urban: Not specified. Sample characteristics compared favourably with 1991 U.S. census information (Bureau of the Census, 1991).

Summary Prepared By: Eleanor Stewart, November 2007

Test Description/Overview:

Theory: The OWLS is based on Elizabeth Carrow-Woolfolk’s previous work in language test development. Language knowledge refers to the structure of language, which includes content and form, while language performance refers to the “internal systems the language user employs to process language” (Carrow-Woolfolk, 1995, p. 7). The author proposes that these two dimensions together account for verbal communication. Language processing theory, according to Carrow-Woolfolk, “separates the four major processes by the requirements of their perspective processing systems” (p. 12). The OWLS is organized around the four structural categories she proposes (lexical, syntactic, pragmatic, and supralinguistic) and the four processes: listening comprehension, oral expression, written expression, and reading. Her model resembles Lois Bloom’s content/form/use model, familiar to speech-language pathologists and developmental linguists.

Purpose of Test: The purpose of this test is to assess language knowledge and processing skills. The author identifies its uses as identifying language problems, planning intervention, monitoring progress, and supporting research. She states that identifying language problems will assist in “addressing potential academic difficulties” (Carrow-Woolfolk, 1995, p. 3). According to the author, growth can be tracked across time from preschool through high school and into post-secondary education, and because of the wide age range covered, she claims the test is useful in research studies.

Areas Tested: Listening Comprehension and Oral Expression

Who can Administer: School psychologists, speech-language pathologists, educational diagnosticians, early childhood specialists, and other professionals with graduate-level training in testing and interpretation may administer this test.

Administration Time: Table 1.1 provides average administration times in minutes for the normative sample. The Buros reviewers estimated 15 to 40 minutes overall, with 5 to 15 minutes for Listening Comprehension and 10 to 25 minutes for Oral Expression (Carpenter & Malcolm, 2001).

Test Administration (General and Subtests): Start points by age are given for the Oral Expression Scale. The examiner begins each subtest with an example, and up to three examples may be given. A lower start item can be chosen if the student’s ability is in question. No repetitions are allowed on the Listening Comprehension Scale, whereas one repetition is allowed on the Oral Expression Scale. Prompting, allowed on the Oral Expression Scale, is outlined in the manual’s detailed section on item-by-item scoring rules. That section addresses the specifics of each test item in terms of scoring rule, preferred and acceptable responses, and errors (grammatical, semantic, pragmatic). Basal and ceiling rules apply and differ between the two subtests. Overall, administration is straightforward and easy to follow.

Test Interpretation: Chapter 7, “Determination and Interpretation of Normative Scores”, provides instruction for converting raw scores to standard scores, calculating confidence intervals and other standardized scores, dealing with scores of 0, and interpreting each type of standardized score. Interpretation of the OWLS is limited to the use of standardized scores. Appendix C, “Grammar and Usage Guidelines”, provides a useful glossary and an introduction to common grammatical mistakes that the examiner may encounter (e.g., faulty agreement between subject and verb).

Standardization: Age equivalent scores (called test-age equivalents), grade equivalent scores, percentiles, standard scores, and stanines are reported for the Listening Comprehension, Oral Expression, and Oral Composite scores. Normal Curve Equivalents (NCE) are also provided, as some agencies and legislative requirements mandate their use. Mean standard scores for both the Listening Comprehension and Oral Expression Scales were 100 with a standard deviation of 15. SEMs (at the 68, 90, and 95% levels) and confidence intervals are presented by age on page 123 (Carrow-Woolfolk, 1995). Across age ranges, the SEM was 4 standard score points for the Oral Composite, 6.1 for Listening Comprehension, and 5.4 for Oral Expression. No mention of, or caution regarding, the use of age equivalent scores was found in the manual.

Reliability:

Internal consistency of items: Mean reliability coefficients (using Fisher’s z transformation) for the subtests and composite were high: .84, .87, and .91, respectively.

Test-retest: Samples of students ages 4 years, 0 months through 5 years, 11 months (n=50); 8 years, 0 months through 10 years, 11 months (n=54); and 16 years, 0 months through 18 years, 11 months (n=33) were randomly selected for retesting, and sample characteristics were provided. The median interval between testings was eight weeks. Corrected coefficients ranged from .73 to .89. (Gain was computed as the second testing minus the first.)

Inter-rater: Ninety-six students in age groups 3 to 5, 6 to 8, 9 to 12, and 13 to 21 years were used. Coefficients ranged from .93 to .99, with a mean of .95. A second analysis was conducted with the Multi-Faceted Rasch Model (FACETS). Five items were found to be problematic: three because of rater errors (recording mistakes), while the remaining two received lower scores, and on this basis the manual was clarified and examples were added.

Validity:

Content: The author refers readers to the material presented in the introduction regarding the model and descriptions of the constructs (Chapters 2 and 3). Comment: No other information about content validity is provided, whereas newer tests include research to support such validity claims.

Criterion Prediction Validity:

Language: Results: PPVT-R .75, TACL-R Total Score .78, and CELF-R Total Language .91.

Cognitive ability: Results: K-ABC Achievement Score .82, WISC-III Verbal IQ .74, and K-BIT Vocabulary subtest .76. Correlations with nonverbal ability were .70, .69, and .65, respectively, for the appropriate subtests of each test. Global score correlations were .76, .73, and .75, respectively.

Academic achievement: Results indicated “positive correlations between the Oral Composite and the K-TEA, PIAT-R and WRMT-R, suggesting dependence on language in academic tasks” (Carrow-Woolfolk, 1995, p. 134). This statement is confirmed by the data, which show the highest correlation with WRMT-R Word Comprehension (.88) and the lowest with the K-TEA Mathematics Composite (.43).

Clinical validity: All clinical groups studied (speech impaired; language delayed; language impaired; mentally handicapped; learning disabled, both reading-specific and undifferentiated; hearing impaired; and Chapter One, i.e., children in special reading programs) evidenced the expected performance differences.

Construct Identification Validity: Evidence is provided for two types of construct validity: developmental progression of scores and intercorrelations of the scales. In terms of developmental progression, age differentiation is evidenced by increases in raw scores with age, with steeper increases in the earlier years. Intercorrelations: Moderate correlations between the Listening Comprehension Scale and the Oral Expression Scale, ranging from .54 to .77 with a mean of .70, indicate that each scale taps skills that are unique but nonetheless related, supporting the use of the overall Oral Composite score.

Differential Item Functioning: Not reported.

Summary/Conclusions/Observations: This test is useful across a wide range of ages and uses common task formats. While relatively easy to administer, the OWLS’ scoring schema is complex and probably easier to master for someone with a linguistics background. The two Buros reviewers differ on many aspects in their reviews (Carpenter & Malcolm, 2001).

Clinical/Diagnostic Usefulness: This is an older test now, superseded by such measures as the CELF-4, which is more comprehensive, more current, and more closely linked to current U.S. education requirements and curriculum. I think that few SLPs will use the OWLS, but others who must make decisions regarding reading abilities may still find this test useful, with the caveats described in this review and in that of the Buros reviewers.

References

Carpenter, C., & Malcolm, K. (2001). Test review of Woodcock Reading Mastery Test-Revised 1998 Normative Update. In B. S. Plake & J. C. Impara (Eds.), The fourteenth mental measurements yearbook (pp. 860-864). Lincoln, NE: Buros Institute of Mental Measurements.

Carrow-Woolfolk, E. (1995). Manual: Listening Comprehension and Oral Expression. Circle Pines, MN: American Guidance Service, Inc.

Current Population Survey, March 1991 [machine-readable data file]. (1991). Washington, DC: Bureau of the Census (Producer and Distributor).

To cite this document:

Hayward, D. V., Stewart, G. E., Phillips, L. M., Norris, S. P., & Lovell, M. A. (2008). At-a-glance test review: Oral and written language scales: Listening comprehension and oral expression (OWLS). Language, Phonological Awareness, and Reading Test Directory (pp. 1-4). Edmonton, AB: Canadian Centre for Research on Literacy. Retrieved [insert date] from .
