Test Review: Brigance Diagnostic Comprehensive Inventory of Basic Skills-Revised (CIBS-R)
Name of Test: Brigance Diagnostic Comprehensive Inventory of Basic Skills-Revised (CIBS-R)
Author(s): Albert H. Brigance (CIBS-R Standardization and Validation Manual by Frances Page Glascoe)
Publisher/Year: Curriculum Associates, Inc., 1976, 1983, 1999
Forms: Form A (pre-test) and Form B (post-test)
Age Range: Kindergarten through Grade 6, ages 5 through 12 years

Norming Sample

A. Brigance undertook revisions to items and skill sequences in 1996. The national study was undertaken between 1997 and 1998 to "define the range of students' performance across grade, race, and socioeconomic variables, and to generate a range of standard scores…" (Brigance, 1999, p. 27). Sampling procedures are described, and efforts were made to ensure the participation of a range of students reflecting U.S. demographics from the 1998 U.S. Census Bureau figures. Examiners were classroom teachers at each site who "assessed their own students in their own classrooms. This ensured that standardization conditions reflected how the CIBS-R is most often used - by teachers working in classroom settings. Three separate samples of students were collected at distinct points during the school year" (fall, winter, and spring) (Brigance, 1999, p. 28).

Total Number: 1,121
Number and Age: The sample consisted of 1,121 children: ages 5 years (n=183), 6 years (n=102), 7 years (n=153), 8 years (n=189), 9 years (n=193), 10 years (n=158), and 11 to 12 years (n=143).
Location: Norming samples were collected at six sites in four regions of the United States (North, Central, South, and West).
Demographics: Demographics were reported by four geographic regions, gender, race/ethnicity, residence (urban-suburban/rural), family income, educational attainment of parents, and disability. The sample characteristics were compared to U.S. census information.
Rural/Urban: yes
SES: SES was determined by family income and by "participation of subjects or siblings in Federal School Lunch Program" (Brigance, 1999, p. 29).
Other (Please Specify): no information
Summary Prepared By (Name and Date): Eleanor Stewart, October and November 2007
Test Description/Overview

The Brigance Diagnostic Comprehensive Inventory of Basic Skills-Revised (CIBS-R) provides tools for assessing students' skills from kindergarten through the elementary and middle school years. It also allows teachers to identify children who may need further diagnostic assessment or may be eligible for special programs. Students' strengths and weaknesses, as well as mastery, can be profiled.

The new edition, the CIBS-R, includes norm-referenced scores for certain subtests, thus permitting interpretation and comparison with other measures of achievement using standard scores.

The test kit consists of an examiner's manual (the Standardization and Validation Manual by Glascoe, 1999), a teacher's class record book, a 40-page student record, scoring sheets, a CD conversion program, and a 3-inch binder compiling tests across eight areas.

Purpose of Test: The purpose of the test is to assess children's school-based skills in the early elementary and middle grades. The CIBS-R identifies strengths and weaknesses in an objectives-referenced format and can be used to refer eligible students for special programs.

Areas Tested: The CIBS-R includes a total of 154 assessments across eight areas: Readiness, Speech, Listening, Reading, Spelling, Writing, Research and Study Skills, and Mathematics.

Subtests are grouped as follows to create composites: Basic Reading Composite, Reading Comprehension Composite, Math Composite, Written Expression Composite, and Listening Comprehension Indicator. Information Processing is addressed as Math Information Processing, Writing Information Processing, and Reading Information Processing.

Oral Language: Vocabulary, Grammar, Narratives
Print Knowledge: Environmental Print, Alphabet
Phonological Awareness: Segmenting (divides words into syllables), Blending, Elision, Rhyming, Other (identifies sound in spoken word, states word having same initial/final sounds, substitutes initial consonant sounds)
Reading: Single Word Reading/Decoding, Comprehension (passage)
Spelling: Other (none specified)
Writing: Letter Formation, Capitalization, Punctuation, Conventional Structures, Word Choice, Details, Other (sentence writing)
Listening: Lexical, Syntactic, Other (sentence)
Comment: The binder containing the 154 assessments is extensive. The organization of skills was difficult to follow, as some skills were grouped in unfamiliar ways. For example, Assessment of Speech contained both speech and language skills (e.g., #B-2 Responds to Picture, #B-3 Articulates Initial Sounds of Words). Later, in the "Word Analysis" section, I found many items that I would consider related to phonological awareness (e.g., #G-3 "Identifies Initial Consonants in Spoken Words").

Who can Administer: Teachers and assistants (with supervision) may administer this test.

Administration Time: Testing time varies according to the number and selection of tests administered but is estimated at approximately one hour; however, assessment can be conducted over several sessions/days. The technical manual states, "If administering all standardized assessments, the Readiness battery requires approximately 1 ¼ hours and the CIBS-R first-grade through sixth-grade battery takes 45 to 60 minutes" (Brigance, 1999, p. 9). Comment: In my experience, individual administration is time consuming for classroom teachers, especially in early education settings where all students demonstrate delays. Nonetheless, some school districts require that the CIBS-R be administered.
Test Administration (General and Subtests):

The examiner can select specific tests to administer in any order. A quick screener, consisting of Comprehends Passages, Sentence Writing, and Computational Skills, is available for grades one through six. The author states, "Each is a strong predictor of overall success in school, and all tap critical school skills" (Brigance, 1999, p. 2). These screening tests can also be timed so that scores for "Reading Information Processing, Writing Information Processing, and Math Information Processing" are available. Standardized administration is urged only when the norm-referenced scores will be used.

General instructions are found in the manual, in Chapter 2, and in the binder preceding each test. Entry points are identified and usually fall two grades below the student's current grade. Basals are established with two to five successfully completed items. Ceilings, or discontinuation points, are similarly identified in each test. The author states that the basal and ceiling can be used to determine the instructional range. Specific instructions for each test are outlined in the test binder.

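To make these mechanics concrete, the following is a minimal Python sketch of how basal and ceiling rules of this general kind operate. The run lengths and the function name are illustrative assumptions, not the publisher's algorithm; the actual CIBS-R demarcations vary from assessment to assessment.

from typing import List, Optional, Tuple

def instructional_range(responses: List[int],
                        basal_run: int = 3,
                        ceiling_run: int = 3) -> Tuple[Optional[int], Optional[int]]:
    """responses: 1 = correct, 0 = incorrect, in administration order.
    Returns (basal_index, ceiling_index). Run lengths are illustrative;
    the CIBS-R rules differ from assessment to assessment."""
    basal = ceiling = None
    correct_streak = error_streak = 0
    for i, r in enumerate(responses):
        if r == 1:
            correct_streak += 1
            error_streak = 0
            if basal is None and correct_streak >= basal_run:
                basal = i  # last item of the basal run
        else:
            error_streak += 1
            correct_streak = 0
            if error_streak >= ceiling_run:
                ceiling = i  # discontinue testing here
                break
    return basal, ceiling

print(instructional_range([1, 1, 1, 0, 1, 0, 0, 0]))  # (2, 7)

Under rules of this kind, the items falling between the basal and the ceiling approximate the instructional range the author describes.
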
Comment from Buros reviewer: "Overall, the process of determining basal and ceiling levels and recording student responses seems cumbersome and unnecessarily susceptible to error. In addition to the need to recall and transfer correct scoring marks into the record book, the examiner must also keep in mind the basal and ceiling demarcations, which vary from assessment to assessment…No information is provided to validate the use of the basal and ceiling cutoffs" (Cizek & McLellan, 2001, p. 173).

Test Interpretation:

The CIBS-R generates objectives-referenced information; however, this new revision allows norm-referenced information to be obtained as well. Instructions for using the standardized scores following a standardized administration are found in the technical manual (Glascoe, 1999, pp. 14-18). Alternate form use is described. A section defining and interpreting raw scores, percentiles, quotients, and age and grade equivalents follows. The CD can assist with scoring. A sample educational evaluation is presented in the final pages of the chapter on administration in the technical manual.

Comment from Buros reviewer: "Two examples of this reporting are included in the manual. One example shows a norm-referenced report with instructional ranges, percentile ranks, and quotients for several areas. The other is a model of a narrative description of a student's performance, developed by the test administrator, that summarizes the skill levels and adds commentary on behavioural observations. The model narrative is nearly four pages in length, and is thorough in its explanation of results and recommendations for remediation strategies and materials. On the one hand, it is encouraging to see this type of narrative synthesis included as truly useful information that parents and teachers receive following administration of a standardized test. On the other hand, there are at least two causes for serious concern related to this information. First, the behavioral and other information summarized in the report could only come from the person who actually administered the CIBS-R. Thus, this type of reporting would be precluded in situations where a teaching assistant conducted the administration, as such a person would ordinarily lack the training necessary to develop the narrative. Second, and more importantly, the conclusions in the model report seem to go way beyond the data collected by the assessment….Because this narrative information might likely be more salient to parents and teachers than the quantitative scores, it is essential that it be based on more careful measures of those behaviors" (Cizek & McLellan, 2001, p. 173).
Standardization: age equivalent scores, grade equivalent scores, percentiles, standard scores, stanines
Other (Please Specify): quotients (scale with mean of 100 and SD of 15)
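
Because quotients are reported on a mean-100, SD-15 scale, a percentile rank can be recovered by assuming a normal score distribution. A minimal Python sketch follows; the function name is hypothetical and not part of the CIBS-R materials or its CD program.

from statistics import NormalDist

def quotient_to_percentile(quotient: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Convert a mean-100, SD-15 quotient to a percentile rank."""
    z = (quotient - mean) / sd
    return 100.0 * NormalDist().cdf(z)

print(round(quotient_to_percentile(85), 1))   # ~15.9 (one SD below the mean)
print(round(quotient_to_percentile(115), 1))  # ~84.1 (one SD above the mean)
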
A comment from the Buros reviewer regarding the supplemental composite scores addresses information processing in the areas of math, writing, and reading: "All three composite scores are based on production rates and appear to have been named for user appeal rather than actual utility. Test users are strongly encouraged to use caution when judging the validity of these three composites" (Cizek & McLellan, 2001, p. 176).
Reliability:

Internal consistency of items: This information is presented last in the series of reliability sections (Glascoe, 1999, p. 35). Because the CIBS-R is a criterion-referenced measure, Guttman scalability coefficients (Guttman Lambda Coefficient [R]) were used. Glascoe states, "the coefficients serve as an indicator that each assessment and its items are hierarchical, unidimensional, and homogenous measures of academic and readiness skills" (p. 35). The resulting coefficients are lower than those expected with norm-referenced tests (.80 and above). Nonetheless, Glascoe claims that the coefficients are still sufficient for eliminating test items that lack discriminating power. She further states that the resulting R values are high (most are at or above .70). SEMs are also presented in the section on internal consistency, though these more properly belong in the section on reliability. SEMs appear to have a wide range across the different tests (.3 to 8.5).

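The manual does not reproduce the computation, but a Guttman scalogram analysis of the general kind described can be sketched as follows. This illustrative Python version counts deviations from the ideal pass-then-fail pattern to yield a coefficient of reproducibility, one common scalability index; the exact coefficient Glascoe reports may be computed differently.

import numpy as np

def coefficient_of_reproducibility(responses: np.ndarray) -> float:
    """responses: (n_students, n_items) matrix of 0/1 scores,
    with items ordered easiest to hardest."""
    n_students, n_items = responses.shape
    errors = 0
    for row in responses:
        total = int(row.sum())
        # Ideal Guttman pattern: every pass precedes every failure.
        ideal = np.array([1] * total + [0] * (n_items - total))
        errors += int(np.sum(row != ideal))
    return 1.0 - errors / (n_students * n_items)

# A perfectly hierarchical (unidimensional) item set scores 1.0.
data = np.array([[1, 1, 1, 0],
                 [1, 1, 0, 0],
                 [1, 0, 0, 0]])
print(coefficient_of_reproducibility(data))  # 1.0
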
Comment from Buros reviewer: "Users should also be cautioned about the inaccurate definition of standard errors which, it is claimed, can be 'added to and subtracted from each student's raw score in order to provide a theoretical, error-free indicator of true performance' (p. 35). Additionally, the suggested interpretations of the lower SEM as the student's independent skill level and the upper SEM as a failure level are unsupported" (Cizek & McLellan, 2001, p. 174).

Test-retest: Forty-one (41) students, in kindergarten through grade six, were retested after a two-week interval, with the second administration performed by an educational diagnostician. Results are presented in Table 4.1, "Test-Retest Reliability of CIBS-R First Grade through Sixth Grade Assessments", and Table 4.2, "Test-Retest Reliability of CIBS-R Readiness Assessments". All reported correlations exceed .80 except the Listening Comprehension Indicator at .79, Writing Information Processing at .63, Reading Information Processing (F-2 timed Comprehends Passages) at .78, and Prints Personal Data at .73.

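Test-retest coefficients of this kind are simply Pearson correlations between the two administrations. A minimal Python sketch with hypothetical scores:

import numpy as np

# Hypothetical raw scores for the same six students at time 1 and time 2.
time1 = np.array([34, 42, 28, 50, 39, 45])
time2 = np.array([36, 40, 30, 52, 38, 47])
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # coefficients above .80, as reported, indicate stable scores
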
Inter-rater: Data for calculating inter-rater reliability came from the same sample used for the test-retest analysis reported above. The author states, "Since the 1998 study involved two different examiners (classroom teachers administered the CIBS-R at time one, and an educational examiner administered the CIBS-R at time two), the previously reported test-retest reliability figures also reflect inter-rater reliability" (Glascoe, 1999, p. 34). Since the coefficients were high, inter-rater reliability demonstrates that the "directions are sufficiently clear to enable different examiners to obtain virtually identical results" (p. 34).

Other (Please Specify): Equivalence of forms: The procedure for determining equivalence of forms included analysis of covariance. The author noted that the two samples used (n=492 children administered Form A and n=446 children administered Form B) differed on key characteristics related to school achievement; univariate F statistics and corresponding alpha levels are reported. No significant difference was found between the two forms at the .05 level. Therefore, only a single set of norms needed to be developed (p. 30).

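The manual's analysis is not reproduced, but an analysis of covariance comparing the two forms while adjusting for background characteristics can be sketched as follows. The column names (score, form, age) and the use of statsmodels are illustrative assumptions, not the manual's procedure.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with a total score, the form
# taken, and a covariate related to school achievement.
df = pd.DataFrame({
    "score": [88, 92, 75, 81, 95, 70, 84, 90],
    "form":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "age":   [6.1, 7.3, 8.0, 9.2, 6.4, 7.1, 8.5, 9.0],
})

# ANCOVA: test the effect of form on scores after adjusting for age.
model = smf.ols("score ~ C(form) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # a nonsignificant form effect supports equivalence
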
Alternate forms reliability: The alternate forms section is unclear, and no reliable interpretation is possible.
Validity:

Content: The author claims that "there is abundant support for the content validity of the CIBS-R and for its applicability in educational settings" (Glascoe, 1999, p. 39). The author continues with the claim that the test author, A. Brigance, read the literature in the areas addressed by the CIBS-R and consulted with educators throughout the U.S. on item selection. Comment: While this approach has some merit, it is not sufficient by test development standards and in comparison to other tests I have reviewed. Indeed, the Buros reviewer noted the same in his review, stating, "…in an objectives-referenced instrument, content validity evidence would seem to be of utmost importance. For some CIBS-R tests (called validated assessments), the test binder provides references to textbook series and grade levels in which the skill is normally encountered; references to scholarly references are also occasionally provided. For other tests, content validity information is not included. Overall, the level of content validity evidence provided falls short of 'abundant' as claimed in the technical manual" (Cizek & McLellan, 2001, p. 174).

Criterion Prediction Validity:
Correlations are presented in table format, with brief commentary from the author in the text, for the following:
- PIAT and WRAT in special needs groups.
- Partial correlations (adjusted for students' ages) between reading and written language tests: Iowa Test of Basic Skills, Stanford Achievement Test, and California Achievement Test for grades one through six.
- Partial correlations (adjusted for students' ages) between math, general information, and the total battery scores of the Iowa Test of Basic Skills, Stanford Achievement Test, and California Achievement Test for grades one through six.
- Woodcock-Johnson Psycho-Educational Battery: Tests of Achievement and the CIBS-R Readiness Assessments (restricted ranges).
- Teacher ratings and students' performance on Readiness assessments, demonstrating "close agreement" (Glascoe, 1999, p. 49).
- WISC-III, Woodcock-Johnson Psycho-Educational Battery: Tests of Achievement, Wechsler Individual Achievement Test, DAB-2, Kaufman Test of Educational Achievement, or Peabody Individual Achievement Test, administered by psychologists and educational diagnosticians responsible for testing children referred for special education services.

Construct Identification Validity:
Age differentiation is demonstrated by increasing median raw scores and standard deviations for students in grades one through six on the CIBS-R Assessments and Composites. Median raw scores for Readiness for children in kindergarten show progression from early, midyear, and year-end. Numbers and sample characteristics are not provided, nor is there information about subgroups of interest. In terms of test and composite correlations, the author provides commentary on the information provided in Tables 5.3 to 5.6. These tables present correlations as follows: Intercorrelations Among Readiness Assessments and Composites, Assessments and Composites for Grades One through Six, Readiness Assessments with Age and Grade, and CIBS-R Grades One through Six Assessments with Age and Grade. Again, the specifics of the sample are not stated here. Comment: Overall, the author's discussion of these data is scant, leaving the reader to interpret. I found the comments by the Buros reviewer illuminating. The Buros reviewer states, "Some construct validity evidence is also provided, although it does not always confirm the intended use of the test. A table of CIBS-R intercorrelations is provided; these all are fairly high and positive. However, scores on the listening vocabulary comprehension grade placement test correlated equally well with scores on reading, math, and written expression composites; spelling grade-placement test scores correlate more strongly with the math composite than with reading comprehension. Other correlations reveal that scores on the CIBS-R are related to age and grade level, with higher scores associated with advanced age and grade; scores on the CIBS also correlate moderately to fairly strongly with the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) full scale IQ scores. Based on the preponderance of the construct validity evidence provided, a reasonable case could be made that the CIBS-R functions nearly as effectively as a measure of general cognitive ability as it does a measure of discrete skill mastery" (Cizek & McLellan, 2001, p. 174).

Differential Item Functioning: not provided
Other (Please Specify): none
Summary/Conclusions/Observations:
The sheer size of the binder is daunting. There is a great deal of text to work through in order to be ready to administer the assessments. The examiner must use the specific information outlined in each assessment section, especially when considering standardized administration.

Comments from Buros reviewers:
"Until further developmental work remedies some of these concerns, the CIBS-R is probably best used as an adjunct information-gathering tool. It is possible that the CIBS-R attempts to do too much by attempting to satisfy the needs of those interested in both objectives- and norm-referenced assessment" (Cizek & McLellan, 2001, p. 175).
"Continued use of the criterion-referenced application of this measure is strongly recommended, but the standardized version should be used with caution when determining eligibility of students for special education services" (Cizek & McLellan, 2001, p. 176).
Clinical/Diagnostic Usefulness:
Based on what I learned from reviewing the CIBS-R, I would be inclined not to recommend it. However, I think that it deserves our attention because it is so widely used. I suspect that educational decisions might also be structured around the results. Perhaps in this way, the CIBS-R may be profoundly influential. I encountered use of the test in early education, where teachers were required to complete certain subtests. Given that the children in the program were already identified using stronger diagnostic tests, I felt that administration of the CIBS-R wasted the teachers' valuable time without providing information that was particularly useful in that context; however, district requirements prevailed.
References
Brigance, A. H. (1999). Brigance diagnostic comprehensive inventory of basic skills – revised (CIBS-R). North Billerica, MA: Curriculum Associates.
Cizek, G. J., & McLellan, M. J. (2001). Review of the Brigance Diagnostic Comprehensive Inventory of Basic Skills - Revised. In B. S. Plake & J. C. Impara (Eds.), The fourteenth mental measurements yearbook (pp. 172-175). Lincoln, NE: Buros Institute of Mental Measurements.
Glascoe, F. P. (1999). Standardization and validation manual for the Brigance Comprehensive Inventory of Basic Skills-Revised. North Billerica, MA: Curriculum Associates.
To cite this document:
Hayward, D. V., Stewart, G. E., Phillips, L. M., Norris, S. P., & Lovell, M. A. (2008). Test review: Brigance diagnostic comprehensive inventory of basic skills-revised (CIBS-R). Language, Phonological Awareness, and Reading Test Directory (pp. 1-8). Edmonton, AB: Canadian Centre for Research on Literacy. Retrieved [insert date] from .