


An Examination of the Social-Communication Test of the AEPS,

Second Edition

Teresa Brown

Kent State University

Abstract

The purpose of the study was to gain perceptions of the items contained in the Social-Communication Area of the Assessment, Evaluation, and Programming System (AEPS), Second Edition (Bricker, 2002), for use with children who speak a nonstandard dialect of English. A nonstandard dialect is defined as an English dialect that is not used as the primary means of instruction in schools (Cheatham, Armstrong, & Santos, 2009). Examples of nonstandard dialects include African American English, Hawaiian Creole, Hispanic English, North Midland Dialect, and Southern Mountain English. A basic interpretive approach and focus group methodology were used to gather data and analyze the findings. A sample of content experts and practitioners was assembled to review and discuss the items of the Social-Communication Test of the AEPS. The findings will be used to plan a subsequent study that will provide feedback and recommendations to the developers of the AEPS and add to the existing literature base regarding bias in curriculum-based assessments for children who speak a nonstandard dialect of English. The primary research question answered through this project is: What are experts’ perceptions regarding using the AEPS to assess the social-communication of children who speak a nonstandard dialect of English?

An Examination of the Social-Communication Test of the AEPS, Second Edition

Introduction

The present study was a basic interpretive focus group study of experts and practitioners. I came to the study to gain the perceptions of experts and practitioners regarding using the Assessment, Evaluation, and Programming System, Second Edition (Bricker, 2002), to assess the social-communication of children who speak a nonstandard dialect of English. I framed the question in this way because my primary focus was to understand the potential bias of the test items for children who speak a nonstandard dialect of English.

My question was prompted by my experience as an Early Childhood Intervention Specialist in an urban school district outside of Cleveland, Ohio. Many of the children and families I have had the privilege of working with speak a nonstandard dialect of English, primarily African American English (AAE). The majority of the children that I serve are receiving special education services due to a speech or language delay or disability. After reviewing several different early childhood assessment measures, I could not help but ask myself whether children are being over-identified for special education services due to cultural factors, in particular dialectal differences. When reviewing the items in the speech test and considering the characteristics of African American English, it is difficult for me to determine how information gained from the test can be an accurate representation of the child’s speech and language ability without taking cultural and linguistic diversity into consideration.

Practitioners are faced with the challenge of preserving a child’s home culture and language, while recognizing the impact of culture and language on the assessment process. Assessment results cannot be considered valid unless cultural and linguistic diversity is taken into consideration. The current literature on assessing children who are culturally diverse supports my assumption that children who speak a nonstandard dialect of English are over-represented in special education (Skiba, Simmons, Gibb, & Rousch, 2008; Harper, Braithwaite, & LaGrange, 1998).

I chose to have the focus groups review the items of the Social-Communication test of the Assessment, Evaluation, and Programming System (AEPS) for several reasons. I have used the AEPS in my practice for over ten years. I have received extensive training on the use of the AEPS, and I have been a provider of professional development and technical assistance to users of the AEPS for several years. I personally believe the AEPS is an effective tool for determining eligibility, planning instruction, and monitoring progress. The AEPS has established reliability and validity for assessing and developing goals for young children. The AEPS is in the process of being revised to reflect current changes in assessment standards and early learning content standards. Through the perceptions gained during the focus groups, I would like to plan a subsequent study that will be used to provide feedback and recommendations to the developers of the test and add to the existing literature base regarding bias in curriculum-based assessments for children who speak a nonstandard dialect of English.

Context

Children of all ages come to school having learned the language of their families. It is the language of their communities and is influenced by their culture. For many children this means they speak a nonstandard dialect of English. A nonstandard dialect can be defined as an English dialect that is not used as the primary means of instruction in schools (Cheatham, Armstrong, & Santos, 2009). A nonstandard dialect may be associated with a certain cultural or regional group (e.g., Chicanos, Midwesterners). Examples of nonstandard dialects include African American English, Hawaiian Creole, Hispanic English, North Midland Dialect, and Southern Mountain English. A nonstandard dialect has its own vocabulary, grammar, and syntax, and it may be spoken using a variety of accents. Table 1 provides examples of the vocabulary and syntax of African American English. Standard American English (SAE) is considered to have the most neutral accent and is primarily spoken in the Midland states. SAE is often considered to be based on the speech of the members of society with the greatest prestige and power. Studies have shown that very few people in the United States actually speak SAE; however, it is considered the standard for proper speech, especially in schools.

Table 1

Features of African American English

Deletion of “r” sounds (e.g., “mo” for “more”)

Alteration of “th” sound (e.g., “dat” for “that”)

Deletion of verb tense “to be” or substitution of “be” for “is” (e.g., “he be playing”)

Final consonant deletion (e.g., “fas” for “fast”)

Misuse of personal pronouns (e.g., “they go”)

Use of double negative for emphasis (e.g., “I ain’t got no toys”)

Reversal of word meanings (e.g., “bad” means “good”)

Source: Harper, Braithwaite, & LaGrange, 1998

Children are often penalized or corrected by educators for using a nonstandard dialect. Being scrutinized over the way they talk may cause children to withdraw and become anxious about speaking, and devaluing a child’s speech can affect the child’s cultural identity. Professional organizations such as the National Council of Teachers of English (NCTE, 2005), the National Association for the Education of Young Children (NAEYC, 1995), and Teachers of English to Speakers of Other Languages (TESOL, 1997) have set standards for the inclusion of children’s home languages and dialects in education. Research has supported the importance of incorporating nonstandard dialects into education to foster and support learning (Dyson & Smitherman, 2009). Further, research has also shown that when children are assessed using a familiar dialect or vernacular, their performance improves compared to when the items are presented in SAE (Galagan, 1985).

Assessing the speech and language of children who speak a nonstandard dialect presents many challenges. During the assessment process, practitioners may have difficulty determining whether a child’s communication is indicative of a disability or of a communication difference (Cheatham, Armstrong, & Santos, 2009). For example, Strand B of the AEPS Social-Communication test examines the child’s production of words, phrases, and sentences. Item 1.2 measures the child’s use of the copula verb “to be.” The criterion states, “child uses appropriate form of the verb ‘to be’ to link a subject noun to a predicate” (Bricker, 2002). A characteristic of African American English is the substitution of “be” for “is” or complete deletion of the verb “to be” (e.g., “she be outside” or “she outside” as opposed to “she is outside”). When assessed using the AEPS, such a child would be scored as not using the verb form “correctly,” and her overall test score would be reduced. Due to the difficulties that occur during the assessment process, children from culturally and linguistically diverse backgrounds are over-represented in special education, and children who speak a nonstandard dialect are often diagnosed as having a disability (Seymour, 2004; Skiba et al., 2008).

Through the proposed study, I set out to gain perceptions regarding the bias of items contained in the Social-Communication test of the AEPS, Second Edition, for children who speak a nonstandard dialect of English. The AEPS was chosen for the study because of the assessment’s extensive evidence base validating its use as an effective authentic assessment for preschool children with and without disabilities. The AEPS is a curriculum-based measure that is used for (a) determining children’s strengths and emerging skills, (b) planning and individualizing intervention, and (c) assessing children’s progress over time (Pretti-Frontczak, 2002). The AEPS covers six domains of development: fine motor, gross motor, adaptive, cognitive, social-communication, and social.

Social-communication refers to the ways information can be perceived, transmitted, and understood, as well as the impact those ways have on a society (Landa, 2005). The AEPS Social-Communication test examines a child’s receptive understanding of language, as well as the child’s form (e.g., uses subject pronouns, uses prepositions), content (e.g., asks questions for clarification), and use of language (e.g., uses words, phrases, or sentences to inform). Items in the AEPS test are scored with a 2, 1, or 0. When a child consistently meets the specified criterion, the item is scored a 2. When the child inconsistently meets the criterion, the item is scored a 1. When the child does not meet the criterion, the item is scored a 0. In addition, scoring notes are used to record additional information about a child’s performance on the test items. The scoring notes record information about assistance provided to the child, modifications or adaptations of test items, child behavior, and the quality of child performance (Bricker, 2002).

The use of curriculum-based assessments has been considered a viable alternative for assessing the skills of children who speak a nonstandard dialect of English, because the items in such tests have not been standardized. In a curriculum-based assessment such as the AEPS, each test item has a stated criterion, and the assessor measures how well the child meets that criterion. Information is gathered during authentic situations, by familiar people, and in familiar settings (e.g., home, school, playground). In standardized tests, the items have been normed using answers provided by a specific sample of children; if the child does not produce the answer as scripted, the child may not receive credit. For example, if a child answers a question with “I ain’t got none” when the scripted answer is “I do not have any,” he or she would not receive credit for the response. Even though curriculum-based assessments are the preferred method for assessing children who are culturally and linguistically diverse, the assessments lack reliability and validity evidence to support their use for specific populations (Galagan, 1985).

Personal Interest

As stated previously, I have a strong personal interest in both the AEPS and improving services for children who are culturally diverse. I have been a preschool special education teacher for 10 years, during which I have used the AEPS for eligibility determination, program planning, and progress monitoring. In addition, I have provided training and technical assistance to programs and individuals who use the AEPS for the past seven years. In my experience as a teacher, I have observed that the majority of the students served in preschool special education have some form of speech/language delay or disability. I believe finding an assessment that provides a comprehensive picture of a child's speech/language development is important for evaluation, planning instruction and intervention, and monitoring child progress.

For the past six years I have been teaching in an urban school district with at least 75% of my caseload consisting of children who are African American. When I first began teaching in the urban setting, I have to admit I experienced a bit of "culture shock". I was unfamiliar with how culture influenced how children learn and my interactions with parents. I also began to notice the dialectal differences among my students. Some of the students and families speak a dialect of English referred to as African American English, or AAE. Some things I noticed were differences in the use of pronouns, plurals, and some verb tenses. When assessing my students, I realized that the students frequently scored poorly on the speech test because they did not complete the items as stated in the criteria. It was not clear to me whether a child’s score was due to disability or dialectal differences. It is my opinion that there is an overrepresentation of African American children in preschool special education due to differences in dialect. Through the use of focus groups, I hoped to provide evidence, through the perceptions of experts and practitioners, to support the notion that the items in the AEPS Social-Communication test are not biased against preschool children with dialectal differences.

Research Overview

Research Goals

Through the study I set out to examine how experts and practitioners perceive the items of the Social-Communication test of the AEPS. In particular, I wanted to examine experts' and practitioners' perceptions of bias in the test items as related to children who speak a nonstandard dialect of English. The information gained through this pilot study will be used to inform a subsequent study that will examine the content validity of the test. In addition, the study will add to the existing literature base regarding bias in curriculum-based assessments for children who speak a nonstandard dialect of English.

Research Question

The research question to be answered through the study is: What are experts’ and practitioners’ perceptions regarding using the AEPS to assess the social-communication of children who speak a nonstandard dialect of English?

Methods

The study uses an interpretive research design, which is influenced by constructivism. According to Schram (2006), the aim of an interpretivist researcher is to “understand the complex and constructed reality from the point of view of those who live in it” (p. 44). I set out to understand the reality of practitioners who work with a diverse population of children and use the AEPS in their practice, and of experts who provide training to users of the AEPS.

Focus groups were used to examine the perceptions of experts and practitioners regarding using the AEPS to assess the social-communication of children who speak a nonstandard dialect of English. The two groups consisted of practitioner users of the AEPS and AEPS experts. Following the focus groups, the data were analyzed using qualitative methods to identify recurring themes and divergent perspectives. Following analysis, the information will be used to inform a subsequent larger-scale study.

Overall approach and rationale.

A multiple category focus group design was used to gather perceptions from two different audiences: practitioner users of the AEPS and AEPS experts. The use of the multiple category design allows the researcher to make comparisons within one group (e.g., expert to expert) and from one group to another (e.g., expert to practitioner) (Krueger & Casey, 2000). According to Krueger and Casey (2000), focus groups can be useful for product development. The ideas generated during the discussion can be used to learn how the participant “sees, understands, and values a particular topic” (p. 15). In the study, the focus groups were used to gain perceptions regarding using the AEPS to assess the social-communication of children who speak a nonstandard dialect of English. Further, according to Hesse-Biber and Leavy (2006), focus groups are an important method for gaining exploratory qualitative data that can be used to shape future research. The suggested use of focus groups is aligned with the goals of the proposed research.

Participants.

Two focus groups were assembled to discuss the Social-Communication test of the AEPS, using purposeful and convenience sampling methods. Purposeful sampling is used to select participants who are most likely able to answer the research question (Marshall, 1996). The groups were strategically planned to avoid mixing people who may feel they have different levels of expertise or power related to assessment, language development, or the AEPS. The focus groups consisted of four practitioners from the school where I currently work and three AEPS experts. Table 2 describes the characteristics of each of the participants. Krueger and Casey (2009) recommend that noncommercial focus groups consist of 5-8 participants. I chose to use a smaller sample primarily due to my novice transcription skills. To ensure an accurate transcription of the interviews, I felt a smaller sample size was necessary. I also wanted to test my moderating skills with a small sample.

The initial focus group comprised practitioner users of the AEPS who serve children with dialectal differences. Participants used their experiences in the classroom to share their perceptions of the AEPS test items. Findings from the first focus group were used to sensitize the researcher to other areas to explore in order to gain more depth on the topic. The second focus group comprised AEPS experts. Although the AEPS experts may not have experience providing services to children who speak a nonstandard dialect of English, they have extensive experience with the AEPS and were able to provide their perceptions of bias in the test items and overall use of the AEPS.

Table 2

Characteristics of Focus Group Participants

Practitioner users of the AEPS: Early childhood educators who practice in a culturally diverse setting (e.g., Head Start facility, urban early childhood center), have been trained in the use of the AEPS, and have used the AEPS for the purpose of assessment and/or eligibility determination.

AEPS Experts: Highly knowledgeable individuals who provide professional development and technical assistance to users of the AEPS.

To recruit the practitioner users of the AEPS, I chose the early childhood program where I am currently employed. The program was chosen because it has adopted the AEPS as its assessment tool and serves culturally and linguistically diverse children. After the study was explained to the program administrator, early childhood practitioners were invited to participate in the focus group. To identify AEPS experts, I contacted one of the authors of the AEPS for names of individuals in the northeast Ohio area who currently provide training and technical assistance. I directly asked the identified experts to participate in the focus group. Each participant was asked to provide written informed consent after hearing the facilitator describe the study and consent process. The consent form (Appendix A) informed participants that comments made during the focus group would remain confidential and that no names would be included in any reports. The participants were also notified that the session would be recorded and granted consent for the recording.

Data collection.

Prior to the scheduled focus group, each participant received a copy of the Social-Communication test for review. The participants were asked to look at the test items and review the content, structure, and organization of the test. The participants were also instructed to bring the copy of the test with them to use as a reference during the focus group meeting. I served as moderator of the focus groups. According to Krueger and Casey (2000), the moderator of a focus group should have extensive background knowledge of the topic of discussion and be able to place comments in perspective and follow up on critical areas of concern. My extensive knowledge of the AEPS and experience working with children who speak a nonstandard dialect of English led me to believe I would be an appropriate moderator. The focus group took place in a meeting room of the early childhood center where I am employed. The location was chosen because I felt it would be familiar and non-threatening to the participants.

During the pre-session, participants were asked to complete a demographic form (Appendix B). The demographic form asked questions about the participants’ age, gender, education, years of experience, and experience using the AEPS (e.g., amount of training, background knowledge, ongoing professional development). Refreshments were available during the session, and the researcher introduced participants to one another. The participants had approximately 5-10 minutes to engage in small talk before the session began. The time before the focus group allowed the participants to become comfortable with the setting and the other participants.

The session was recorded, and I took notes during the session on statements or themes that I wanted to follow up on without interrupting the participants’ discussion. The recording equipment was placed in plain sight of participants and introduced to them at the beginning of the session. The participants were told the conversation was being recorded to capture everyone’s comments. As the session began, I took 3-5 minutes to introduce the topic and explain the “rules” of the session. I told the participants that there were no correct or incorrect answers; that they should feel free to follow up or comment on the statements of others; that they could agree, disagree, or give examples throughout the session; and that they should not feel they had to respond to every question. Participants who dominated the conversation might be asked to give others a chance to comment, and participants who were too quiet might be called upon directly by the moderator. Table 3 provides a list of questions/topics that were posed during the focus group sessions.

According to Rubin and Rubin (1995), the main questions posed in an interview should cover the overall subject, flow from one to the next, and match the research design. The questions that were developed follow these guidelines. The first few questions address communication development, followed by questions about the AEPS, and finish by inquiring about perceptions of bias. Following the first session, the transcripts were analyzed to determine whether the questions needed to be modified for the second focus group. After review of the transcript, I decided not to modify the questions; I wanted to compare the two groups’ responses to the exact same questions to see if they differed. Follow-up questions were different for the two groups and were based on responses from the participants.

During the focus group, I attempted to remain quiet after asking a question and allowed time for the participants to answer. I took brief notes during the session that were used to generate follow-up questions and to flag comments that might need further attention. I used probes to signal to the participants that more detailed answers or specific examples were needed (Rubin & Rubin, 1995). For example, Rubin and Rubin (1995) suggest the use of a clarification probe to let the participant know it is okay to elaborate.

Upon conclusion of the focus group I provided a brief summary of the main points and asked participants if the summary was accurate. I then invited participants to share additional comments, clarifications, or corrections. The participants were also told that they would receive a copy of the outcomes of the study upon request. I also gained permission to contact the participants if any of the responses/comments required follow-up for clarity after the interview was transcribed.

Table 3

Focus Group Questions/Prompts

1. Discuss the scope of communication development.

2. How does the AEPS test assess communication development?

3. What recommendations do you have regarding the structure of the AEPS test?

4. Discuss the clarity of the test items.

5. The AEPS social-communication test is being widely used in early childhood programs to determine eligibility and shape program planning. How do you view this usage?

6. What are the possible uses for the test?

7. A common criticism of tests is that they are not validated for use with the diverse student body we serve in educational environments. Discuss the appropriateness of the AEPS test for assessing the communication development of children who speak a nonstandard dialect of English.

8. What items in the test could be considered inappropriate?

9. Discuss the range of scores and the use of the scoring notes.

10. How could scoring notes be used to identify items that may be inappropriate for children who speak a nonstandard dialect of English?

Data Analysis.

Following the focus groups, I transcribed the audio recordings of the interviews. I then read the transcripts and took notes of key words, themes, and perspectives that I may not have originally considered. Following the initial read, the “Classic Approach” (Krueger & Casey, 2009) was used to analyze the data contained in the transcripts. Coding consisted of collecting and analyzing statements from the participants. I chose to use the Classic Approach because it allowed me to become intimately involved with the data (Hesse-Biber & Leavy, 2006). I read the responses from the individual participants multiple times and felt very comfortable as I analyzed the data. Corbin and Strauss (2008) describe coding as interacting with the data by asking questions and making comparisons between the data. I believe the Classic Approach allowed me to do this with ease. I followed a five-step analysis process, which Krueger and Casey (2000) outline as follows:

1. Transcripts are cut apart into individual quotes. The source of the quotes will be identifiable through a color-coding process.

2. Each focus group question will be written on a large piece of paper and placed on either a long table or the floor.

3. The quotes will be categorized by question and/or theme. To determine which category a quote “fits” into, the Principal Investigator will read each quote and determine whether it answers the focus group question, adds anything of importance, or is similar to something that was said previously. The goal is to put the quotes into like categories. Quotes that do not answer a question or fit into a category will be set aside and reviewed once all of the quotes have been sorted.

4. Each focus group question will be reviewed, and a descriptive summary will be written of how each group of participants responded to the question. The responses from the different groups will be compared and contrasted. When writing the summary, the frequency of the comments, the specificity of the comments, emotion, and extensiveness (how many different people said the same thing) will be taken into consideration.

5. The questions will then be examined to see what themes are apparent across questions.

During the analysis process, if questions emerged that required follow-up for clarity or expansion on a particular topic, an e-mail was sent to either an individual participant or a group of participants. The e-mail content was then included in the data analysis. Following analysis of the focus group transcripts, a report structured around the focus group questions and themes was written, summarizing the findings. The findings will be used to inform a subsequent study, which will be developed following the pilot study.

Ethical Concerns

Throughout the study, multiple efforts were made to treat the participants ethically. Before the focus group began, informed consent was gained from the participants. I also explained to the participants why I, as the moderator, was taking notes and why the session was being recorded. Following the focus group session, I summarized the main findings to confirm with the participants that I was using their statements in the correct context. I also gave the participants an opportunity to clarify or correct any statements that were made. Through these efforts, I believe I treated my participants ethically throughout the study.

Trustworthiness and Credibility

Throughout the focus group process, several procedures were used to maintain the credibility of the findings. First, the focus group questions were written in a way that was not misleading or biased; I tried not to incorporate my own bias or assumptions into the questions or my commentary. Before conducting the focus groups, I field-tested the questions with a colleague to ensure that they were clear and understandable to others. The colleague read each question and asked clarifying questions as needed. We then discussed my goal for each question to ensure the questions were phrased in a way that could elicit responses aligned with my purpose.

During each focus group I remained aware of the need for neutrality to prevent bias and capture the views of all participants. Field notes and audio recordings were used to capture comments, which were used during data analysis and serve as a component of an audit trail that can be used to confirm the findings of the study. In addition, during the focus groups, participants were asked to clarify any responses that were not completely understood by the moderator. Further, following the last focus group question, I provided a summary of the main points of the discussion and asked the participants if they would like to share additional comments, clarifications, or corrections. According to Lincoln and Guba (1985), these forms of “member check” are crucial for establishing credibility.

In designing the study and preparing to be the moderator of the focus groups, I had to take into consideration my own bias. As I have stated previously, I believe the AEPS is a valid and technically sound assessment that I have found to be useful for determining eligibility, planning instruction, and monitoring progress. When designing and delivering the questions, it was important that I not let my own perceptions of or bias toward the AEPS influence the participants’ responses. The same holds true for analyzing the data.

Findings

Before examining the data, I first reviewed the demographic information obtained from the participants in each of the two focus groups. The first focus group consisted of four preschool intervention specialists from an urban elementary school in Northeast Ohio. The practitioners’ years of experience ranged from 1-30 years, with three of the participants having five or fewer years of experience teaching preschool. Three of the participants hold a Master of Education degree in special education, and one participant holds a Bachelor of Education degree. All four participants are certified to teach children with disabilities ages 3-8. All of the participants have been using the AEPS for an average of three years, either in their practice or in graduate coursework. All four of the participants have completed an initial two-day training on how to use the AEPS, including how to score the items in the test and summarize results. The participants reported that they do not participate in ongoing professional development on the use of the AEPS.

The second group consisted of three professionals who provide training and technical assistance to users of the AEPS. One of the AEPS experts has a doctoral degree in special education and works for the Ohio State Support Team providing technical assistance to school districts that have an early childhood special education program. The other two AEPS experts are doctoral students in early childhood special education. All three of the participants have been extensively trained in the use of the AEPS and have been using it (either in practice or in the delivery of professional development) for at least five years.

Social-communication development

Both of the groups had a common definition of communication development. The practitioners described communication development as “Expressing your thoughts and your feelings” and “Being able to answer questions and express your thoughts verbally or maybe even nonverbally, like pointing or using pictures”. The AEPS experts defined communication development as the “form, content, and use of language”. The groups differed in their responses to how the AEPS assesses communication development. The practitioners discussed how cumbersome the AEPS was to score and how the items really did not cover “functional” communication and did not address the social aspect of language. The AEPS experts discussed the structure of the test and the content of the items. They discussed how the AEPS examines both the receptive and expressive components of communication as well as the composition of a child’s utterances, such as the use of the various parts of speech (e.g., indefinite pronouns and irregular verbs).

Summary of responses related to research question

The primary research question asked how experts and practitioners perceive using the AEPS to assess the social-communication of children who speak a nonstandard dialect of English. Responses from the participants support the notion that the AEPS can be used to assess the social-communication of these children; however, the assessor will need to know what the dialect is and the characteristics of that dialect in order to ensure the validity of the scoring. For example, characteristics of Southern Midland dialect (also known as Appalachian English) include using "done" in place of "did" (e.g., "I done it already" instead of "I did it already") and using "went" instead of "gone" (e.g., "she had went down to the store"). It is up to the assessor to be aware of the dialectal characteristics the child possesses and then make a scoring decision.

I asked the participants how they viewed the appropriateness of the AEPS test for assessing the communication development of children who speak a nonstandard dialect of English. Responses included, "You hear a kids talk and you initially want to correct them, but then you hear the parent talk and you realize they are talking just like the parent. Then you are like, 'oh it is just their environment.'" I followed up by asking the practitioners how they would score the items in a situation like the one described. The practitioners responded that they would score the item as incorrect but would never target the item when developing an intervention plan. When asked if there was anything the assessment could do to gather information about dialect, the participants responded, "Like maybe a few questions the parents could answer in the Family Report or something." The AEPS experts shared how the Family Report is used to gather information about the family's routines and priorities; the parents also score a portion of the AEPS test by answering questions in the Family Report. Both groups of professionals believed it would be beneficial to add questions to the Family Report regarding culture and home language. The AEPS experts were concerned, however, about how a question gathering information about dialect could be posed without being stereotypical or offensive. They discussed how a person can belong to a specific cultural group or live in a specific region and not possess the characteristics of the dialect associated with that culture or region.

Unforeseen Themes

While the focus group data in many ways spoke to my research question, I noticed other important themes as I read and analyzed the data. The two unforeseen themes that emerged centered on professional development and dual language learners.

Lack of training on use of the assessment.

It was interesting to me that several comments made during the focus group interviews led me to believe that there is a lack of training around the use of the AEPS. When I asked the practitioners about the clarity of the items, one participant responded, "I just think that maybe the Speech Language Pathologist should do it because they understand it more and I really do not have time to score all those items". When I asked the practitioners how they view the usage of the AEPS as a program planning assessment, I received the following responses: "A lot of this we doing, just within the curriculum, you know the adjective, describing things, we do those kinds of things every day" and "You can see where he or she is at with their language, but as far as ...I don't know". Based on my extensive training and experience with the AEPS, I am confident that the practitioners could benefit from more training, as they are not using the AEPS the way it was intended (i.e., for assessment, program planning, and performance monitoring). From the responses I received, I am not confident that the practitioners are scoring and interpreting the test results with fidelity.

In a follow-up email I asked the AEPS experts whether they felt the practitioners they work with have adequate training on the use of the AEPS and how much training most practitioners have. One participant responded, "Many of the districts I work with will do an initial day-long training and then a half day follow-up during the first year of using the AEPS. Following that I typically do not provide additional training". The participants were not aware of any school districts that receive ongoing professional development or individual coaching on the use of the AEPS.

Professional development in the form of one-shot workshops or isolated training sessions, in which the learner has a passive role, has been shown to be ineffective in supporting the transfer of new learning into "real life" settings (Deardorff et al., 2007; Dunst & Raab, 2010; Hall, Grundon, Pope, & Romero, 2010). Research on professional development suggests that workshops and group training sessions should be accompanied by ongoing support and follow-up to promote the actual use of new knowledge in the classroom (Deardorff et al., 2007; Dunst & Raab, 2010; Tate, Thomson, & McKerchar, 2005). Ongoing support and follow-up can take the form of coaching, mentoring, and feedback (Dunst & Raab, 2010), with the learner in a more active, rather than passive, role.

Dual language learners.

During both of the focus groups the topic of dual language learners emerged. It first arose in the first focus group while we were discussing how the AEPS could be used to gather information about a family's language and culture. One of the participants said, "I think their needs to be something that asks questions about the home environment and culture. For example, Petar speaks Turkish at home. Right now he is not able to do some of the items, but it is because he does not speak English at home". I had thought about dual language learners as an additional population that could be affected by bias in the assessment items, but decided not to focus my research on this population at this time.

During the second focus group I asked a probing question to see if the participants felt there were any other populations that could be affected by social communication bias. One of the participants responded, “I work with a few school districts that have a large population of dual language learners, primarily Spanish and Croatian speaking families. The AEPS should be scored using responses from the child in the language they are most comfortable with. The purpose of the test is to assess communication, not necessarily English use”. Another participant added, “The question is if you are not familiar with the family how do you go about gathering that information about home language use? It would be helpful if the Family Report had questions that would facilitate a dialogue about what languages are spoken in the home”.

Current research and literature on dual language learners support the notion that it is difficult to obtain an accurate assessment of children who are learning two languages (Ballantyne, Sanderman, & McLaughlin, 2008). It is important for educational staff to obtain information about a child's home language use before an assessment occurs.

Discussion and Suggestions for Future Research

As I reflect on the responses given by the participants during the focus group interviews, I have begun to think that I may need to look at my research questions differently. I have also done additional research on assessments and dialect and have begun to realize that although the research question can be answered, the problem cannot be fixed; that is, it would be nearly impossible for an assessment to take into consideration all of the dialects of English. It is really up to the assessor to be knowledgeable about the cultural and linguistic differences that influence a child's responses. As I plan how I will proceed with the study following my pilot study, I am now thinking that I would like to change my research question to ask teachers how they feel culture and ethnicity impact the assessment process. Further, I would like to know how practitioners believe they can gather information on culture and how that information can impact later scoring decisions.

Before I begin to plan a subsequent study examining the content validity of the AEPS, I feel it is important to examine the effectiveness of the professional development methods currently being used to train practitioners. Many of the practitioners' responses lead me to believe that they are not using the AEPS as intended. Before a study can be conducted examining the validity of using the AEPS with various populations, it must first be established that the test is being administered and used with fidelity.

There are also several technical components of my study that I felt could have been improved upon. When I begin to plan the subsequent study I will use multiple research methods. Hesse-Biber and Leavy (2006) suggest using focus groups to identify key issues, ideas, and concerns and then using in-depth interviews or a quantitative survey as the primary method of data collection. After analyzing the data, I do not feel that saturation was achieved through the focus group interviews. However, some themes did develop that I would like to explore in greater detail, including themes the participants brought up that I had not originally considered. Although I did follow up with a few clarifying questions, if time had permitted I would have liked to develop a new set of interview questions and conduct in-depth interviews with each of the participants.

Although the findings from the study were not as robust as I would have desired, I believe I learned a great deal about my topic and, more importantly, about myself as a researcher. The findings from the study and the lessons learned regarding research design and implementation will be invaluable as I plan future research and prepare for my dissertation.

Reflection

I found the process of planning and carrying out a qualitative research study very valuable. Throughout my doctoral program I have had minimal exposure to qualitative research, primarily because the field of special education is heavily grounded in quantitative research. The "qualitative" research that I have read has consisted primarily of descriptive studies or basic interpretive research without a grounded theoretical framework.

I found myself lost and completely overwhelmed when trying to decide what theoretical framework to ground my study in. During our weekly course discussions I often felt like I was one step behind my peers who had more experience with qualitative research. I have always struggled to understand philosophical perspectives; I believe that is why I have found myself drawn to behavioral theories and quantitative research. However, throughout the semester I tried to keep an open mind and worked hard to understand the content to the best of my ability. I feel the competencies I gained by completing a qualitative study will be valuable as I move into my dissertation. Although I feel more comfortable with quantitative methodology, I can now speak firsthand to the richness and humanistic aspects that qualitative methods add to a study.

Through this process I learned valuable lessons for my future research in the area of cultural and linguistic diversity. For example, I now believe I was looking at my topic through the wrong lens and I understand that sometimes you have to plan to use more than one method of data collection to gain the information you are looking for.

I also learned a lot about myself as a researcher and interviewer. I do not believe my focus groups went as well as I had planned. I used the same questions for both focus groups because I wanted to compare the two groups' answers to the exact same questions; however, the follow-up questions used were slightly different for the two groups. I felt that I rushed both of my interviews and did not allow the participants enough time to process my questions. The silence following a question was unbearable for me, even though it was only momentary. I need to develop strategies for giving participants adequate time to formulate a response. I also found myself constantly rephrasing the questions, which, upon review, were leading at times. I need to continue to practice and refine my interview skills.

I also did not know how to handle some of my participants' responses; for example, participants would say "you know how it is" or "I don't have to tell you that". Rubin and Rubin (1995) suggest the use of a clarification probe to let the participant know it is okay to elaborate. Before I conduct another interview I think it would be helpful to practice using the different types of probes. Given that this was my initial attempt at conducting a focus group and interviewing, I chose not to use a research assistant; I felt the group was small enough that I would be able to ask the questions and take notes without assistance. I can see now how having a second set of ears, not focused on the questions, would have been helpful. I was so focused on the questions that I am not sure I truly listened to the responses. After reviewing the transcripts I found several places where I would have liked to ask follow-up questions to gain a deeper understanding and expand the discussion. For example, I should have followed up on comments made during the first interview regarding having a Speech-Language Pathologist complete the test. This may have shed more light on what type of professional development is necessary to improve the fidelity of implementation of the test.

I also learned that it is much easier to plan research than to carry it out! I laid out exactly how I anticipated the focus group interviews would play out; however, I was not able to execute the interviews as well as I would have liked. I deviated from the plan in everything from the planning of the interviews and the number of participants to the moderation of the interviews themselves. I have even begun to question how I am looking at my overall purpose and research question. Even though my results were not exactly what I had anticipated, the new ideas I now have for future research are very exciting and make me eager to begin planning my dissertation!

References

Ballantyne, K. G., Sanderman, A. R., & McLaughlin, N. (2008). Dual language learners in the early years: Getting ready to succeed in school. Washington, DC: National Clearinghouse for English Language Acquisition.
Bricker, D. (Series Ed.). (2002). Assessment, evaluation, and programming system for infants and children (2nd ed., Vols. 1-4). Baltimore, MD: Brookes.

Cheatham, G. A., Armstrong, J., & Santos, R. M. (2009). "Y'all listenin": Accessing children's dialects in preschool. Young Exceptional Children, 12(4), 2-14.

Corbin, J., & Strauss, A. (2008). Basics of qualitative research (3rd ed.). Los Angeles, CA: Sage.

Deardorff, P., Glasenapp, G., Shalock, M., & Udell, T. (2007). TAPS: An innovative professional development program for paraeducators working in early childhood special education. Rural Special Education Quarterly, 26(3), 3-15.

Division for Early Childhood (2007). Promoting positive outcomes for children with disabilities: recommendations for curriculum, assessment, and program evaluation. Missoula, MT: Author.

Dunst, C.J., & Raab, M. (2010). Practitioner’s self-evaluations of contrasting types of professional development. Journal of Early Intervention, 32(4), 239-254.

Dyson, A. H., & Smitherman, G. (2009). The right (write) start: African American language and the discourse of sounding right. Teachers College Record, 111, 973-998.

Galagan, J. E. (1985). Psychoeducational testing: Turn out the lights, the party's over. Exceptional Children, 52(3), 288-299.

Grant, S. D., Oka, E. R., & Baker, J. A. (2009). The culturally relevant assessment of Ebonics-speaking children. Journal of Applied School Psychology, 25, 113-127.

Hall, L.J., Grundon, G.S., Pope, C., & Romero, A.B. (2010). Training paraprofessionals to use behavioral strategies when educating learners with autism spectrum disorders across environments. Behavioral Interventions, 25, 37-51.

Harper, F., Braithwaite, K., & LaGrange, R. (1998). Ebonics and academic achievement: The role of the counselor. Journal of Negro Education, 67 (1), 25-34.

Hesse-Biber, S. N., & Leavy, P. (2006). The practice of qualitative research. Thousand Oaks, CA: Sage.

Krueger, R. A., & Casey, M. A. (2000). Focus groups: A practical guide for applied research (3rd ed.). Thousand Oaks, CA: Sage.

Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied research (4th ed.). Thousand Oaks, CA: Sage.

Landa, R.J. (2005). Assessment of social communication skills in preschoolers. Mental Retardation and Developmental Disabilities, 11, 247-252.

Lincoln, Y., & Guba, E. G. (1985). Establishing trustworthiness. In Y. Lincoln & E. G. Guba, Naturalistic inquiry. Beverly Hills: Sage Publications.

Marshall M. N. (1996). Sampling for qualitative research. Family Practice, 13(6), 522-525.

McLean, M. (2000). Conducting child assessments. (Culturally And Linguistically Appropriate Services Early Childhood Research Institute Technical Report #2.) Champaign, IL: University of Illinois at Urbana-Champaign.

National Association for the Education of Young Children. (1995). Responding to linguistic and cultural diversity: Recommendations for effective early childhood education. Retrieved from

National Council of Teachers of English. (2005). Supporting linguistically and culturally diverse learners in English education. Retrieved from

National Clearinghouse for English Language Acquisition. (2007). The growing numbers of limited English proficient students: 1995/96-2005/06. Washington, DC: Author. Retrieved from http://ncela.gwu.edu/policy/states/reports/statedata/2005LEP/GrowingLEP_0506.pdf

Pretti-Frontczak, K.L. (2002). Using curriculum-based measures to promote a linked system approach. Assessment and Effective Intervention, 27(4), 15-21.

Rubin, H. J., & Rubin, I. S. (1995). Qualitative Interviewing. Thousand Oaks: Sage.

Seymour, H.N. (2004). The challenge of language assessment for African American English-speaking children: a historical perspective. Seminars in Speech and Language, 25(1), 3-12.

Skiba, R.J., Simmons, A.B., Ritter, S., Gibb, A.C., Rausch, M.K., Cuadrado, J. & Chung, C. (2008). Achieving equity in special education: history, status, and current challenges. Exceptional Children, 74(3), 264-288.

Tate, T.L., Thomson, R.H., & McKerchar, P.M. (2005). Training teachers in an infant classroom to use embedded teaching strategies. Education and Treatment of Children, 28(3), 206-221.

Teachers of English to Speakers of Other Languages. (1997). Position statement of the TESOL board on African American Vernacular English. Retrieved from

U.S. Department of Health and Human Services. (2007). Head Start Program Information Report, PY 2006-2007. Retrieved from

Appendix A

Participant Consent Form
Informed Consent to Participate in a Research Study

Study Title: An Examination of the Social-Communication Test of the AEPS, Second Edition

Principal Investigators: Teresa Brown

You are being invited to participate in a research study. This consent form will provide you with information on the research project, what you will need to do, and the associated risks and benefits of the research. Your participation is voluntary. Please read this form carefully. It is important that you ask questions and fully understand the research in order to make an informed decision. You will receive a copy of this document to take with you.

Purpose:

The purpose of this project is to gain practitioners' and AEPS experts' perceptions of the items contained in the Social-Communication Test of the Assessment, Evaluation, and Programming System (AEPS), Second Edition.

Procedures

Participants will engage in a focus group discussion moderated by the principal investigator. Participants will be asked to review the transcripts following the focus group for accuracy or to clarify any points of discussion.

Audio and Video Recording and Photography

See attached video recording consent form.

Benefits

Participants will benefit from participation because they will be sharing their opinions and perceptions that may help to shape future editions of the AEPS and recommended practices for early childhood assessment.

Risks and Discomforts

The risks associated with the proposed project are minimal. You will be asked to share your perceptions of the Social-Communication Area of the AEPS during a focus group interview. If you are unsure how to answer a question, it could lead to personal feelings of inadequacy. This is not anticipated to be greater than what would typically occur in a school setting.

Privacy and Confidentiality

Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location in White Hall, Kent State University, and only the researchers will have access to the data. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used. Because you are participating in a focus group, confidentiality cannot be assured; however, all participants are asked to keep the discussions confidential.

Your research information may, in certain circumstances, be disclosed to the Institutional Review Board (IRB), which oversees research at Kent State University, or to certain federal agencies. Confidentiality may not be maintained if you indicate that you may do harm to yourself or others.

Compensation

Not applicable

Voluntary Participation

Taking part in this research study is entirely up to you. You may choose not to participate, or you may discontinue your participation at any time without penalty or loss of benefits to which you are otherwise entitled. You will be informed of any new, relevant information that may affect your health, welfare, or willingness to continue your study participation.

Contact Information

If you have any questions or concerns about this research, you may contact Alicia Crowe at 330-672-0634. This project has been approved by the Kent State University Institutional Review Board. If you have any questions about your rights as a research participant or complaints about the research, you may call the IRB at 330.672.2704.

Consent Statement and Signature

I have read this consent form and have had the opportunity to have my questions answered to my satisfaction. I voluntarily agree to participate in this study. I understand that a copy of this consent will be provided to me for future reference.

________________________________ _____________________

Participant Signature Date

VIDEO/AUDIO CONSENT FORM

An Examination of the Social-Communication Test of the AEPS, Second Edition.

Principal Investigator: Teresa Brown
I agree to be videotaped during the focus group discussions. I understand that the content of the videotape will be transcribed for purposes of this research project and that I will have the opportunity to review the transcript. The contents of the videotape will be destroyed after transcription.

[ ] Yes, I would like to review the transcript of the videotape.

[ ] No, I do not wish to review the transcript of the videotape.

_____________________________________ ________________________________

Signature Date

Appendix B

Participant Demographic Form

1. Circle your age group:

a. 20-25 b. 26-30 c. 31-35 d. 36-40 e. 41-45 f. 46+

2. Circle your highest level of education

a. Associate’s Degree

b. Bachelor’s Degree

c. Master’s Degree

d. Doctoral Degree

3. Circle your years of professional experience in education

a. less than 5 b. 5-10 c. 11-15 d. 16-20 e. 21-25 f. 26+

4. How long have you been using the AEPS?

5. How much training have you had on use of the AEPS?

6. How much ongoing professional development do you have on using the AEPS?

7. How often do you use the AEPS and for what purposes?

Appendix C

Partial Transcript of Focus Group 1

I: When you think of communication development, what does that mean to you? What do you think communication development is? What are the components of it?

K: Labeling items in your environment, communicating your wants and needs

B: Yeah, I was just going to say that

B: Expressing your thoughts and your feelings

T: Me too

B: Building on...um starting with small steps and then lengthening your sentence structure and building onto that at the child develops

A: Being able to answer questions and express your thoughts verbally or maybe even nonverbally, like pointing or using pictures

B: Signing too

T: Helping to build relationships

I: The social component?

T: Yeah

I: How does the AEPS test assess communication development?

B: Seems like it breaks it down into the small portions, umm, all the components that make up part of communication, but I do not see a lot of social within it.

T: I always thought it broke it down almost too much...some of things don’t seem like it is geared toward functional communication such as...uses auxiliary verbs...that really doesn’t tell us too much.

K: Like it says uses the “to be” what three year old says, “to be”

A: Yeah it looks at every part of speech, but not really how you use your speech

I: What recommendations do you have regarding the structure of the AEPS test? Do you think things about the structure could be improved? How it is set up and how you actually score the items.

T: I have a really hard time using the summary form...there is so much on it. It is so hard throughout the day to say, “oh I think I heard this”

B: Yeah, I agree the other portions are not so hard...if you are able to get those items. It just seems like it is overwhelming or maybe repetitive...you have already scored it and then you go back and summarize.

I: Do you think the items are clear?

B: Some of them I didn’t know what some of them were like indifferent pronouns- I didn’t know what those were

T: like the auxiliary verbs

K: This is more of communication rather than social communication and I just think that maybe the SLP should do it because they understand it more and I really do not have time to score all those items.

I: The AEPS is being widely used in early childhood programs to determine eligibility and shape program planning. When you look at the social-communication area, how do you view this usage? How can you take this information and use it to decide what you are going to do with a child in your classroom?

K: You can see where he or she is at with their language, but as far as ...I don’t know

B: Some of them I would use for goals. For example “uses WH question”. I wouldn’t like Keri was saying I don’t think I would not use thinks like the pronouns I would not be that specific in my goal. I don’ know. Like Terri was saying I don’t think a lot of it leads to functional communication.

B: A lot of this we doing, just within the curriculum, you know the adjective, describing things, we does those kinds of things every day.

I: What are the possible uses for the test?

I: Like you guys just said, you would go through the items and pick out the things the child could not do?

B: That’s how I would use it.

A: I use it to see what items the child has not mastered and then decide if it is something that should be worked on more or addressed in the IEP.

K: I would probably talk to the speech person and maybe go over the results and say here is what it says for past tense or pronouns...what do you think?

B: There is nothing there on initiating conversations

K: there was something...maybe not exactly initiating

B: How do they go up and get involved with a group of children

K: Here it is...responds to topic initiations from others

B: It does not really say anything about how they go and get involved with a group of children.

I: A common criticism of tests is that they are not validated for use with the diverse student body we serve in educational environments. What do you think of the appropriateness of the AEPS test for assessing the communication development of children who speak a nonstandard dialect of English? Do you think there are items in the test that are bias for kids that do not speak a nonstandard dialect of English?

K: Yes, I think a lot of it is you hear a kids talk and you initially want to correct them, but then you hear the parent talk and you realize they are talking just like the parent. Then you are like “oh it is just their environment”.

T: Right

A: Especially if you know the family and there are siblings and they all seem to talk the same way.

I: So, is it an inability to use the words or is it just their environment? Is it a disability or culturally related?

T: I think it is culturally related

K: I think it is culturally too... they are with us 2.5 hours a day and then they leave us and go home and if that is what they hear when they get home then that is the way they are going to talk.

I: If you score the items the way it is suppose to be scored and you know there are items that may be biased like some of the pronouns or verb tenses and you scored them incorrect then what would you do? Would you target those items on an IEP?

T: I think it would depend if it is hindering their communication

K: Like you were saying are they absolutely not able to do it or is just their environment or situation? I think you have to take that into consideration.

B: I think I would just model proper communication and then maybe correct them “you say it like this”, but I would never target it.

A: I agree

I: Do you see this applying to some of your students?

B: Oh yeah, for sure especially “him/her” and if they are five by now we have modeled it for a few years and if they still do not get it then that is probably what is at home and in there environment. As long as we keep modeling appropriate language...

I: what items do you think may be an issue?

B: Pronouns and verb tensing

A: Yeah

I: Do you think there is anything that the assessment can do...anything with the scoring to indicate that these items could be bias?

T: Maybe some type of symbol, like an asterisk to indicate that the item may be bias.

K: Maybe it should have a portion about their home environment. Like I know when the SLP is talking with parents during a meeting she will say I notice that you speak this way and I know it may not be right of her to say, but she does bring it up. Although that sometimes offends the parents.

I: How can you gather that information in a more socially appropriate way?

B: Just talk to the parent you will pick it up.

K: I think their needs to be something that asks questions about the home environment and culture. For example, Petar speaks Turkish at home. Right now he is not able to do some of the items, but it is because he does not speak English at home.

I: That is another concern I have is dual language learners

K: I think there should be a way to record that information.

A: Like maybe a few questions the parents could answer in the Family Report or something.