Soc 2155 Review for Final Exam

I. Ethics/Human Subjects

A. Code of ethics

1. Voluntary participation (why have it? What is the downside from a science standpoint?)

2. Harm to participants

• Psychological (Milgram) or physical (Tuskegee)

3. Informed consent

• Anonymity and Confidentiality

• Deception/debriefing

4. Analysis and reporting (don’t fudge your data)

B. IRBs (what are they, what do they do, where is the IRB for UMD?)

II. Field Research

A. Field research

1. Define/understand what it means (an “insider’s view”)

2. In context of validity and reliability

3. The “Chicago School” as sociological field research approach

B. Types of social life appropriate for field research:

• Relationships

• Groups

• Organizations

• Settlements (villages, ghettos, neighborhoods)

C. The Role of the Researcher (observer)

1. From non-participant (unobtrusive research) to complete participant.

2. Complete participant: join an organization you want to study

• People see you as a participant, not as a “researcher.”

• Must “walk the walk” and “talk the talk”

• Ethics versus scientific value (a “deceptive” technique).

o “Going Native”

D. Limitations (drawbacks) of this type of research

• Reflexivity

E. Relations to subjects

1. Pretend to join versus “getting swept up”

2. Social problems, power differences

F. Qualitative Research Methods (what do each involve, examples of)

1. Naturalism

▪ Ethnography

2. Ethnomethodology

3. Grounded theory

4. Case studies/extended case method

5. Institutional ethnography

6. Participatory action research

G. Conducting qualitative research

1. Preparing for the field (key informants, establish rapport…)

2. Qualitative interviewing (difference between this and survey interviews)

• The miner versus the traveler

• Responses to initial questions shape future questions

• Probing, “Guided conversation,” “socially acceptable incompetent”

3. Focus groups (what they are; upside, downside)

4. Recording observations

III. Unobtrusive Research

A. What is it? General upside?

B. Content Analysis

1. Examples of “human communication”

2. Units of analysis versus sampling units

• Sampling techniques

3. Coding

• Manifest and latent coding (see the sketch after this list)

• Conceptualization / Creating Code Categories (rules)

4. Strengths and weaknesses of content analysis
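For a concrete feel for manifest coding, here is a minimal Python sketch; the texts and code categories are invented for illustration. It counts explicit keyword occurrences, whereas latent coding would require a coder’s judgment about each text’s underlying meaning.

# Minimal sketch of manifest coding: count explicit occurrences of
# predefined keywords in each text. (Texts and categories are invented.)
texts = [
    "Crime is out of control; we need tougher crime laws.",
    "Education funding, not punishment, reduces crime.",
]
code_categories = {
    "crime": {"crime", "criminal"},
    "education": {"education", "school"},
}

for i, text in enumerate(texts, start=1):
    words = [w.strip(".,;!?") for w in text.lower().split()]
    counts = {cat: sum(w in kws for w in words)
              for cat, kws in code_categories.items()}
    print(f"Text {i}: {counts}")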

C. Analyzing “existing statistics”

1. Different from secondary data

• “official” statistics

2. Durkheim’s analysis of suicide

3. The “Decommodification” index

4. Units of analysis: typically macro level data

• Danger of leaping to conclusions about individuals (ecological fallacy; see the numeric sketch at the end of this section)

5. Validity and Reliability:

o Dependent upon what has been collected and how it was collected

o Do the statistics accurately report what they claim to report?
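The ecological fallacy is easiest to see with numbers. Below is a toy Python sketch with invented data: across region averages the correlation between X and Y is positive, yet within every region the individual-level correlation is negative, so a conclusion about individuals drawn from the aggregate data would be exactly backwards.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation, computed by hand for a self-contained example."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented (x, y) pairs: within each region, higher x goes with LOWER y.
regions = {
    "Region A": [(10, 5), (11, 4), (12, 3)],  # high x, high y overall
    "Region B": [(1, 2), (2, 1), (3, 0)],     # low x, low y overall
}

for name, pairs in regions.items():
    xs, ys = zip(*pairs)
    print(name, "individual-level r =", round(pearson(xs, ys), 2))  # -1.0

agg_x = [mean(x for x, _ in pairs) for pairs in regions.values()]
agg_y = [mean(y for _, y in pairs) for pairs in regions.values()]
# With only two aggregate points r is trivially +1 or -1; the SIGN is the point.
print("aggregate-level r =", round(pearson(agg_x, agg_y), 2))       # +1.0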

D. Comparative/Historical Research

1. Examples:

o Auguste Comte

o Karl Marx

o Max Weber

2. Data sources

IV. Qualitative Data Analysis

A. Continuing “back and forth” between theory and data collection/analysis.

B. Although qualitative research is sometimes purely for descriptive purposes, most of the time there is an explanatory purpose: researchers are looking for some sort of pattern in their data.

• Frequencies (how often does X happen)

• Magnitude (level of X)

• Structures (different types of X)

• Causes (causes of X, is X more common among some than others?)

• Consequences (how did X impact other factors)

C. Cross-case versus variable-oriented analysis

D. Grounded theory method

1. Constant comparison method

E. Semiotics

1. The “science of signs”

2. Goffman study

3. Latent meaning

F. Conversation Analysis

1. Assumptions

G. Qualitative Data Processing

1. Process of coding and retrieving

• Often, though, you are coding portions of a text (a transcript, book text, etc.).

2. Coding units

3. The process of coding text

• Old = codes written on the margin of the text

• Now = computer programs analogous to SPSS for doing this (e.g., NUD*IST or NVivo); see the sketch at the end of this section.

4. Creating codes

• Open Coding

• Hierarchical codes

5. Memoing

• Code Notes, theoretical notes, operational notes

6. Concept mapping
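A minimal sketch of the code-and-retrieve operation that NVivo-style programs perform at scale, using invented interview segments and code names. Hierarchical codes could be represented as slash-delimited paths (e.g., "barriers/communication").

# Each segment: (location tag, text, list of codes). Data are invented.
segments = [
    ("I1-07", "I felt the training sessions were a waste of time.",
     ["program_doubt"]),
    ("I1-12", "My supervisor never explained the new rules.",
     ["communication", "power"]),
    ("I2-03", "The sessions helped me find work right away.",
     ["program_benefit"]),
    ("I2-09", "Nobody asked us what we actually needed.",
     ["communication", "program_doubt"]),
]

def retrieve(code):
    """Retrieve every (location, text) pair filed under `code`."""
    return [(loc, text) for loc, text, codes in segments if code in codes]

for loc, text in retrieve("program_doubt"):
    print(loc, "->", text)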

V. Evaluation Research

A. Refers to a purpose rather than a specific method

B. Why evaluation research has become much more popular of late

C. Evaluation is a form of “applied research”

1. But—connection to basic research

D. Types of Evaluation Research

1. Needs assessment

2. Cost-Benefit study

3. Monitoring studies

4. Program evaluation

• Process versus outcome study

E. Measurement

1. What is the program supposed to do? What is the expected outcome?

2. It is essential that the researcher and program folks reach agreement on what outcomes will be observed and how they will be measured.

a. Are these outcomes amenable to measurement?

b. Can the data for these outcomes be obtained?

F. Specifying Interventions (Intermediate Objectives)

1. Was the program/intervention implemented as intended?

• Did people actually attend job training sessions?

• Did the session proceed as envisioned?

• Were certain program providers more effective at “doing” the program?

2. Who are the participants? (What is the population of interest?)

• The intervention/program might work for only some of the people

G. Operationalizing success and failure:

1. The “how much is enough” question

2. Cost-benefit analysis as an answer to this issue:

▪ E.g., “three strikes” sentencing versus childhood prevention or rehabilitation programs.

3. Ultimately, the criteria for “success” or “failure” are a product of agreement.

H. Evaluation Research Designs

1. Experimental designs

• What is the classical experimental design? (A simulation sketch follows this list.)

• Problems getting random assignment?

• Ethical issues with regard to “placebo?”

2. Getting inside the “black box”

a. Need to actually measure what happens in the program/intervention.

• Therapy provided? Did clients show up? If different therapists are involved, did all therapists do essentially the same thing?

3. Quasi-experimental designs

4. Time Series

• Problem with that? (Rise or drop due to something else?)

• Potential solution = “multiple time series” design

5. Qualitative Evaluations (example)
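As a refresher on the classical design, here is a small Python simulation with invented scores: participants are randomly assigned to experimental and control groups, both are pretested and posttested, and only the experimental group receives the stimulus. The comparison of interest is the difference in pretest-to-posttest change.

import random

random.seed(1)  # reproducible for illustration

participants = list(range(20))
random.shuffle(participants)
treatment, control = participants[:10], participants[10:]  # random assignment

# Invented scores: the stimulus adds about 5 points for the treatment group.
pre = {p: random.gauss(50, 5) for p in participants}
post = {p: pre[p] + random.gauss(5 if p in treatment else 0, 2)
        for p in participants}

def mean_change(group):
    return sum(post[p] - pre[p] for p in group) / len(group)

print("experimental group change:", round(mean_change(treatment), 1))
print("control group change:     ", round(mean_change(control), 1))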

I. Logistical Problems

1. Evaluation research tends to be “messy” (why)

2. Administrative Control

J. Ethical Issues

1. Sometimes the intervention itself raises ethical issues

2. Control group members

3. Unethical behavior in the guise of evaluation research

• The Tuskegee, AL syphilis study conducted on poor black men

4. Program evaluation versus organizational needs (more a matter of “conflict,” but one that can lead to ethical dilemmas).

• What happens if you get to know all of the people you are evaluating, become friends with them, and then find out that the program doesn’t work? People’s livelihoods are at stake here.

K. Use of Research Results

1. Why are research results ignored? Why recommendations not followed?

• Not presented in a way that non-scientists can understand

• “True believers” (Results contradict deeply held beliefs)

• Vested interests

VI. Quantitative Data Analysis

A. Descriptive Statistics

1. Purpose, types (dispersion and central tendency, specific measures of each)
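A quick sketch of the standard measures, using Python’s built-in statistics module on an invented list of exam scores:

import statistics as st

scores = [62, 75, 75, 80, 84, 88, 91, 95]  # invented exam scores

# Central tendency
print("mean  :", st.mean(scores))    # 81.25
print("median:", st.median(scores))  # 82.0
print("mode  :", st.mode(scores))    # 75

# Dispersion
print("range :", max(scores) - min(scores))   # 33
print("stdev :", round(st.stdev(scores), 2))  # sample standard deviation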

B. Bivariate Statistics

1. Contingency Tables

• Where appropriate (level of measurement)

• Construction (where do IV and DV go?). Be able to sketch one out given two variables (and variable attributes).

2. Inferential Statistics

• Why needed? (sampling error)

• Research versus null hypothesis; always test null hypothesis

• Chi-square: how it is used

• Sampling distribution

• “p value” from SPSS output (what exactly does it mean)

• Be able to interpret a contingency table and Chi-square SPSS output
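To tie the pieces together, here is a small Python sketch (assuming scipy is installed; the counts are invented). It follows one common convention for contingency tables: the independent variable in the columns, the dependent variable in the rows.

from scipy.stats import chi2_contingency  # assumes scipy is installed

#               men  women   <- IV (columns)
observed = [[40, 60],        # favor   <- DV (rows)
            [60, 40]]        # oppose

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# The p value: the probability of a relationship at least this strong
# arising from sampling error alone, if the null hypothesis (no
# relationship in the population) were true. A small p (conventionally
# < .05) leads us to reject the null hypothesis.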
