Exam 3 Study Guide

School: Arizona State University
Course: 290
Subject: Psychology
Date: Dec 6, 2023
Type: docx
Pages: 6
Uploaded by: BailiffField3321
1. Distinguish measured from manipulated variables in a study. Measured variable: records of behavior or attitudes (self-reports, behavioral observations, physiological measures); no manipulation, just recording what happens. Manipulated variable: a variable the researcher controls, assigning participants to its different levels.

2. Identify an experiment's independent, dependent, and control variables. Independent variable: the manipulated (causal) variable. Dependent variable: the outcome variable; how the participant responds on the measured variable. Control variable: any variable an experimenter holds constant on purpose.

3. Use the three causal criteria to analyze an experiment's ability to support a causal claim. Covariance: do the results show that the causal variable is related to the effect variable? Are distinct levels of the independent variable associated with different levels of the dependent variable? Temporal precedence: does the study design ensure that the causal variable comes before the outcome in time? Internal validity: does the study design rule out alternative explanations for the results?

4. Explain why control variables can help an experimenter eliminate design confounds. They keep those variables the same for all participants, allowing researchers to separate one potential cause from another and thus eliminate alternative explanations for the results.

5. Describe random assignment and explain its role in establishing internal validity. The use of a random method (e.g., flipping a coin) to assign participants to the different experimental groups; it ensures every participant has an equal chance of being in each group. Random assignment avoids selection effects and produces fairly even distributions of participant characteristics across groups.

6. Describe matching, explain its role in establishing internal validity, and explain situations in which matching may be preferred to random assignment. Matching is still combined with random assignment and ensures that the groups are equal on some important variable (e.g., IQ) before the independent variable is manipulated, which takes care of selection effects. Matching may be better when n is small and shared characteristics are more noticeable (e.g., 6 people with high IQ → 3 in Group 1, 3 in Group 2); with larger samples, simple randomization is preferable.

7. Describe how the procedures for independent-groups and within-groups experiments are different, and explain the pros and cons of each design. Independent groups: each participant is placed into one level of the IV and sees only that level; the levels are independent of one another. This avoids order effects but requires more participants and relies on random assignment to rule out selection effects. Within-groups: each participant experiences all levels of the IV. This requires fewer participants and controls for individual differences, but it introduces potential order effects.

8. Identify posttest-only and pretest/posttest designs and explain when researchers might use each one. Posttest-only: used when a pretest might have an adverse effect on the results by sensitizing participants to the purpose of the study. Pretest/posttest: used when you need to compare each group's DV before and after the manipulation — when the DV might change over time, when you are worried about attrition, or when you have a small group and want a matched-pairs design.

9. Explain the difference between concurrent-measures and repeated-measures designs. Concurrent measures: participants are exposed to all levels of the independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable. Repeated measures: participants are measured on the dependent variable more than once — that is, after exposure to each level of the independent variable.

10. Describe counterbalancing and explain its role in the internal validity of a within-groups design. The levels of the independent variable are presented to participants in different orders (e.g., some mothers played with their own child first, others played with another's child first, then switched). Varying the order cancels out order effects, protecting internal validity.

11. Interrogate the construct validity of the measured variable in an experiment. For example, you could ask how well the researchers measured their dependent variable, perceived temperature. In that study, the researchers measured how cold people felt by asking them to estimate the temperature of the experimental room. The fact that people's estimates varied in the predicted way (the room felt colder to those who remembered being rejected) is some evidence for the measure's construct validity.

12. Interrogate the construct validity of a manipulated variable in an experiment, and explain the role of manipulation checks and theory testing in establishing construct validity. How well did the researcher manipulate the independent variable? Manipulation check: an extra dependent variable inserted into an experiment to confirm that the manipulation worked (e.g., after manipulating anxiety by telling participants they will have to give a speech, asking how anxious they feel). Theory testing: experiments are designed to test theories, so interrogating the construct validity of an experiment requires evaluating how well the measures and manipulations capture the conceptual variables in the theory.

13. Interrogate two aspects of external validity for an experiment (generalization to other populations and to other settings). Generalizing to other people: check whether random sampling was used. Generalizing to other situations: consider whether the present results are consistent with prior research in other settings.

14. Explain why experimenters usually prioritize internal validity over external validity when it is difficult to achieve both. They want a clean manipulation, even though being in a lab may make participants' situation unrepresentative. They sacrifice real-world representativeness for internal validity in order to find results free of confounds.

15. Identify effect size d and statistical significance, and explain what they mean for an experiment. Statistical significance tells you the results were probably not drawn by chance from a population in which there is no difference between the groups. Effect size: some studies use large samples, so even tiny differences can be statistically significant; asking about effect size helps you evaluate the strength of the covariance (the degree to which the two variables go together). Effect size d tells you how far apart the two experimental groups are on the DV. When d is larger, it usually means the IV caused the DV to change for more of the participants; when it is small, the scores of the two groups overlap more. Benchmarks: d = 0.20 small/weak; d = 0.50 medium/moderate; d = 0.80 large/strong.
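The d benchmarks above can be put into practice with the standard pooled-standard-deviation formula for Cohen's d; here is a minimal sketch in Python (the scores are invented for illustration):

```python
# Cohen's d: standardized mean difference between two independent groups.
# The data below are made-up example scores, not from any real study.
import statistics

def cohens_d(group1, group2):
    """(M1 - M2) divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment = [5, 6, 7, 6, 8]
control = [4, 5, 5, 6, 4]
d = cohens_d(treatment, control)  # ≈ 1.6: well above the 0.80 "strong" benchmark
```

A larger d here means the treatment and control score distributions overlap less, matching the interpretation in item 15.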
16. Review three threats to internal validity: design confounds, selection effects, and order effects. Design confound: an alternative explanation for results in a poorly designed study — another variable systematically varies with the IV. Selection effect: an alternative explanation arising because the IV groups contain different kinds of people (independent-groups designs). Order effect: the outcome may have been caused by the order in which the levels of the IV were presented (within-groups designs).

17. Identify the following nine threats to internal validity: history, maturation, regression, attrition, testing, instrumentation, observer bias, demand characteristics, and placebo effects. Maturation: a change in behavior that emerges more or less spontaneously over time. History: an external event that affects most or all members of the study at the time of the treatment. Regression: when performance is extreme at Time 1, it is likely to be less extreme at Time 2 (closer to typical or average performance). Attrition: when a certain kind of participant drops out of a study for some systematic reason. Testing: a kind of order effect in which participants change as a result of having been tested before. Instrumentation: when a measuring instrument changes over repeated use (e.g., graders become more strict or more lenient over time, making participants appear to have changed), or when alternative forms of a test are not sufficiently equivalent. Observer bias: when the experimenter's expectations influence the interpretation of the results, threatening internal validity and construct validity (the ratings do not represent true levels of the construct). Demand characteristics: when participants guess what the study is about and change their behavior in the expected direction. Placebo effect: when participants improve only because they believe they are receiving an effective treatment.

18. Explain how comparison groups, double-blind studies, and other design choices can help researchers avoid many of these threats to internal validity. Double-blind design: controls for observer bias and demand characteristics because neither observers nor participants know group assignments. Masked design: also controls for observer bias and demand characteristics — observers don't know which level participants are in, and participants don't know why they were assigned to a group. Double-blind placebo control: additionally controls for placebo effects; neither the people treating the patients nor the patients know who is in the real treatment group. Comparison groups: needed to show a real difference rather than a confound.

19. Articulate the reasons that a study might result in null effects: not enough variance between groups, too much variance within groups, or a true null effect. Null effect: when the independent variable does not make a difference in the dependent variable. Possible reasons: (1) there is truly no relationship between IV and DV; (2) the study was not designed or conducted carefully enough; (3) an obscuring factor prevents researchers from detecting the difference; (4) there is too much unsystematic variability within each group.

20. Describe at least two ways that a study might show inadequate variance between groups and indicate how researchers can identify such problems. Weak manipulations: the levels don't differ enough to produce a difference in behavior; ask how the independent variable was operationalized. Insensitive measures: the measure is not calibrated finely enough to detect small differences.
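Order effects (item 16) are the threat that counterbalancing (item 10) is designed to handle: present the IV levels in different orders across participants. A minimal sketch, with invented condition labels based on the mothers-and-children example:

```python
# Full counterbalancing: every ordering of the IV levels is used equally often.
# Condition labels and participant IDs are invented for illustration.
from itertools import permutations

levels = ["own_child", "other_child"]     # two levels of the within-groups IV
orders = list(permutations(levels))       # all possible presentation orders (2 here)

# Cycle participants through the orders so each order is used equally often.
participants = ["P1", "P2", "P3", "P4"]
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
# P1 and P3 see own_child first; P2 and P4 see other_child first.
```

With more than two or three levels, full counterbalancing becomes impractical (k levels give k! orders), which is why partial counterbalancing schemes exist.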
21. Explain why large within-group variance can obscure a between-group difference. Noise is unsystematic variability within each group. Less within-group variability means a larger effect size, making it easier to see the effect of the independent variable.

22. Describe three causes of within-group variance — measurement error, individual differences, and situation noise — and indicate how each might be reduced. Measurement error: human or instrument factors that can inflate or deflate a person's true score on the dependent variable; reduce it with reliable, precise tools and by measuring more instances. Individual differences: people respond differently to the same situation; reduce their impact by changing the design (e.g., within-groups) or adding more participants. Situation noise: external distractions; reduce it by controlling the surroundings. Reducing within-group variance increases a study's power to detect real differences.

23. Articulate how a factorial design works. A design with two or more independent variables (factors). Researchers cross the independent variables, studying each possible combination of their levels.

24. Explain two reasons to conduct a factorial study. Factorials test whether an IV affects different kinds of people, or people in different situations, in the same way. They can also be used to test theories: the best way to study how variables interact is to combine them in a factorial design and measure whether the results are consistent with the theory.

25. Review studies with one independent variable, which show a simple "difference." Example: the spaghetti-box serving-size experiment, with one IV and one DV. No matter what the product, when the same amount is in a large container, people use more of it than when it is in a small container.

26. Describe an interaction as a "difference in differences." An interaction is present when the effect of the original independent variable depends on the level of another independent variable. Example: when people eat ice cream they like their food cold more than hot, but when people eat pancakes they like their food hot more than cold. You like ice cream cold more than hot (cold minus hot is a positive value), but you like pancakes cold less than hot (cold minus hot is a negative value).

27. Describe interactions using terms such as "it depends" or "especially for." Ask whether the effect of the original independent variable depends on the level of another independent variable — for example, does the effect of cell phone use depend on driver age?

28. Estimate marginal means in a factorial design to look at main effects. Compare the two levels of one independent variable by averaging across the levels of the other independent variable. Figure 12.13 in the text labels these comparisons "main effects"; marginal means are what you compare to assess a main effect.

29. Identify interaction effects two ways: in a table and in a graph. In a table (see p. 321), subtract down the columns; if the differences differ, there is an interaction. On a graph, the lines are not parallel — they either cross (a crossover interaction) or spread apart (a spreading interaction).

30. Given a factorial notation (e.g., 2 × 2), identify the number of independent variables, the number of levels of each variable, the number of cells in the design, and the number of main effects and interactions that will be relevant. Factorials are notated in the form "__ by __." The quantity of numbers indicates the number of independent variables (a 2 × 3 design is represented with two numbers, 2 and 3). The value of each number indicates how many levels that independent variable has (two levels for one, three levels for the other). Multiplying the numbers gives the total number of cells. Each IV contributes one possible main effect, and a two-way design has one possible interaction.

31. Explain how quasi-experiments can be either independent-groups or within-groups designs. In a quasi-experiment, researchers do not have full experimental control: the groups (independent-groups version) or the timing of the treatment (within-groups version) are determined by circumstances outside the researchers' control.

32. Define the following quasi-experimental designs: nonequivalent control group design, interrupted time-series design, and nonequivalent control group interrupted time-series design. Nonequivalent control group design: one treatment group and one comparison group that were not randomly assigned; matching (sometimes ex post facto, after the fact) may be used to try to equalize the groups and decrease the likelihood of confounds. Interrupted time-series design: in the simplest form, a single group is observed and measured before, during, and after an event (repeated measures). Nonequivalent control group interrupted time-series design: a combination of the two previous designs.

33. Explain whether different quasi-experimental designs avoid the following threats to internal validity: selection, maturation, history, regression, attrition, testing, instrumentation, observer bias, experimental demand, and placebo effects. Selection: applies only to independent-groups designs; avoided if the design can eliminate differences between the treatment and comparison groups. Maturation: avoided with a comparison group and/or by the pattern of results (maturation will not reverse itself). History: not avoided, but its likelihood can be assessed. Regression: not avoided, because quasi-experiments do not use random assignment. Attrition: can be addressed by analyzing only those who complete the study. Testing: avoided with a comparison group and by examining the results. Instrumentation: avoided with a comparison group and by examining the results. Observer bias: avoided if participants are asked to self-report. Experimental demand: not avoided. Placebo effect: avoided if the comparison group is a placebo group.

34. Using both the design and the results, analyze whether a quasi-experimental design allows you to rule out internal validity threats. Both matter: a comparison group in the design, together with results that pattern as expected, helps rule out internal validity threats.

35. Explain the trade-offs of using a quasi-experimental design. Quasi-experimental designs enable researchers to take advantage of real-world opportunities to study interesting phenomena and important events. They can enhance external validity, since patterns observed in a quasi-experiment are likely to generalize to other settings and individuals. Ethically, researchers use quasi-experiments when it would be unethical to randomly assign individuals to a situation. The trade-off is weaker internal validity, because random assignment and full experimental control are absent.

36. Interrogate quasi-experimental designs by asking about construct validity, external validity, and statistical validity. Construct validity: interrogate how successfully the study manipulated or measured its variables (IV and DV). External validity: you don't need to ask whether the study applies to real-world settings, because it occurred in one; you might instead ask whether it generalizes to other demographics or populations. Statistical validity: ask how large the group differences are (the effect size) and whether the results are statistically significant.
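The factorial arithmetic in items 26-30 — counting cells, estimating marginal means, and computing an interaction as a "difference in differences" — can be sketched numerically. The cell means below are invented liking ratings for the ice-cream/pancakes example:

```python
# 2 x 2 factorial: food (ice cream, pancakes) x temperature (cold, hot).
# Cell means are invented ratings used only to illustrate the arithmetic.
cells = {
    ("ice_cream", "cold"): 8, ("ice_cream", "hot"): 3,
    ("pancakes",  "cold"): 2, ("pancakes",  "hot"): 9,
}

n_cells = len(cells)  # 2 x 2 = 4 cells (item 30)

# Marginal means (item 28): average one IV across the levels of the other.
marginal_cold = (cells[("ice_cream", "cold")] + cells[("pancakes", "cold")]) / 2
marginal_hot = (cells[("ice_cream", "hot")] + cells[("pancakes", "hot")]) / 2

# Interaction as a "difference in differences" (items 26, 29): compute
# cold-minus-hot within each food; if the simple differences differ, the IVs interact.
diff_ice_cream = cells[("ice_cream", "cold")] - cells[("ice_cream", "hot")]  # positive
diff_pancakes = cells[("pancakes", "cold")] - cells[("pancakes", "hot")]     # negative
interaction = diff_ice_cream - diff_pancakes  # nonzero and sign-flipped: crossover
```

With these numbers, the cold-minus-hot difference is positive for ice cream and negative for pancakes, so the effect of temperature "depends on" the food — the crossover interaction described in item 29.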