
Experimental Method - Indiana University Bloomington


Experimental Method

"The best method, indeed the only fully compelling method, of establishing causation is to conduct a carefully designed experiment in which the effects of possible lurking variables are controlled. To experiment means to actively change x and to observe the response in y" (p. 202). Moore, D., & McCabe, D. (1993). Introduction to the practice of statistics. New York: Freeman.

"The experimental method is the only method of research that can truly test hypotheses concerning cause-and-effect relationships. It represents the most valid approach to the solution of educational problems, both practical and theoretical, and to the advancement of education as a science" (p. 298). Gay, L. R. (1992). Educational research (4th ed.). New York: Merrill.

Importance of Good Design

"100% of all disasters are failures of design, not analysis." Ron Marks, Toronto, August 16, 1994.

"To propose that poor design can be corrected by subtle [statistical] analysis techniques is contrary to good scientific thinking." Stuart Pocock (Controlled Clinical Trials, p. 58), regarding the use of retrospective adjustment for trials with historical controls.

"Issues of design always trump issues of analysis." Dallal, 1999, explaining why it would be wasted effort to focus on the analysis of data from a study under challenge whose design was fatally flawed.

Unique Features of Experiments
1. The investigator manipulates a variable directly (the independent variable).
2. Empirical observations based on experiments provide the strongest argument for cause-and-effect relationships.

Additional features:
1. Problem statement -> theory -> constructs -> operational definitions -> variables -> hypotheses.
2. The research question (hypothesis) is often stated as the alternative to the null hypothesis, which is used to interpret differences in the empirical data.
3. Random sampling of subjects from the population (ensures the sample is representative of the population).
4. Random assignment of subjects to treatment and control (comparison) groups (ensures equivalency of groups; i.e., unknown variables that may influence the outcome are equally distributed across the groups).
5. Extraneous variables are controlled by 3 and 4, and by other procedures if needed.
6. After treatment, the performance of subjects (dependent variable) in both groups is compared.

Ways to control extraneous variables:

1. Random assignment of subjects to groups. This is the best way to control extraneous variables in experimental research; it provides control for subject characteristics, maturation, and statistical regression (see the sketch after this list).
2. Extraneous variables that may still operate:
   a. Subject mortality (e.g., dropouts due to treatment)
   b. Hawthorne effect
   c. Fidelity of treatment (manipulation check)
   d. Data collector bias (double-blind studies)
   e. Location, history
3. Additional procedures for controlling extraneous variables (use as needed):
   a. Exclude certain variables.
   b. Blocking.
   c. Matching subjects on certain characteristics.
   d. Use subject as own control.
   e. Analysis of covariance.
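
The following is a minimal sketch (not part of the original handout) of how random assignment to a treatment and a comparison group might be carried out; the subject IDs, group labels, and seed are made up for illustration.

```python
import random

def randomly_assign(subjects, seed=None):
    """Shuffle the subject pool and split it into two equal-sized groups.

    Random assignment distributes unknown subject characteristics evenly
    across groups in expectation, which is what licenses the
    treatment/comparison contrast.
    """
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)                      # randomize the order of subjects
    half = len(pool) // 2
    return {"treatment": pool[:half], "comparison": pool[half:]}

# Hypothetical roster of eight subjects
groups = randomly_assign([f"S{i:02d}" for i in range(1, 9)], seed=42)
print(groups["treatment"])
print(groups["comparison"])
```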

True Experimental Designs

A. Randomized Post-test Only Control Group Design

   Treatment    R  X1  O        R = random assignment
   Comparison   R  X2  O        X = treatment (occurs for X1 only)
                                O = observation (dependent variable)

This is the best of all designs for experimental research. Random assignment controls for subject characteristics, maturation, and statistical regression. Potential threats not controlled: subject mortality, Hawthorne effect, fidelity of treatment, data collector bias, unique features of location, history of subjects. (An analysis sketch follows design B below.)

B. Randomized Pretest Post-test Control Group Design

   Treatment    R  O1  X1  O2   R  = random assignment
   Comparison   R  O1  X2  O2   X  = treatment (occurs for X1 only)
                                O1 = observation (pretest)
                                O2 = observation (post-test, dependent variable)

Potential threat: the effect of pretesting.
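
Both designs end with a between-groups comparison on the post-test. As a minimal illustration (not part of the handout), the post-test only design is often analyzed with an independent-samples t test; the scores below are invented.

```python
import numpy as np
from scipy import stats

# Invented post-test scores for a randomized post-test only design
treatment = np.array([78, 85, 90, 72, 88, 81, 79, 93])
comparison = np.array([70, 74, 82, 68, 77, 73, 80, 71])

# Welch's t test compares the group means without assuming equal variances
t_stat, p_value = stats.ttest_ind(treatment, comparison, equal_var=False)
print(f"mean difference = {treatment.mean() - comparison.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```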

C. Randomized Solomon Four Group Design

   Treatment    R  O1  X1  O2   R  = random assignment
   Comparison   R  O1  X2  O2   X  = treatment (occurs for X1 only)
   Treatment    R      X1  O2   O1 = observation (pretest)
   Comparison   R      X2  O2   O2 = observation (post-test, dependent variable)

Random sampling, random assignment. Best control of threats to internal validity, particularly the threat introduced by pretesting. Requires a relatively large number of subjects.

D. Randomized Assignment with Matching

1. Randomized (sampling and assignment), matched Ss, post-test only, control group

   Treatment    M,R  X1  O      M = matched subjects
   Comparison   M,R  X2  O      R = random assignment of matched pairs
                                X = treatment (for X1 only)
                                O = observation (dependent variable)

Example: An experimenter wants to test the impact of a novel instructional program in formal logic.

The investigator infers from reports in the literature that high-ability students and those with programming, mathematical, or music backgrounds are likely to excel in formal logic regardless of the type of instruction. The experimenter randomly samples subjects, looks at the subjects' SAT scores, matches subjects on the basis of SAT scores, and randomly assigns the matched pairs (one member of each pair to each group). The other concomitant variables (previous programming, mathematical, and music experience) could also be matched.

2. Randomized Pretest-Post-test Control Group, Matched Ss

   Treatment    O1  M,R  X1  O2   O1 = pretest
   Comparison   O1  M,R  X2  O2   M  = matched subjects
                                   R  = random assignment of matched pairs
                                   X  = treatment (for X1 only)
                                   O2 = observation (dependent variable)

Subjects are matched on the basis of their pretest score, and pairs of subjects are randomly assigned to groups (a sketch of this pairing step follows below).
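
As a minimal sketch of that pairing step (not from the handout, with made-up scores and names), subjects can be rank-ordered on the matching variable, paired off, and each pair split at random between the two groups.

```python
import random

def match_and_assign(scores, seed=None):
    """Rank subjects on a matching variable (e.g., SAT or pretest score),
    pair adjacent subjects, and randomly split each pair between groups."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get, reverse=True)  # high to low
    treatment, comparison = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)            # coin flip within each matched pair
        treatment.append(pair[0])
        comparison.append(pair[1])
    return treatment, comparison

# Hypothetical SAT scores for eight subjects
sat = {"S1": 1450, "S2": 1210, "S3": 1390, "S4": 1180,
       "S5": 1320, "S6": 1500, "S7": 1260, "S8": 1100}
treat, comp = match_and_assign(sat, seed=7)
print("treatment:", treat)
print("comparison:", comp)
```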

3. Matching methods

a. Mechanical matching
   1) Rank-order the subjects on the matching variable, take the top two, and randomly assign the members of the pair to the groups. Repeat for all pairs.
   2) Problems: It is impossible to match on more than one or two variables simultaneously, and some Ss may need to be eliminated because no appropriate match exists for one of the groups.

b. Statistical matching
   1) The purpose is to control for factors that cannot be randomized but nonetheless can be measured on (at least) an interval scale (in practice, ordinal scales are often treated as if they were interval). Statistical control is achieved by measuring one or more concomitant variables (referred to as covariates) in addition to the variable (variate) of primary interest (i.e., the dependent or response variable).

   Statistical control can be used in experimental designs, and because no direct manipulation of subjects or conditions is required, it can also be used in quasi-experimental and non-experimental designs.
   2) Analysis of covariance is used to test the main and interaction effects of categorical variables on a continuous dependent variable, controlling for the effects of selected other continuous variables that covary with the dependent variable; such a control variable is called the covariate.
   3) To control a covariate statistically means the same as to adjust for the covariate, to correct for the covariate, to hold the covariate constant, or to partial out the covariate.
   4) But see: Loftin, L., & Madison, S. (1991). The extreme dangers of covariance corrections. In B. Thompson (Ed.), Advances in educational research: Substantive findings, methodological developments (Vol. 1, pp. 133-148). Greenwich, CT: JAI Press. (ISBN 1-55938-316-X). Thompson, B. (1992). Misuse of ANCOVA and related "statistical control" procedures. Reading Psychology, 13, iii-xviii.
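
To make point 2) concrete, here is a minimal sketch (not part of the handout) of how an ANCOVA-style model is commonly specified with the statsmodels formula API; the data and variable names are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: group is the categorical factor, pretest is the covariate
df = pd.DataFrame({
    "group":   ["treat"] * 6 + ["comp"] * 6,
    "pretest": [52, 61, 47, 58, 66, 55, 50, 63, 45, 59, 64, 53],
    "post":    [74, 82, 69, 80, 88, 77, 65, 75, 60, 72, 78, 68],
})

# Post-test regressed on group while adjusting for (holding constant) the pretest covariate
model = smf.ols("post ~ C(group) + pretest", data=df).fit()
print(model.summary().tables[1])  # the C(group) coefficient is the covariate-adjusted group effect
```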

Pre-Experimental Designs

A. One-Shot Case Study

   X  O                         X = treatment
                                O = observation (dependent variable)

Problems: There is no control group, so one cannot tell whether the treatment had any effect. Comments from Campbell and Stanley (1963): "As has been pointed out (e.g., Boring, 1954; Stouffer, 1949) such studies have such a total absence of control as to be of almost no scientific value" (p. 6). "Basic to scientific evidence (and to all knowledge-diagnostic processes including the retina of the eye) is the process of comparison, of recording differences, or of contrast."

