Research Design Discussions
Exercise 1 – Research Design Validity
Discuss the importance of validity and research design.
Next, choose one type of validity (internal, external, construct, or statistical conclusion) and discuss its relevance to experimental, quasi-experimental, and non-experimental research.
Exercise 2 – Comparison Groups
Comparison groups are one of the most important elements of scientific control in a research design.
Choose one type of comparison group from the list provided in the book and expand upon how the inclusion of this type of comparison group would improve the overall validity of the findings.
Exercise 3 – Control Techniques
Control is an important element in any type of research.
Considering experimental research, come up with a hypothetical research scenario and apply each of the five types of control to the scenario. Use specific examples to illustrate your point.
Exercise 4 – Establishing Cause and Effect
What are the major differences between experimental, quasi-experimental, and non-experimental research?
Discuss the three major conditions to meet cause and effect (be sure to review your text for further information). Provide a typical experimental “weakness” that wouldn’t allow a researcher to determine cause and effect.
Chapter 1 A Primer of the Scientific Method and Relevant Components
The primary objective of this book is to help researchers understand and select appropriate designs for their investigations within the field, lab, or virtual environment. Without a proper conceptualization of research design, it is difficult to apply an appropriate design based on the research question(s) or stated hypotheses. Implementing a flawed or inappropriate design will unequivocally lead to spurious, meaningless, or invalid results. Again, the concept of validity cannot be emphasized enough when conducting research. Validity has many facets (e.g., statistical validity or validity pertaining to the psychometric properties of instrumentation), operates on a continuum, and deserves equal attention at each level of the research process. Aspects of validity are discussed later in this chapter. In any case, the research question, hypothesis, objective, or aim is the first step in the selection of a research design.
The purpose of a research design is to provide a conceptual framework that will allow the researcher to answer specific research questions while using sound principles of scientific inquiry. The concept behind research designs is intuitively straightforward, but applying these designs in real-life situations can be complex. More specifically, researchers face the challenge of (a) manipulating (or exploring) the social systems of interest, (b) using measurement tools (or data collection techniques) that maintain adequate levels of validity and reliability, and (c) controlling the interrelationship between multiple variables or indicating emerging themes that can lead to error in the form of confounding effects in the results. Therefore, utilizing and following the tenets of a sound research design is one of the most fundamental aspects of the scientific method. Put simply, the research design is the structure of investigation, conceived so as to obtain the “answer” to research questions or hypotheses.
The Scientific Method
All researchers who attempt to formulate conclusions from a particular path of inquiry use aspects of the scientific method. The presentation of the scientific method and how it is interpreted can vary from field to field and method (qualitative) to method (quantitative), but the general premise is not altered. Although there are many ways or avenues to “knowing,” such as sources from authorities or basic common sense, the sound application of the scientific method allows researchers to reveal valid findings based on a series of systematic steps. Within the social sciences, the general steps include the following: (a) state the problem, (b) formulate the hypothesis, (c) design the experiment, (d) make observations, (e) interpret data, (f) draw conclusions, and (g) accept or reject the hypothesis. All research in quantitative methods, from experimental to nonexperimental, should employ the steps of the scientific method in an attempt to produce reliable and valid results.
The scientific method can be likened to an association of techniques rather than an exact formula; therefore, we expand the steps as a means to be more specific and relevant for research in education and the social sciences. As seen in Figure 1.1, these steps include the following: (a) identify a research problem, (b) establish the theoretical framework, (c) indicate the purpose and research questions (or hypotheses), (d) develop the methodology, (e) collect the data, (f) analyze and interpret the data, and (g) report the results. This book targets the critical component of the scientific method, referred to in Figure 1.1 as Design the Study, which is the point in the process when the appropriate research design is selected. We do not focus on prior aspects of the scientific method or any steps that come after the Design the Study step, including procedures for conducting literature reviews, developing research questions, or discussions on the nature of knowledge, epistemology, ontology, and worldviews. Specifically, this book focuses on the conceptualization, selection, and application of common research designs in the field of education and the social and behavioral sciences.
Again, although the general premise is the same, the scientific method is known to vary slightly from field to field (and from one type of method to another). The technique presented here may not exactly follow the logic required for research using qualitative methods; however, the conceptualization of research designs remains the same. We refer the reader to Jaccard and Jacoby (2010) for a review of the various scientific approaches associated with qualitative methods, such as emergent- and discovery-oriented frameworks.
Figure 1.1 The Scientific Method
Validity and Research Designs
The overarching goal of research is to reach valid outcomes based upon the appropriate application of the scientific method.
Independent and Dependent Variables
In simple terms, the independent variable (IV) is the variable that is manipulated (i.e., controlled) by the researcher as a means to test its impact on the dependent variable, otherwise known as the treatment effect. In the classical experimental study, the IV is the treatment, program, or intervention. For example, in a psychology-based study, the IV can be a cognitive-behavioral intervention; the intervention is manipulated by the researcher, who controls the frequency and intensity of the therapy on the subject. In a pharmaceutical study, the IV would typically be a treatment pill, and in agriculture the treatment often is fertilizer. In regard to experimental research, the IVs are always manipulated (controlled) based on the appropriate theoretical tenets that posit the association between the IV and the dependent variable.
Statistical software packages (e.g., SPSS) refer to the IV differently. For instance, the IV for the analysis of variance (ANOVA) in SPSS is the “breakdown” variable and is called a factor. The IV is represented as levels in the analysis (i.e., the treatment group is Level 1, and the control group is Level 2). For nonexperimental research that uses regression analysis, the IV is referred to as the predictor variable. In research that applies control in the form of statistical procedures to variables that were not or cannot be manipulated, the IVs are sometimes referred to as quasi- or alternate independent variables. These variables are typically demographic variables, such as gender, ethnicity, or socioeconomic status. As a reminder, in nonexperimental research the IV (or predictor) is not manipulated whether it is a categorical variable such as hair color or a continuous variable such as intelligence. The only form of control that is exhibited on these types of variables is that of statistical procedures. Manipulation and elimination do not apply (see types of control later in the chapter).
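The terminology above can be made concrete with a small sketch. The book discusses SPSS, but the same distinction appears in any statistics tool; the example below is a hypothetical illustration in Python (assuming the numpy and scipy libraries), with simulated data invented purely to show the two framings of an IV.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Experimental framing: the IV is a manipulated factor with two levels
# (Level 1 = treatment group, Level 2 = control group), analyzed via ANOVA.
treatment = rng.normal(loc=75, scale=10, size=30)  # hypothetical outcome scores
control = rng.normal(loc=70, scale=10, size=30)
f_stat, p_anova = stats.f_oneway(treatment, control)

# Nonexperimental framing: the IV is a continuous predictor in a regression
# (e.g., intelligence predicting an outcome); nothing is manipulated, and
# control is exercised only through the statistical procedure itself.
predictor = rng.normal(loc=100, scale=15, size=60)
criterion = 0.3 * predictor + rng.normal(scale=10, size=60)
result = stats.linregress(predictor, criterion)

print(f"ANOVA factor (2 levels): F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Regression predictor: slope = {result.slope:.2f}, p = {result.pvalue:.3f}")
```

The point of the sketch is only the vocabulary: the same conceptual role (the IV) is a "factor" with "levels" in the experimental analysis but a "predictor" in the nonexperimental one.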
The dependent variable (DV) is simply the outcome variable; its variability is a function of the IV's impact on it (i.e., the treatment effect). For example, what is the impact of the cognitive-behavioral intervention on psychological well-being? In this research question, the DV is psychological well-being. In nonexperimental research, where the IVs are not manipulated, the IVs are referred to as predictors and the DVs as criterion variables. During the development of research questions, it is critical to define the DV first conceptually and then operationally.
A conceptual definition is a critical element to the research process and involves scientifically defining the construct so it can be systematically measured. The conceptual definition is considered to be the (scientific) textbook definition. The construct must then be operationally defined to model the conceptual definition.
An operational definition is the actual method, tool, or technique that indicates how the construct will be measured (see Figure 1.2).
Consider the following example research question: What is the relationship between Emotional Intelligence and conventional Academic Performance?
Figure 1.2 Conceptual and Operational Definitions
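The move from conceptual to operational definition can be sketched with hypothetical measures for the example question above. The operationalizations below are assumptions for illustration only (EI as a total score on an invented 10-item self-report questionnaire; academic performance as GPA), with simulated data, using Python's numpy and scipy libraries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Operational definition of Emotional Intelligence (hypothetical):
# the sum of 10 Likert-type items (1-5) on a self-report questionnaire.
ei_items = rng.integers(1, 6, size=(50, 10))   # 50 respondents, 10 items
ei_score = ei_items.sum(axis=1)                # total score = operational EI measure

# Operational definition of Academic Performance (hypothetical):
# grade point average on a 0.0-4.0 scale, simulated here with a weak
# positive association to the EI score plus random noise.
gpa = np.clip(2.0 + 0.02 * (ei_score - ei_score.mean())
              + rng.normal(scale=0.4, size=50), 0.0, 4.0)

# With both constructs operationalized, the research question becomes testable
# as a correlation between the two measured variables.
r, p = stats.pearsonr(ei_score, gpa)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Note that the research question is only answerable once each construct has a concrete measurement rule; a different operational definition (e.g., an ability-based EI test, or exam scores instead of GPA) would yield a different, though related, analysis.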
Internal Validity
Internal validity is the extent to which the outcome was based on the independent variable (i.e., the treatment), as opposed to extraneous or unaccounted-for variables. Specifically, internal validity has to do with causal inferences, which is why it does not apply to nonexperimental research. The goal of nonexperimental research is to describe phenomena or to explain or predict the relationship between variables, not to infer causation (although there are circumstances when cause and effect can be inferred from nonexperimental research, and this is discussed later in this book). The identification of any explanation that could be responsible for an outcome (effect) outside of the independent variable (cause) is considered to be a threat. The most common threats to internal validity seen in education and the social and behavioral sciences are detailed in Table 1.1. It should be noted that many texts do not identify sequencing effects in the common lists of threats; however, it is included here because it is a primary threat in repeated-measures approaches.