Research Design Discussions

Please read: I have attached four discussions (Exercises 1, 2, 3, and 4). All need to be done on separate pages, with this book as the reference.

Edmonds, W. A., & Kennedy, T. D. (2017). An applied guide to research designs: Quantitative, qualitative, and mixed methods (2nd ed.). Thousand Oaks, CA: Sage.

Exercise 1 – Research Design Validity

Discuss the importance of validity and research design.

Next, choose one type of validity (internal, external, construct, or statistical conclusion) and discuss its relevance to experimental, quasi-experimental, and non-experimental research.

Exercise 2 – Comparison Groups

Comparison groups are one of the important elements of scientific control in a research design.

Choose one type of comparison group from the list provided in the book and expand upon how the inclusion of this type of comparison group would improve the overall validity of the findings.

Exercise 3 – Control Techniques

Control is an important element in any type of research.

Considering experimental research, come up with a hypothetical research scenario and apply each of the five types of control to the scenario. Use specific examples to illustrate your point.

Exercise 4 – Establishing Cause and Effect

What are the major differences between experimental, quasi-experimental, and non-experimental research?

Discuss the three major conditions that must be met to establish cause and effect (be sure to review your text for further information). Provide a typical experimental “weakness” that would prevent a researcher from determining cause and effect.

Chapter 1 A Primer of the Scientific Method and Relevant Components

The primary objective of this book is to help researchers understand and select appropriate designs for their investigations within the field, lab, or virtual environment. Without a proper conceptualization of a research design, it is difficult to apply an appropriate design based on the research question(s) or stated hypotheses. Implementing a flawed or inappropriate design will unequivocally lead to spurious, meaningless, or invalid results. Again, the concept of validity cannot be emphasized enough when conducting research. Validity has many facets (e.g., statistical validity or validity pertaining to the psychometric properties of instrumentation), operates on a continuum, and deserves equal attention at each level of the research process. Aspects of validity are discussed later in this chapter. Nonetheless, the research question, hypothesis, objective, or aim is the first step in the selection of a research design.

The purpose of a research design is to provide a conceptual framework that will allow the researcher to answer specific research questions while using sound principles of scientific inquiry. The concept behind research designs is intuitively straightforward, but applying these designs in real-life situations can be complex. More specifically, researchers face the challenge of (a) manipulating (or exploring) the social systems of interest, (b) using measurement tools (or data collection techniques) that maintain adequate levels of validity and reliability, and (c) controlling the interrelationship between multiple variables or indicating emerging themes that can lead to error in the form of confounding effects in the results. Therefore, utilizing and following the tenets of a sound research design is one of the most fundamental aspects of the scientific method. Put simply, the research design is the structure of investigation, conceived so as to obtain the “answer” to research questions or hypotheses.

The Scientific Method

All researchers who attempt to formulate conclusions from a particular path of inquiry use aspects of the scientific method. The presentation of the scientific method and how it is interpreted can vary from field to field and method (qualitative) to method (quantitative), but the general premise is not altered. Although there are many ways or avenues to “knowing,” such as sources from authorities or basic common sense, the sound application of the scientific method allows researchers to reveal valid findings based on a series of systematic steps. Within the social sciences, the general steps include the following: (a) state the problem, (b) formulate the hypothesis, (c) design the experiment, (d) make observations, (e) interpret data, (f) draw conclusions, and (g) accept or reject the hypothesis. All research in quantitative methods, from experimental to nonexperimental, should employ the steps of the scientific method in an attempt to produce reliable and valid results.

The scientific method can be likened to an association of techniques rather than an exact formula; therefore, we expand the steps as a means to be more specific and relevant for research in education and the social sciences. As seen in Figure 1.1, these steps include the following: (a) identify a research problem, (b) establish the theoretical framework, (c) indicate the purpose and research questions (or hypotheses), (d) develop the methodology, (e) collect the data, (f) analyze and interpret the data, and (g) report the results. This book targets the critical component of the scientific method, referred to in Figure 1.1 as Design the Study, which is the point in the process when the appropriate research design is selected. We do not focus on prior aspects of the scientific method or any steps that come after the Design the Study step, including procedures for conducting literature reviews, developing research questions, or discussions on the nature of knowledge, epistemology, ontology, and worldviews. Specifically, this book focuses on the conceptualization, selection, and application of common research designs in the field of education and the social and behavioral sciences.

Again, although the general premise is the same, the scientific method is known to vary slightly from field to field (and from method to method). The technique presented here may not exactly follow the logic required for research using qualitative methods; however, the conceptualization of research designs remains the same. We refer the reader to Jaccard and Jacoby (2010) for a review of the various scientific approaches associated with qualitative methods, such as emergent- and discovery-oriented frameworks.

Figure 1.1 The Scientific Method

Validity and Research Designs

The overarching goal of research is to reach valid outcomes based upon the appropriate application of the scientific method.

Independent and Dependent Variables

In simple terms, the independent variable (IV) is the variable that is manipulated (i.e., controlled) by the researcher as a means to test its impact on the dependent variable, otherwise known as the treatment effect. In the classical experimental study, the IV is the treatment, program, or intervention. For example, in a psychology-based study, the IV can be a cognitive-behavioral intervention; the intervention is manipulated by the researcher, who controls the frequency and intensity of the therapy on the subject. In a pharmaceutical study, the IV would typically be a treatment pill, and in agriculture the treatment often is fertilizer. In regard to experimental research, the IVs are always manipulated (controlled) based on the appropriate theoretical tenets that posit the association between the IV and the dependent variable.

Statistical software packages (e.g., SPSS) refer to the IV differently. For instance, the IV for the analysis of variance (ANOVA) in SPSS is the “breakdown” variable and is called a factor. The IV is represented as levels in the analysis (i.e., the treatment group is Level 1, and the control group is Level 2). For nonexperimental research that uses regression analysis, the IV is referred to as the predictor variable. In research that applies control in the form of statistical procedures to variables that were not or cannot be manipulated, the IVs are sometimes referred to as quasi- or alternate independent variables. These variables are typically demographic variables, such as gender, ethnicity, or socioeconomic status. As a reminder, in nonexperimental research the IV (or predictor) is not manipulated whether it is a categorical variable such as hair color or a continuous variable such as intelligence. The only form of control that is exhibited on these types of variables is that of statistical procedures. Manipulation and elimination do not apply (see types of control later in the chapter).
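To make these naming conventions concrete, here is a minimal sketch in Python (simulated data; all names are hypothetical, and SPSS itself would run these analyses through its menus) showing the same two-group IV treated first as an ANOVA factor with two levels and then as a dummy-coded predictor in a regression.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=55, scale=10, size=30)  # factor Level 1: treatment group
control = rng.normal(loc=50, scale=10, size=30)    # factor Level 2: control group

# ANOVA framing: the IV is the "factor" that breaks down the outcome.
f_stat, p_val = stats.f_oneway(treatment, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Regression framing: the same IV entered as a dummy-coded predictor.
x = np.concatenate([np.ones(30), np.zeros(30)])  # 1 = treatment, 0 = control
y = np.concatenate([treatment, control])         # the outcome (criterion)
result = stats.linregress(x, y)
print(f"Regression: b = {result.slope:.2f}, p = {result.pvalue:.4f}")

The two framings test the same group difference; only the vocabulary (factor and levels versus predictor and criterion) changes.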

The dependent variable (DV) is simply the outcome variable; its variability is a function of the IV and its impact on it (i.e., the treatment effect). For example, what is the impact of the cognitive-behavioral intervention on psychological well-being? In this research question, the DV is psychological well-being. In nonexperimental research, where the IVs are not manipulated, the IVs are referred to as predictors and the DVs as criterion variables. During the development of research questions, it is critical to first define the DV conceptually and then define it operationally.

A conceptual definition is a critical element of the research process and involves scientifically defining the construct so it can be systematically measured. The conceptual definition is considered to be the (scientific) textbook definition. The construct must then be operationally defined to model the conceptual definition.

An operational definition is the actual method, tool, or technique that indicates how the construct will be measured (see Figure 1.2).

Consider the following example research question: What is the relationship between Emotional Intelligence and conventional Academic Performance?
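As a hedged illustration of moving from a conceptual to an operational definition, suppose Emotional Intelligence is operationalized as a total score on an EI questionnaire and Academic Performance as GPA (both measures are hypothetical choices for this sketch). A simple Python correlation then answers the research question as stated.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ei_score = rng.normal(loc=100, scale=15, size=50)          # operationalized EI
gpa = 2.0 + 0.01 * ei_score + rng.normal(0, 0.3, size=50)  # operationalized AP

r, p = stats.pearsonr(ei_score, gpa)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # direction and strength of the relationship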

Figure 1.2 Conceptual and Operational Definitions

Internal Validity

Internal validity is the extent to which the outcome was based on the independent variable (i.e., the treatment), as opposed to extraneous or unaccounted-for variables. Specifically, internal validity has to do with causal inferences, which is why it does not apply to nonexperimental research. The goal of nonexperimental research is to describe phenomena or to explain or predict the relationship between variables, not to infer causation (although there are circumstances when cause and effect can be inferred from nonexperimental research, as discussed later in this book). The identification of any explanation that could be responsible for an outcome (effect) apart from the independent variable (cause) is considered a threat. The most common threats to internal validity seen in education and the social and behavioral sciences are detailed in Table 1.1. It should be noted that many texts do not identify sequencing effects in their lists of common threats; however, it is included here because it is a primary threat in repeated-measures approaches.

Table 1.1 Common Threats to Internal Validity

Construct Validity

Construct validity refers to the extent to which a generalization can be made from the operationalization (i.e., the scientific measurement) of the theoretical construct back to the conceptual basis responsible for the change in the outcome. Again, although the threats to construct validity seen in Table 1.3 are defined to imply issues regarding cause-effect relations, the premise of construct validity should apply to all types of research. Some authors categorize some of these threats as social threats to internal validity, while others simply categorize certain threats listed in Table 1.3 as threats to internal validity. The categorization of these threats can be debated, but the premise of the threats to validity cannot be argued (i.e., a violation of construct validity affects the overall validity of the study in the same way as a violation of internal validity).

Statistical Conclusion Validity

Statistical conclusion validity is the extent to which the statistical covariation (relationship) between the treatment and the outcome is accurate. Specifically, statistical conclusion validity has to do with the ability to detect the relationship between the treatment and the outcome, as well as to determine the strength of that relationship. The most notable threats to statistical conclusion validity are outlined in Table 1.4. A violation of statistical conclusion validity typically results in the overestimation or underestimation of the relationship between the treatment and the outcome in experimental research, or of the explained or predicted relationships between variables in nonexperimental research.
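Low statistical power is one of the classic threats of this kind: an underpowered study can miss a relationship that actually exists. As a hedged sketch, the Python snippet below uses statsmodels to estimate the per-group sample size needed to detect a medium effect (the effect size, alpha, and power values are illustrative choices, not prescriptions).

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # medium standardized effect (Cohen's d)
    alpha=0.05,       # Type I error rate
    power=0.80,       # probability of detecting the effect if it exists
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64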

Design Logic

The overarching objective of a research design is to provide a framework from which specific research questions or hypotheses can be answered while using the scientific method. The concept of a research design and its structure is, at face value, rather simplistic. However, complexities arise when researchers apply research designs within social science paradigms. These include, but are not limited to, logistical issues, lack of control over certain variables, psychometric issues, and theoretical frameworks that are not well developed. In addition, with regard to statistical conclusion validity, a researcher can apply sound principles of scientific inquiry while applying an appropriate research design but may compromise the findings with inappropriate data collection strategies, faulty or “bad” data, or misdirected statistical analyses. Shadish and colleagues (2002) emphasized the importance of structural design features and that researchers should focus on the theory of design logic as the most important feature in determining valid outcomes (or testing causal propositions). The logic of research designs is ultimately embedded within the scientific method, and applying the principles of sound scientific inquiry within this phase is of the utmost importance and the primary focus of this guide.

Control

Control is an important element in securing the validity of research designs within quantitative methods (i.e., experimental, quasi-experimental, and nonexperimental research). However, within qualitative methods, behavior is generally studied as it occurs naturally, with no manipulation or control. Control refers to the concept of holding variables constant or systematically varying the conditions of variables, based on theoretical considerations, as a means to minimize the influence of unwanted (i.e., extraneous) variables. Control can be applied actively within quantitative methods through (a) manipulation, (b) elimination, (c) inclusion, (d) group or condition assignment, or (e) statistical procedures.

Manipulation.

Manipulation is applied by manipulating (i.e., controlling) the independent variable(s). For example, a researcher can manipulate a behavioral intervention by systematically applying and removing the intervention or by controlling the frequency and duration of the application (see section on independent variables).

Elimination.

Elimination is conducted when a researcher holds a variable constant, thereby converting it into a constant. If, for example, a researcher ensures the temperature in a lab is set exactly to 76° Fahrenheit for both conditions in a biofeedback study, then temperature is eliminated as a factor because it is held constant.

Inclusion.

Inclusion refers to the addition of an extraneous variable into the design to test its effect on the outcome (i.e., the dependent variable). For example, a researcher can include both males and females in a factorial design to examine the independent effect gender has on the outcome. Inclusion can also refer to the addition of a control or comparison group within the research design.
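As a hedged sketch of inclusion, the Python snippet below adds gender as a second factor in a 2 x 2 factorial ANOVA so that its main effect, and its interaction with the treatment condition, can be tested (the data frame and its values are hypothetical).

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome":   [52, 61, 48, 66, 55, 70, 50, 63],
    "condition": ["control", "treatment"] * 4,
    "gender":    ["male"] * 4 + ["female"] * 4,
})

# Factorial model: main effects of condition and gender plus their interaction.
model = smf.ols("outcome ~ C(condition) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))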

Group assignment.

Group assignment is another major form of control (see more on group and condition assignments later). For the between-subjects approach, a researcher can exercise control through random assignment, using a matching technique, or applying a cutoff score as a means to assign participants to conditions. For the repeated-measures approach, control is exhibited when the researcher employs the technique of counterbalancing to variably expose each group or individual to all levels of the independent variable.
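The Python sketch below illustrates both assignment controls with hypothetical participant IDs: random assignment to two groups for a between-subjects design, then counterbalanced condition orders for a repeated-measures design.

import random
from itertools import permutations

random.seed(1)
participants = [f"P{i:02d}" for i in range(1, 9)]

# Between-subjects control: random assignment to conditions.
random.shuffle(participants)
groups = {"treatment": participants[:4], "control": participants[4:]}
print(groups)

# Repeated-measures control: counterbalance the order of conditions A, B, C.
orders = list(permutations(["A", "B", "C"]))  # all six possible orders
for i, person in enumerate(sorted(participants)):
    print(person, "->", orders[i % len(orders)])  # rotate orders across people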

Statistical procedures.

Statistical procedures are exercised on variables, for example, by systematically deleting, combining, or excluding cases and/or variables (e.g., removing outliers) within the analysis; this is also part of the data-screening process. As illustrated in Table 1.5, all of the major forms of control can be applied in designs for experimental and quasi-experimental research. The only form of control that can be applied to nonexperimental research is statistical control.
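As one concrete, hedged example of statistical control during data screening, the Python sketch below flags and removes a case whose standardized score exceeds a chosen cutoff (the data and the |z| > 2.5 cutoff are purely illustrative; any cutoff should be justified on conventional or theoretical grounds).

import numpy as np
from scipy import stats

scores = np.array([48, 52, 50, 49, 51, 53, 47, 95])  # 95 is a suspect case
z = np.abs(stats.zscore(scores))                     # standardize each case
cleaned = scores[z < 2.5]                            # drop cases beyond the cutoff
print(f"Removed {len(scores) - len(cleaned)} case(s); kept: {cleaned}")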

Comparison and Control Groups

The group that does not receive the actual treatment, or intervention, is typically designated as the control group. Control groups fall under the group or condition assignment aspect of control. Control groups are comparison groups and are primarily used to address threats to internal validity such as history, maturation, selection, and testing. A comparison group refers to the group or groups that are not part of the primary focus of the investigation but allow the researcher to draw certain conclusions and strengthen aspects of internal validity. There are several distinctions and variations of the control group that should be clarified.

· Control group. The control group, also known as the no-contact control, receives no treatment and no interaction.

· Attention control group. The attention control group, also known as the attention-placebo, receives attention in the form of a pseudo-intervention to control for reactivity to assessment (i.e., the participant’s awareness of being studied may influence the outcome).

· Nonrandomly assigned control group. The nonrandomly assigned control is used when a no-treatment control group cannot be created through random assignment.

· Wait-list control group. The wait-list control group is withheld from the treatment for a certain period of time, then the treatment is provided. The time in which the treatment is provided is based on theoretical tenets and on the pretest and posttest assessment of the original treatment group.

· Historical control group. A historical control group is chosen from a group of participants who were observed at some time in the past or for whom data are available through archival records. Sometimes referred to as cohort controls (i.e., homogeneous successive groups), historical controls are useful in quasi-experimental research.

Sampling Strategies

I also copied the chapter readings so you can have them, because this professor wrote this book and he will know if we are messing around with the content.