Discuss the significance of the article for working with children with disabilities and their families.

https://doi.org/10.1177/0014402919893931

Exceptional Children, 2020, Vol. 86(3), 293–309. © The Author(s) 2020.

Original Research

Can Difficulties in Language Acquisition and Specific Learning Disabilities Be Separated Among English Learners?

H. Lee Swanson (1, 2), Jennifer Kong (1, 2), Stefania D. Petcu (2), and Monica Fiorella Asencio Pimentel (2)

Affiliations: 1. University of California, Riverside; 2. University of New Mexico

Corresponding author: H. Lee Swanson, Educational Psychology, College of Education, University of New Mexico, Albuquerque, NM 87131, USA. Email: HLswanson@unm.edu

Abstract

This study investigated the prevalence of latent classes at risk for reading or math disabilities in elementary-age children whose first language is Spanish. To this end, children (N = 394) in Grades 1, 2, and 3 were administered a battery of vocabulary, reading, math, and cognitive measures in both Spanish and English. Three important findings occurred. First, five latent classes emerged (average achievers, poor achievers, reading disabled, English language learners, Spanish-dominant achievers) that varied in language and achievement scores. Second, probability estimates indicated that 10% of the total sample was at risk for learning disabilities (below the cutoff score), and approximately 40% of the sample reflected a language acquisition group not at risk for academic difficulties. Finally, the best model for correctly predicting the odds of latent classes differing from average achievers included English measures of short-term memory, naming speed, and the executive component of working memory. The results support the notion that statistically distinct latent classes emerge under the umbrella of children identified as English learners and that children at risk for specific learning disabilities can be separated within a heterogeneous sample of children who are acquiring English as a second language.

In the United States, school achievement is lower for English language learners (ELLs) who speak Spanish as their first language than for other minorities and Caucasian children (e.g., August & Hakuta, 1997; Hemphill & Vanneman, 2011; National Assessment of Educational Progress, 2011, 2017). In addition, cross-sectional studies have shown that ELLs disproportionately experience reading and math difficulties across various age levels (e.g., Kieffer, 2011; Martiniello, 2009). Compounding these difficulties is the fact that many of these ELL children with reading and math difficulties are not provided appropriate services. For example, national estimates reveal that ELL children are underrepresented overall in special education, meaning that a smaller percentage of these children are receiving services than would be expected, given the proportion of the overall population that they represent (e.g., Morgan & Farkas, 2016).

More important, confounds exist in the assessment of children with potential learning problems who are second-language learners. These confounds are due in part to attributing difficulties in second-language acquisition and reading or math achievement to the same cognitive processes as found in children with learning disabilities. In practice, these confounds may lead to ELLs being inappropriately diagnosed with learning disabilities and placed in special education. The opposite situation is also true: children who are at potential risk for learning disabilities may be overlooked and not provided intervention. To circumvent some of these problems, it is necessary to distinguish the processes found in children with learning disabilities from other processes related to second-language acquisition. These issues underscore the need for better tools and methods for accurately identifying ELL children with serious reading and math difficulties. (The terms "ELLs" and "emerging bilinguals" are used interchangeably throughout the article.)


This study has two purposes. The first purpose was to determine if ELL children at risk for specific learning disabilities in reading or math reflect a discrete latent class of learners. Currently, children at risk for learning disabilities in reading or math have been defined by performing below a cutoff score on a norm-referenced standardized reading or math test (e.g., Branum-Martin et al., 2013; Geary et al., 2012; Lipka et al., 2006). However, this selection process for determining which children are at risk for learning disabilities has been criticized because of its reliance on artificial cutoff scores (e.g., Branum-Martin et al., 2013; Cirino et al., 2015). These artificial standards are further exacerbated when risk status is defined among ELL students, because such children are not tested in their first language (e.g., Peña et al., 2016). This is unfortunate because it is commonly assumed that a certain threshold within one's native language is necessary before the cognitive processes and academic performance in the second language can be assessed (e.g., Cummins, 1979).

To address some of these issues, methodological advances have contributed to our understanding of ELL children's academic skills, such as modeling the development of discrete processes with latent class analysis (LCA; e.g., Collins & Lanza, 2010; Muthén, 2006). LCA is a statistical method used to identify subgroups of individuals characterized by similar multidimensional patterns of responses (e.g., Collins et al., 2000). In one sense, LCA is a categorical analog to factor analysis. Instead of attributing a complex covariance structure to continuous latent factors, LCA posits unobserved classes to explain complex associations in a multidimensional contingency table. Studies that involve the analysis of unobserved classes drawn from a heterogeneous sample are sometimes referred to as mixture models (e.g., Muthén, 2006). A rationale for using latent class or mixture modeling is that although reading or math skills can be represented as a continuous outcome variable, the sample may be composed of different groups (or classes) of individuals. The advantage of LCA compared with other procedures, such as cluster analysis, is that it offers a probabilistic model of the distribution of latent classes in the data. In this study, we test the notion that discrete latent classes, or mixtures, representing different states of academic proficiency exist among ELL children who may be identified as at risk or not at risk.
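To make the probabilistic nature of LCA concrete, the short sketch below computes posterior class-membership probabilities for one response pattern from hypothetical class prevalences (the gamma parameters) and class-conditional item-response probabilities (the rho parameters), assuming local independence of the binary indicators. All numbers and variable names are illustrative placeholders, not estimates from this study.

```python
import numpy as np

# Hypothetical LCA parameters for 2 latent classes and 3 binary indicators
# (1 = at or below the 16th-percentile cutoff). These values are invented
# for illustration; they are not estimates from Swanson et al. (2020).
gamma = np.array([0.85, 0.15])            # class prevalences
rho = np.array([[0.10, 0.12, 0.08],       # P(indicator = 1 | class 1)
                [0.75, 0.80, 0.70]])      # P(indicator = 1 | class 2)

def posterior_class_probs(y, gamma, rho):
    """Posterior P(class | response pattern y) for binary indicators y."""
    # Likelihood of the observed pattern under each class, assuming
    # local independence of indicators within a class.
    like = np.prod(rho ** y * (1 - rho) ** (1 - y), axis=1)
    joint = gamma * like
    return joint / joint.sum()

# A child scoring below the cutoff on all three indicators
print(posterior_class_probs(np.array([1, 1, 1]), gamma, rho))
```

In the study itself these parameters were estimated with Mplus and SAS rather than computed by hand; the point is simply that LCA assigns each child a probability of belonging to each class rather than a hard cluster label.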

The second purpose of this study was to determine the cognitive processes that correlate with the performance of ELL children at risk for achievement difficulties. Current procedures to identify children with potential learning disabilities in reading or math assume that such children experience cognitive constraints that impede their ability to perform efficiently on achievement measures (e.g., Geary et al., 2017; Lesaux et al., 2006). Thus, on the assumption that a discrete subgroup of ELL children at risk for learning disabilities in reading or math emerges, it is important to know the cognitive processes associated with these risk groups. One of the most-often-referred-to cognitive processes underlying both reading and math disabilities is working memory (WM; Cowan, 2014; David, 2012; Peng et al., 2016, 2018; Swanson & Beebe-Frankenberger, 2004), which has also been related to achievement difficulties in emerging bilinguals (e.g., Engle de Abreu, 2011; Engle de Abreu & Gathercole, 2012; Linck et al., 2013; Swanson et al., 2006, 2015). Although the association between WM and reading or math has been established in the literature, the processes of WM that underlie predictions of reading or math performance are unclear (see Peng et al., 2016, 2018, for review). Some studies have suggested that the storage component of WM (referred to as verbal short-term memory, or STM) plays a major role in academic performance. Other studies have noted that academic difficulties are tied to the executive component of WM (e.g., Peng et al., 2016, 2018; Swanson et al., 2015).

In summary, the purpose of this study was to identify whether ELL children at risk for learning disabilities reflect a latent class. The study determined if this potential latent class could be differentiated in terms of severity of academic deficiencies from other latent classes and whether this differentiation reflected qualitatively different cognitive processes. To extend the literature in these areas, the study sought to answer two questions:

1. Can a latent classification of ELL children at risk for reading or math difficulties be identified within a heterogeneous sample of ELLs?

Traditionally, as indicated earlier, children at risk for learning disabilities in reading or math are operationally defined by performing below a cutoff point on a norm-referenced achievement measure (studies vary from the 11th to the 25th percentile on norm-referenced standardized achievement measures; e.g., Murphy et al., 2007; Swanson et al., 2006; Vukovic & Lesaux, 2013). The present study determines the probability of identifying a latent class of participants at risk for learning disabilities using the 16th percentile (standard score of 85) as a cutoff point within a sample that was administered a test battery of math, reading, and cognitive measures. This cutoff was considered conservative because it captures performance below what is considered the average range in normative standard score distributions. As mentioned, LCA is a model-based clustering approach that derives clusters using a probabilistic model describing the distribution of the data. Therefore, instead of finding clusters of children with low academic performance, LCA describes the distribution of the data based on a model that assesses the probabilities that certain cases are members of certain latent classes. Thus, with goodness-of-fit indices, it is possible to test whether a "latent structure" underlies the data.

A further refinement in the sample selection of ELL children at risk for learning disabilities includes making sure that such children perform above the cutoff score (>16th percentile) on vocabulary measures in the first language (L1). This refinement is necessary to establish that risk status resides in the academic domain and not in language (i.e., L1) per se. Likewise, further refinement in sample selection includes establishing that such children's academic difficulties are not due to general intellectual difficulties or biased aptitude measures (e.g., Ferrer et al., 2010; Lohman et al., 2008; Lohman & Gambrell, 2012).

2. Do specific cognitive measures predict latent class membership?

On the basis of the aforementioned discussion, we determine whether cognitive processes related to language acquisition (e.g., phonological storage, or STM) can be separated from those that characterize children at risk for learning disabilities. Clearly, both groups may share some processing difficulties, but one or two processes may be particularly helpful for distinguishing ELL children at potential risk for learning disabilities in reading or math from children experiencing difficulties acquiring English as a second language (L2). For example, deficits in the phonological system (phonological storage) have commonly been attributed to reading disabilities in English (e.g., Stanovich & Siegel, 1994) and Spanish (e.g., Gonzàlez & Valle, 2000). More recent studies have found that executive processes, primarily those related to WM, are also significantly related to L2 reading and math performance (e.g., Swanson et al., 2015, 2018). For this study, WM is defined as a limited-capacity system related to the preservation of information while other information is simultaneously being processed (Baddeley & Logie, 1999). The system reflects controlled attention because information to be recalled is presented in the context of competing information.

In addition to STM and WM, mental operations related to naming speed and inhibition of the competing language may also play an important role in ELL children's academic performance (e.g., Bonifacci et al., 2011; Cooper, 2012). For example, letter- and digit-naming speed may underlie the general pattern of cognitive difficulties among some emerging bilinguals. Thus, our predictions are that processes related to executive processing (WM, inhibition) or the phonological storage system (STM) play a unique role in predicting a latent class of children at risk for learning disabilities in reading or math.

In summary, the present study tested whether various latent classes related to reading or math skills emerge among ELL children. Measures used to classify children at risk for learning disabilities in either reading or math included norm-referenced tests of reading, math, and language in both Spanish and English. To extend our focus beyond academic and vocabulary measures, we also included in the classification battery measures of classroom behavior (attention deficit hyperactivity disorder) and nonverbal reasoning (fluid intelligence). Specifically, we expected to find latent classes of children at risk for achievement difficulties (i.e., reading or math disabilities), children not at risk for achievement difficulties who were proficient in both languages (English and Spanish), and children not at risk who were more proficient in their first language (Spanish) than in their second language (English).

Method

Participants

Three hundred ninety-four (N = 394) students in Grades 1 (n = 155), 2 (n = 129), and 3 (n = 110) from two large school districts in the southwestern United States participated in this study. The children were designated as ELLs or emerging bilinguals by their schools and were selected from 30 classrooms. These children attended urban schools with high poverty representation (over 98% of the children participated in a free or reduced-price federal lunch program) as well as high Hispanic representation (>95%). The final sample included 192 boys and 202 girls who returned signed consent forms. School records indicated that the children's primary home language was Spanish (>80%). All children were selected from dual-language classrooms in which instruction was provided in both English and Spanish. No significant differences in gender representation emerged across the grades, χ2(2, N = 394) = 2.88, p = .23.

Measures Used for Identifying Latent Classes

The study included group and individual administrations of a battery of tests. The tests were counterbalanced into one of four presentation orders. Spanish and English versions of the same test were not presented simultaneously (except for the Expressive One-Word Picture Vocabulary Test, Spanish-Bilingual Edition [EOWPVT-SBE]; Brownell, 2001). All participants were administered both English and Spanish versions of each measure by bilingual graduate students and staff researchers. The mean raw scores and reliabilities for all measures for the current sample, described next, are provided in the online supplement to this article (see Supplement Table 4). Because the norm-referenced standardized measures used to establish the latent classes are commercially available, as is information on their validity and reliability, they are only briefly reviewed here. Additional detail is provided later for the experimental cognitive measures.

Vocabulary: Receptive and Expressive

The Peabody Picture Vocabulary Test (PPVT; Dunn & Dunn, 2007) was administered in English. In this task, children were presented with four pictures and were asked to select the picture that matched the word read aloud in English. The Test de Vocabulario en Imágenes (TVIP) was also administered. This measure is similar to the PPVT in presentation and administration, except that words are read aloud in Spanish (Dunn et al., 1986). The EOWPVT-SBE (Brownell, 2001) was used as a measure of English- and Spanish-speaking vocabulary. The sample Cronbach's alpha reliabilities for the receptive and expressive vocabulary measures were .96 and .95 for the English measures and .92 and .96 for the Spanish measures, respectively.

Reading: Word Identification and Passage Comprehension

The Woodcock-Muñoz Language Survey–Revised (WMLS-R) was used to establish norm-referenced reading levels in English and Spanish (Woodcock-Muñoz et al., 2005). The WMLS-R Spanish and English Word Identification and Passage Comprehension subtests were administered. The sample Cronbach's alpha reliabilities for the Word Identification and Passage Comprehension subtests were .95 and .90 for the English measures and .89 and .80 for the Spanish measures, respectively.

Math: Calculation and Word Problems

The Calculation and Applied Math Problem Solving subtests from the Woodcock-Johnson III (Woodcock et al., 2001) were administered for the English presentation, and the Calculation and Problemas Aplicados subtests from the Batería III Woodcock-Muñoz (Muñoz-Sandoval et al., 2005) were administered to establish norm-referenced math levels in Spanish. These subtests are individually administered and assess children's early mathematical operations (e.g., counting, addition, and subtraction) through practical problems. The sample Cronbach's alpha reliabilities for the calculation and applied problems subtests were .78 and .78 for the English measures and .83 and .71 for the Spanish measures, respectively.

Fluid Intelligence and Attention

Fluid intelligence. Fluid intelligence was assessed by administering the Raven Colored Progressive Matrices test (RCMT; Raven, 1976). The RCMT is commonly used to tap fluid intelligence because of its brevity in administration and because of its high correlation with other nonverbal intelligence measures that are assumed to tap reasoning, thinking, or the ability to acquire new knowledge (referred to as fluid intelligence). The sample Cronbach's alpha was .79.

Attention. The Conners' Teacher Rating Scales–Revised: Short Form (CTRS-R:S; Conners, 1997) was administered to evaluate problem behaviors by obtaining ratings from teachers. Each child's homeroom teacher was asked to complete the CTRS-R:S. The primary measure for this study was the ADHD index.

Cognitive Measures Used for Determining Correlates of Latent Class Membership

The cognitive measures assumed to be related to latent class membership assessed the storage of phonological information (STM, naming speed) and executive processing (inhibition, or random generation, and the executive component of WM). The convergence of the English and Spanish versions of these measures was established in an earlier study (see Swanson et al., 2015; Swanson, Kudo, et al., 2019, for further discussion), and a full description of each cognitive measure is provided in Swanson et al. (2015) and Swanson, Kong, et al. (2019).

Phonological Storage

STM. STM storage was measured using three tasks. The Forward Digit Span subtest of the Wechsler Intelligence Scale for Children–Third Edition (Wechsler, 1991) assessed STM because forward digit span is assumed to involve a subsidiary memory system (the phonological loop). The Word Span task, previously used by Swanson and Beebe-Frankenberger (2004), assessed the children's ability to recall increasingly long word lists (a minimum of two words to a maximum of eight words). The Phonetic Memory Span task assessed the children's ability to recall increasingly long lists of nonsense words (e.g., "des," "seeg," "seg," "geez," "deez," "dez") ranging from two to seven words per list. The sample Cronbach's alpha reliabilities for digit span, word span, and phonetic span were .82, .66, and .49 for the English measures and .70, .75, and .50 for the Spanish measures, respectively.

Naming speed. The Rapid Digit Naming and Rapid Letter Naming subtests of the Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 2000) were administered in English and Spanish versions to assess speed in recalling numbers and letters. The sample Cronbach's alpha reliabilities for the letter and number subtests were .96 and .95 for the English measures and .96 and .94 for the Spanish measures, respectively.

Executive Processing

Central executive. Three complex span measures (tasks that include both a process and a storage question) and an updating task were administered. The Conceptual Span, Listening Sentence Span, Digit Sentence Span, and Updating tasks were administered in English and Spanish to capture the executive component of WM (the tasks are described in detail in Swanson et al., 2015). The WM tasks required children to hold increasingly complex information in memory while simultaneously responding to a question about the task. Because WM tasks were assumed to tap a measure of controlled attention referred to as updating, an experimental updating task was also administered. The sample Cronbach's alpha reliabilities for the conceptual span, listening span, digit span, and updating tasks were .84, .85, .52, and .80 for the English measures and .83, .86, .52, and .70 for the Spanish measures, respectively.

Visual-spatial WM. This component of WM was measured using two tasks (see Swanson & Beebe-Frankenberger, 2004, for a review of these tasks). The Mapping and Directions Span task assessed whether the children could recall a visual-spatial sequence of directions on a map with no labels. The sample Cronbach's alpha reliabilities for the visual matrix and mapping/directions measures were .95 and .80, respectively.

Inhibition. The Random Number and Random Letter Generation tasks were administered to assess inhibition. Children were first asked to write numbers (or letters) as quickly as possible in a nonrandom, sequential order to establish a baseline. They were then asked to write numbers (or letters) as quickly as possible, out of order, in a 30-s period. Scoring included an index of randomness, information redundancy, and the percentage of paired responses to assess participants' tendency to suppress response repetitions. The sample Cronbach's alpha reliabilities for the letter and number tasks were .80 and .77 for the English measures and .81 and .82 for the Spanish measures, respectively.

Cutoff Point

To reduce the number of manifest variables, mean standard scores of the vocabulary (receptive, expressive), reading (word identification, comprehension), and math (calculation, applied problems) subtests served as the primary measures. The manifest variables used to determine discrete groups (vocabulary, reading, math, fluid intelligence, and attention) were dummy coded to reflect whether the normative score fell at or below the 16th percentile (1 = at or below the 16th percentile, 2 = above the 16th percentile). The 16th percentile (standard score of 85) was based on the normative scores from the standardized vocabulary, math, reading, and fluid intelligence measures. The CTRS-R:S is reported in T-scores, with high scores representing higher levels of inattention; the 16th percentile therefore corresponded to a T-score of 63.
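As a rough illustration of the recoding described above, the sketch below dichotomizes hypothetical standard scores at 85 (the 16th percentile) and reverses the direction for the inattention T-score, for which 63 marks the risk threshold. Column names and values are invented for the example and are not the study's data.

```python
import pandas as pd

# Illustrative recode of manifest variables into the two categories used for
# the LCA (1 = at or below the 16th percentile, 2 = above it).
df = pd.DataFrame({
    "reading_ss": [82, 101, 94],        # standard scores (M = 100, SD = 15)
    "math_ss": [88, 79, 110],
    "ctrs_inattention_t": [66, 48, 58]  # T-scores; higher = more inattention
})

for col in ["reading_ss", "math_ss"]:
    # A standard score of 85 corresponds to the 16th percentile.
    df[col + "_coded"] = (df[col] > 85).map({True: 2, False: 1})

# For the CTRS-R:S the scale is reversed: high T-scores indicate risk,
# so scores at or above a T-score of 63 are coded 1 (at risk).
df["inattention_coded"] = (df["ctrs_inattention_t"] < 63).map({True: 2, False: 1})

print(df)
```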

Procedures

Ten bilingual graduate students or research assistants trained in test administration tested all participants in their schools. One 45- to 60-min session was required for small-group test administration, and two 45- to 60-min sessions were required for individual test administration. Test administration was counterbalanced to control for order effects.

Statistical Analysis

To evaluate model fit, and because LCA is an exploratory analysis, a series of models was fit, varying the number of latent classes between one and seven (Nylund et al., 2007; see Masyn, 2013, for a comprehensive review). A combination of statistical indicators and substantive theory was used to decide on the best-fitting model. We used Mplus (Muthén & Muthén, 2012) and SAS (Lanza et al., 2011) software to examine the manifest variables and determine the number of latent classes. Models with different numbers of classes were compared using information criteria (i.e., the Bayesian information criterion [BIC], the Akaike information criterion [AIC], and the adjusted BIC). Lower values on these fit statistics indicate better model fit. Statistical model comparisons included likelihood ratio tests: the Lo-Mendell-Rubin test (LMR) and the bootstrap likelihood ratio test (BLRT). Both procedures compare neighboring class models (i.e., models with three vs. four classes, four vs. five, etc.) and provide p values indicating whether adding one more latent class yields a statistically significant improvement in fit. A nonsignificant p value for a model with k classes indicates that the model with k − 1 classes fits the data better. Among the information criterion measures, the BIC is generally preferred, as is the BLRT for statistical model comparisons (Nylund et al., 2007). Table 1 shows the model fit indices.
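As a sketch of how these information criteria compare competing models, the code below applies the standard definitions (AIC = −2LL + 2k; BIC = −2LL + k ln n) to hypothetical log-likelihoods and parameter counts for four- and five-class models; the numbers are illustrative, not the study's values. Note that the AIC and BIC values in Table 1 are much smaller than −2 × log-likelihood, which suggests they are reported on a deviance (G²) scale relative to a saturated model; that offset is the same for every model and does not change which model is preferred.

```python
import math

def aic(loglik, n_params):
    """Akaike information criterion; lower values indicate better fit."""
    return -2 * loglik + 2 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion; penalizes extra parameters by ln(n)."""
    return -2 * loglik + n_params * math.log(n_obs)

# Illustrative comparison of a 4-class vs. 5-class model for N = 394 children.
# Log-likelihoods and parameter counts here are hypothetical placeholders.
models = {"4 classes": (-1480.0, 35), "5 classes": (-1468.0, 44)}
for label, (ll, k) in models.items():
    print(f"{label}: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, 394):.1f}")
```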

Cognitive measures were reduced to latent constructs based on an earlier study (Swanson et al., 2015; Swanson, Kong, et al., 2019). Converting the measures to latent constructs eliminated measurement error and allowed for a focus on shared variance rather than isolated task variance. Latent scores were computed by multiplying the z score of the target variable by the standardized factor loading weight based on the total sample (see Nunnally & Bernstein, 1994, p. 508, for calculation procedures). Latent variables were specified as indicators of speed (naming speed for numbers and letters), inhibition (random generation of numbers and letters), STM (Forward Digit Span, Word Span, and Phonetic Memory Span), executive processing (Conceptual Span, Listening Sentence Span, Digit Sentence Span, Updating), and visual-spatial WM (Visual Matrix, Mapping and Directions).
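The latent-score calculation described here (each task's z score weighted by its standardized factor loading and summed within a construct) can be sketched as follows. The task scores and loadings are hypothetical, and readers should consult Nunnally and Bernstein (1994) for the exact procedure the authors followed.

```python
import numpy as np

# Hypothetical raw scores on the three STM tasks for five children and
# hypothetical standardized factor loadings; neither comes from the study.
raw = np.array([
    [12, 9, 6],
    [15, 11, 7],
    [10, 8, 5],
    [18, 13, 9],
    [14, 10, 6],
], dtype=float)                       # columns: digit span, word span, phonetic span
loadings = np.array([0.72, 0.65, 0.49])

# z score each task against the full sample, then weight by its loading
# and sum to obtain one composite STM score per child.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)
stm_latent = (z * loadings).sum(axis=1)
print(stm_latent.round(2))
```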

Finally, we used a multilevel logistic model, via SAS PROC GLIMMIX, to analyze differences between latent classes. The reference group was the latent class considered average achievers (LC1).
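As a rough analog of this analysis, the sketch below fits a single-level multinomial logistic regression on simulated data, with class 0 serving as the reference category in the way LC1 (average achievers) does in the study. It ignores the classroom-level nesting that the authors modeled with SAS PROC GLIMMIX, and all data, variable names, and coefficients are randomly generated placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated stand-in data: three z-scored cognitive predictors and a
# latent-class label per child (0 = average achievers, the reference group).
n = 394
X = sm.add_constant(rng.normal(size=(n, 3)))   # e.g., STM, naming speed, executive WM
y = rng.integers(0, 5, size=n)                 # class membership, 0-4

# Multinomial logit with class 0 as the reference category; each other class
# gets its own set of log-odds coefficients relative to the reference.
fit = sm.MNLogit(y, X).fit(disp=False)
odds_ratios = np.exp(fit.params)               # exponentiated coefficients
print(odds_ratios.round(2))
```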

Table 1. Fit Indices for Seven Latent Class (LC) Models.

Fit index            LC1        LC2        LC3        LC4        LC5        LC6        LC7
Log-likelihood       −1616.41   −1523.45   −1498.3    −1478.96   −1467.77   −1457.87   −1454.9
AIC                  474.3      306.38     274.08     253.41     249.02     247.21     259.27
BIC                  506.11     373.97     377.46     392.58     423.98     457.96     505.8
CAIC                 514.11     390.97     403.46     427.58     467.98     510.96     567.8
Adjusted BIC         480.73     320.03     294.96     281.53     284.37     289.79     309.08
Entropy              1          0.78       0.67       0.72       0.79       0.75       0.78
Degrees of freedom   247        238        229        220        211        202        193
LMR (p value)        —          0          .056       .049       .70        .53        .09
BLRT (p value)       —          0          0          0          .012       .051       .17

Note. In the original table, bold indicates the best-fitting model (the five-class model; see Results). AIC = Akaike information criterion; BIC = Bayesian information criterion; CAIC = Bozdogan's consistent AIC; LMR = Lo-Mendell-Rubin test; BLRT = bootstrap likelihood ratio test. CAIC and adjusted BIC are corrected for sample size.

 

 

300 Exceptional Children 86(3)

Results

LCA

The indices for determining the number of latent classes are reported in Table 1. Given these indices, the five- and six-class models were studied for interpretability. Both the LMR and the BLRT yielded nonsignificant p values for the six-class model, indicating that the five-class model provided an excellent fit to the data. The BIC was lower for the five-class than for the six-class model. In addition, sample proportions and item probabilities for the five-class model were more easily interpreted than those for the six-class model. The entropy for the five-class model was .79, an acceptable value (Nylund et al., 2007). The online supplement to this article reports tables presenting the proportion of the sample in each latent class (gamma estimates) as well as the probabilities (rho estimates) for each measure (manifest variable) for each response category as a function of latent class for the total sample (Supplement Table 2). Also reported in the supplement are the item probabilities for performance at or below the cutoff threshold of the 16th percentile (standard score of 85).

Sample Distribution of Latent Classes

Means and standard deviations for each of the normed classification measures as a function of the five latent classes are shown in Table 2. Effect sizes (ESs) comparing each latent class across all measures are shown in Table 3, and ESs at or greater than .80 were considered large.
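The formula behind the ESs in Table 3 is not reproduced here, but a common choice for comparing two groups is Cohen's d with a pooled standard deviation. The sketch below illustrates the metric using the English reading means, SDs, and ns reported in Table 2 for LC1 and LC2; treat it as an example of the calculation, not a reproduction of the authors' Table 3 values.

```python
import numpy as np

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# English reading: LC1 (average achievers, n = 224) vs. LC2 (poor achievers, n = 13)
print(round(cohens_d(105.70, 12.13, 224, 77.36, 12.37, 13), 2))
```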

Table 2. Normative Descriptive Scores as a Function of Latent Class (LC). Values are M (SD); LC1 (n = 224), LC2 (n = 13), LC3 (n = 30), LC4 (n = 66), LC5 (n = 61).

Manifest variables
E vocabulary: LC1 105.56 (14.50); LC2 82.14 (5.38); LC3 94.50 (11.89); LC4 72.96 (8.65); LC5 79.74 (15.96)
S vocabulary: LC1 83.74 (14.07); LC2 81.87 (9.80); LC3 73.30 (9.00); LC4 90.68 (13.31); LC5 82.85 (11.6)
E reading: LC1 105.70 (12.13); LC2 77.36 (12.37); LC3 85.74 (13.42); LC4 98.24 (9.34); LC5 78.52 (10.12)
S reading: LC1 107.16 (12.13); LC2 78.23 (9.02); LC3 79.30 (5.09); LC4 114.22 (12.62); LC5 100.91 (13.99)
E math: LC1 103.55 (10.14); LC2 77.84 (8.52); LC3 99.85 (8.69); LC4 95.95 (9.54); LC5 87.73 (11.99)
S math: LC1 100.28 (9.34); LC2 80.25 (6.18); LC3 90.22 (13.42); LC4 103.54 (8.90); LC5 94.00 (10.67)
Fluid intelligence: LC1 105.63 (14.78); LC2 87.45 (8.50); LC3 97.90 (15.74); LC4 93.33 (16.18); LC5 88.32 (14.73)
Inattention: LC1 50.05 (9.26); LC2 59.30 (3.87); LC3 54.68 (11.11); LC4 48.60 (7.81); LC5 56.15 (11.54)

Correlated variables
E STM: LC1 0.49 (1.54); LC2 −0.67 (1.30); LC3 −1.37 (1.36); LC4 −0.26 (1.63); LC5 −0.88 (1.38)
S STM: LC1 0.36 (1.61); LC2 −0.65 (1.42); LC3 −1.43 (1.59); LC4 −0.01 (1.71); LC5 −0.51 (1.4)
E speed: LC1 −0.51 (1.00); LC2 1.57 (1.61); LC3 0.88 (2.13); LC4 0.27 (1.55); LC5 1.07 (2.19)
S speed: LC1 −0.16 (1.28); LC2 1.04 (2.53); LC3 1.19 (2.42); LC4 −0.48 (1.09); LC5 0.50 (1.74)
E inhibition: LC1 0.14 (0.97); LC2 0.09 (0.78); LC3 −0.24 (0.98); LC4 −0.18 (1.05); LC5 −0.15 (0.82)
S inhibition: LC1 0.10 (0.72); LC2 −0.25 (0.82); LC3 −0.29 (0.64); LC4 −0.20 (0.71); LC5 0.03 (0.59)
E exec WM: LC1 0.50 (1.48); LC2 −0.86 (0.51); LC3 −0.61 (1.00); LC4 −0.68 (0.93); LC5 −0.75 (1.09)
S exec WM: LC1 0.20 (1.61); LC2 −1.33 (1.26); LC3 −1.37 (1.11); LC4 0.47 (1.68); LC5 −0.48 (1.32)
Visual-spatial WM: LC1 0.21 (1.16); LC2 −1.23 (0.50); LC3 −0.30 (0.87); LC4 0.03 (1.27); LC5 −0.37 (1.01)

Note. E = English; S = Spanish; STM = short-term memory; exec WM = executive component of working memory.

Scholarly Writing & Styles

Scholarly writing is objective, addresses key stakeholders, clearly states a problem (or problems), explains the significance of the stated problem, and is logical and organized. The aim of scholarly writing is to make an argument that is supported with evidence. The peer-reviewed journal articles you have found in your library searches are examples of scholarly writing. To be an effective change agent and a leader in the field of education, it is crucial that you have well-developed scholarly writing abilities.

As you have explored your selected case study’s documents, you have read a variety of types of writing that differ from scholarly writing. For example, you may have read blog posts, letters to the editor, newspaper articles, and government reports.

Reflect on the different types of writing used in the resources that you identified in the Looking Ahead at the end of Assignment 3. Which resources reflected the characteristics of scholarly writing, and which did not? Your role in education will likely require you to not only read a variety of types of writing, but to use a variety of writing types in your own communications. As you may have noticed in the case study documents and the resources you have been exploring, the type of writing you use depends on your audience and the purpose of your communication.

For this Assignment, create a simple message related to the case study. In addition, identify three different audiences to which to communicate the message. These audiences may be extracted from the case study documents, or you may identify different audiences appropriate for the message.

Consider how you might convey the same message in writing to the three different audiences for your case study.

Issues in K-12 Education Case Study Document 6

English Language Learner Instruction and Twenty-First Century Education

This is a simulated article from a leading educational journal. The target audience is K-12 teachers and administrators, as well as prospective teachers who are still studying. The article is about standards-based education in the twenty-first century and its impact on English language learners (ELLs). The author is an instructor who is both enthusiastic and anxious about the implementation of rigorous new academic expectations for ELLs.

English language learners (ELLs) are defined as students who learn English as a non-native language. As an ELL instructor, I know firsthand that students and instructors face unique challenges related to teaching and learning complex academic skills, in addition to mastering the English language. Standards-based instruction offers opportunities to incorporate ELLs into the general education population by diminishing the achievement gap between ELL students and those for whom English is their first language. However, uniform academic standards also present a great challenge (Maxwell 2012).

Although ELL students belong to one common category, that of non-native speakers, they are far from a homogeneous group. Not only do they speak many different first languages, but they also come from different cultural backgrounds and possess widely different academic skills. ELL students are typically categorized based on their need for language instruction rather than their academic ability. In addition, ELL students with different levels of English proficiency are often placed in classes with native English speakers. I've witnessed the resulting challenges. We teachers try to strike the delicate balance between appreciating the individual talents and needs of students and providing an entire classroom with standards-based instruction.

One important dilemma in the education of ELLs centers on the difference between academic English and social English. Social English is essential for everyday, basic communication. Academic language is the language of formal texts and scholarly discourse. Academic language involves precise terminology rather than vague, general words or slang. Academic vocabulary is often more abstract than social or survival vocabulary. Academic discourse requires mastery of grammar and usage.

In the past, social English was typically the main focus of instruction for beginning ELLs (Colorin Colorado 2014). Students were not introduced to academic English until they were proficient in social English. This approach made it difficult for many students to develop grade-appropriate content knowledge in core academic subjects because they lacked the vocabulary necessary for comprehension and expression (Illinois State University 2014).

Today, there is an increased emphasis on preparing all students to become college and career ready. Academic standards are rich and rigorous. One specific area of emphasis is instruction in “Tier 2” academic vocabulary, defined as general academic words that are used frequently across different subject and content areas (Cruz 2004).

There is much that English language educators can do to give our students the tools they will need to acquire these more rigorous academic skills and to perform well on standardized assessments. We can teach Tier 2 academic vocabulary. We can work with other content experts to help students master content-specific vocabulary and knowledge. We can help students distinguish between casual, social speech and the more formal language of college and careers. We can teach the language of higher-order thinking skills, such as critical thinking and problem solving (Maxwell 2013).

For example, one method of incorporating social and academic language into a lesson is to present students with two documents: one using formal language and the other informal. The content should be similar and should allow students to identify the differences in language, presentation, and purpose.

Helping a student achieve English language proficiency while simultaneously delivering discipline-specific instruction presents challenges to educators. Students do not learn to communicate in carefully segmented blocks, but in a fluid, ongoing process that develops over time (Association for Supervision and Curriculum Development 2012). We, as educators, need to carefully consider different strategies for adapting standards-based education to accommodate such a wide range of abilities and understanding.

The shift toward heightened expectations of ELL students is a welcome reform. The goal of immersing ELL students in academic content as early as possible is laudable, but it is important to accommodate these students and for educators to develop assessments that accurately reflect the abilities of ELLs. It is only then that the achievement gap can be identified, solutions can be discussed, and new strategies can be implemented.

If our state adopts rigorous and broad standards, we must support students and educators in meeting them. According to a 2011 American Community Survey, the number of Americans who speak a language other than English at home “is now 20.8 percent—fully one-fifth of all people living in the U.S” (Badger 2013). The implementation of more rigorous standards must be accompanied by the allocation of additional resources. Only then will we be able to prepare all of our students, whatever their first language, to become highly functioning members of our knowledge society.

References

Association for Supervision and Curriculum Development. (2012). Fulfilling the promise of the Common Core State Standards. Retrieved from http://webcache.googleusercontent.com/search?q=cache:lCguMXWlKL4J:educore.ascd.org/resource/download/get.ashx%3Fguid%3D1d60f46d-b786-41d1-b059-95a7c4eda420+&cd=1&hl=en&ct=clnk&gl=us

Badger, E. (2013, August 6). Where 60 million people in the U.S. don't speak English at home. The Atlantic Cities. Retrieved from http://www.theatlanticcities.com/arts-and-lifestyle/2013/08/geography-americas-many-languages/6438/

ColorinColorado. (2014). Academic language and English language learners. Retrieved from http://www.colorincolorado.org/webcasts/academiclanguage/

Cruz, M. C. (2004). Can English language learners acquire academic English? Retrieved from http://www.csun.edu/~krowlands/Content/Academic_Resources/Language/About%20Language/Cruz-ELL%20Academic%20Language.pdf

Illinois State University. (2014). Session 4: Academic vocabulary. Retrieved from http://webcache.googleusercontent.com/search?q=cache:yskdQMgepukJ:education.illinoisstate.edu/downloads/casei/AV-3-2-14%2520academic-vocabulary-6-12-ela-content-area-teachers.ppt+&cd=1&hl=en&ct=clnk&gl=us

Lu, A. (2014). States reconsider Common Core tests. The Pew Charitable Trusts. Retrieved from http://www.pewstates.org/projects/stateline/headlines/states-reconsider-common-core-tests-85899535255

Maxwell, L. (2012, April 23). Language demands to grow for ELLs under new standards. Education Week. Retrieved from http://www.edweek.org/ew/articles/2012/04/25/29cs-ell.h31.html

Maxwell, L. (2013, January 15). Three districts test model Common-Core unit for ELLs. Education Week. Retrieved from http://www.edweek.org/ew/articles/2013/01/16/17ellstanford_ep.h32.html

Murphy, P., Regenstein, E., & McNamara, K. (2012). Putting a price tag on the Common Core: How much will smart implementation cost? Thomas B. Fordham Institute. Retrieved from http://webcache.googleusercontent.com/search?q=cache:zDdlil7L9s4J:files.eric.ed.gov/fulltext/ED532509.pdf+&cd=2&hl=en&ct=clnk&gl=us

National Conference of State Legislatures. (2014). Costs associated with the Common Core State Standards. Retrieved from http://www.ncsl.org/research/education/common-core-state-standards-costs.aspx

National Council of Teachers of English. (2008). English language learners. Retrieved from http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&sqi=2&ved=0CCoQFjAA&url=http%3A%2F%2Fwww.ncte.org%2Flibrary%2FNCTEFiles%2FResources%2FPolicyResearch%2FELLResearchBrief.pdf&ei=XHEOU7vTObLQsATMyoGAAg&usg=AFQjCNFlbkkyWn55-dRTIlTNW5Awb2-_XA&sig2=n6EKifqcao1jxwYXoehKbw&bvm=bv.61965928,d.cWc

The National Institute for Health and Human Development. (2005). Autism overview: What we know. Retrieved from http://eric.ed.gov/?id=ED486273

Plank, D. (2011). ELL assessment: One size does not fit all. Education Week. Retrieved from http://www.edweek.org/ew/articles/2011/08/31/02plank.h31.html

Robertson, K. (2006). Increasing academic language knowledge for English language learner success. Retrieved from http://www.colorincolorado.org/article/13347/


Technology and Online Learning Paper Assignment

Overview

 

As the classroom environment expands and shifts to accommodate student and societal needs, online learning has become an increasingly important topic in education. Effective educators must be comfortable with this type of instruction to ensure the continuity of instruction when residential instruction is not possible or when students learn better outside of the school building.

 

Instructions

 

For this assignment, you will write a 1,000-word research paper in current APA format that describes the principles of online learning and online instructional strategies. This paper will contain five sections:

 

  1. Introduction: Describe online learning, the necessity of online learning, and the technological skills required for effective online teaching.

  2. Principles of Online Learning: Identify and describe at least three research-based principles of online learning. Consider what is required to deliver an effective lesson in an online format and consider what is required for a student to effectively learn in an online environment. At least two citations from scholarly sources should be included in this section.

  3. Online Instructional Strategy 1: Identify and describe one research-based strategy for effective online instruction. Use scholarly resources to support the use of this strategy in an online educational environment. At least two citations should be included in this section. Emphasize the potential of this strategy to improve teaching, learning, research, and/or communication.

  4. Online Instructional Strategy 2: Identify and describe a second research-based strategy for effective online instruction. Use scholarly resources to support the use of this strategy in an online educational environment. At least two citations should be included in this section. Emphasize the potential of this strategy to improve teaching, learning, research, and/or communication.

  5. Conclusion: Summarize the principles of online learning and the two online instructional strategies. Use scriptural integration to emphasize the importance and/or the benefits of online learning.

 

A title page and a reference page are required for this assignment.

Are Bianca’s Goals SMART?

Learning how to develop goals as part of the individualized education program (IEP) is a required skill for special education professionals. IEP goals should be "SMART" and based on good educational practice.

Post an initial post addressing each of the following:

  • In one paragraph, assess the effectiveness of Mr. Franklin and Mrs. Mills’ practices for measuring and evaluating Bianca’s progress. Use support from at least two scholarly sources, one of which may be the course textbook.
  • In one paragraph, explain Bianca's present levels of performance in a selected content area. You will need to refer to the Instructor Guidance for specific information.
  • In a list, develop at least three recommended goals for Bianca based on her current performance level and identified areas of need. Be sure that your suggested goals are SMART goals and that they include the five required components that make a goal SMART. Include a justification for why you recommend these goals, drawing support from at least two scholarly sources, one of which may be the course textbook.

 

Text

Cohen, L., & Spenciner, L. (2009). Teaching students with mild and moderate disabilities: Research-based practices (2nd ed.). Upper Saddle River, NJ: Pearson.

Article

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57(12), 1060-1073. doi:10.1037/0003-066X.57.12.1060

Bremer, C. D., Kachgal, M., & Schoeller, K. (2003). Self-determination: Supporting successful transition. Research to Practice Brief: Improving Secondary Education and Transition Services Through Research, 2(1). Retrieved from http://www.ncset.org/publications/viewdesc.asp?id=962

Multimedia

ProfKelley. (2010). Screencasting – Creating a narrated PowerPoint with Jing [Video file]. Retrieved from http://www.youtube.com/watch?v=npMuCWOvmVE

Multimedia

Lesson Plan – Elementary (download)

Lesson Plan – Secondary (download)

Websites

Bridges 4 Kids. (2015, July). Evidence based practice. Retrieved from http://www.bridges4kids.org/articles/2006/8-06/cec8-06.html

Classroom-assessment techniques: A video collection – Education Week Teacher. (n.d.). Retrieved from http://www.edweek.org/tm/articles/2014/03/05/ndia_cm_videos.html

Common Core State Standards Initiative. (n.d.). Retrieved from http://www.corestandards.org/

How to's: The Present.me blog. (n.d.). Retrieved from http://blog.present.me/how-tos/

Jing (https://www.techsmith.com/jingv.html)

Personnel Center (http://personnelcenter.org/choose.cfm)

Present.me (https://present.me/)

Tomlinson, C. A., & Moon, T. R. (2013). Differentiation: An overview (Chapter 1). In Assessment and student success in a differentiated classroom. Retrieved from http://www.ascd.org/publications/books/108028/chapters/Differentiation@-An-Overview.aspx

Writing effective lesson plans. (2016). Retrieved from http://www.prometheanplanet.com/en-us/professional-development/best-practice/lesson-plans/