Using A-B-A-B and Multiple Baseline Designs

For this Assignment, you will defend the use of multiple single-subject design methods and analyze study data to suggest interventions.

To prepare:

· Review the data provided in the articles by Ciftci, H. D., & Temel, Z. F. (2010) and Evmenova, A. S., Graff, H. J., Jerome, M. K., & Behrmann, M. M. (2010). Note the use of A-B-A-B and multiple baseline approaches in these studies and how you might defend the use of these approaches.


· Review the course text readings for this module, as well as the media. Think about the concepts and ideas present in these Learning Resources and how they might inform your interpretation of the data in the research studies.

Compose a 4–5 page paper in which you:

· Defend the use of the A-B-A-B and multiple baseline designs in the Ciftci and Temel (2010) and Evmenova et al. (2010) studies. In your response, provide a rationale for the use of a different approach in each study.

· Based upon the graphics, analyze the data presented in each study. Explain what the data tell you, your interpretation of the results, and the interventions the data suggest.

Learning Resources

Note: To access this module’s required library resources, please click on the link to the Course Readings List, found in the Course Materials section of your Syllabus.

Required Readings

Florian, L. (Ed.). (2014). The SAGE handbook of special education (2nd ed.). London, England: Sage.

  • Chapter 22, “The Applied Science of Special Education: Quantitative Approaches, the Questions They Address, and How They Inform Practice” (pp. 369–388)

    Focus on quantitative designs and why they are key for research in the field of SPED.

Rumrill, P. D., Cook, B. G., & Wiley, A. L. (2011). Research in special education: Designs, methods, and applications. Springfield, IL: Charles C. Thomas.

  • Chapter 6, “Quantitative Research Designs” (pp. 118–152)

    Focus on the description of single-subject research. Consider the most important aspects of this approach to research. Review the quality indicators of single-subject research.

O’Neill, R. E., McDonnell, J. J., Billingsley, F. F., & Jenson, W. R. (2011). Single case research designs in educational and community settings. Upper Saddle River, NJ: Pearson.

  • Chapter 2, “Defining What to Measure and How to Measure It” (pp. 15–38)

    Focus on defining the target behavior, dimensions of the behavior to be measured, and measurement procedures. Consider the importance of consistency in measurement.

  • Chapter 3, “Internal and External Validity and Basic Principles and Procedures of Single Case Research (SCR) Designs” (pp. 39–48)

    Focus on definitions of internal and external validity as they relate to single-subject research. Pay particular attention to common basic principles. Study the procedures of single-subject designs.

  • Chapter 4, “Making Sense of Your Data: Using Graphic Displays to Analyze and Interpret It” (pp. 49–66)

    Focus on the purposes of graphic displays of data. Note the characteristics and the process of analyzing the data that are presented. Review the questions that guide a comprehensive analysis.

  • Chapter 5, “Common Steps and Barriers You May Have to Deal With in Conducting a Research Study” (pp. 67–78)

    Focus on the common steps and challenges to conducting a research study. Consider methods for overcoming challenges in the design of your own research.

Additional Resources

Although not every Additional Resource is required reading, it is highly recommended that you read all of them. Be sure to make note of the Additional Resources that align with the content and focus of the Discussions and Assignments.

Note: The resources were selected for the quality of the information and examples that they contain and not the date of publication.

Ciftci, H. D., & Temel, Z. F. (2010). A comparison of individual and small-group instruction with simultaneous prompting for teaching the concept of color to children with a mental disability. Social Behavior & Personality: An International Journal, 38(4), 479–493.

Retrieved from the Walden Library databases.

Focus on the approach to single-subject research. Note that inter-subject multiple probing was used in this investigation. Pay specific attention to the measurement of the subjects’ developmental levels.

Evmenova, A. S., Graff, H. J., Jerome, M. K., & Behrmann, M. M. (2010). Word prediction programs with phonetic spelling support: Performance comparisons and impact on journal writing for students with writing difficulties. Learning Disabilities Research & Practice, 25(4), 170–182.

Retrieved from the Walden Library databases.

Focus on the changing conditions single-subject design. Study how it was used and replicated across subjects. Read about social validity.

Parker, R. I., Vannest, K. J., & Brown, L. (2009). The improvement rate difference for single-case research. Exceptional Children, 75(2), 135–150.

Retrieved from the Walden Library databases.

Focus on the style of field test for summarizing single-case research data. Recognize the improvement rate difference. Consider how it is calculated.

American Institutes for Research. (n.d.). National Center for Technology Innovation (NCTI). Retrieved from http://www.air.org/project/national-center-technology-innovation-ncti/

Focus on the real world examples of single-subject research designs. Note the specific elements. Review the description of single-subject research.

Required Media

Laureate Education (Producer). (2012). Introduction to single-subject design [Video file]. Baltimore, MD: Author.

Note: The approximate length of this media piece is 3 minutes.

In this media program, Dr. Terry Falcomata explains Single-Subject Design.

Focus on single-subject design as a quantitative research approach that allows researchers, clinicians, and educators to establish experimental control in answering a question of some clinical or educational relevance. Reflect on how the use of single-subject design can demonstrate that an intervention or program reliably produces positive changes in important behaviors or skills.


Laureate Education (Producer). (2012). A-B-A-B single-subject design [Video file]. Baltimore, MD: Author.

Note: The approximate length of this media piece is 4 minutes.

In this media program, Dr. Terry Falcomata explains the A-B-A-B Single-Subject Design.

Focus on the example of the experiment that uses an A-B-A-B single-subject design. Note that it is sometimes referred to as a withdraw or reversal design. Consider how it uses repeated measures of a behavior strategically across baseline and intervention conditions.


Learning Disabilities Research

Learning Disabilities Research & Practice, 25(4), 170–182. © 2010 The Division for Learning Disabilities of the Council for Exceptional Children

Word Prediction Programs with Phonetic Spelling Support: Performance Comparisons and Impact on Journal Writing for Students with Writing Difficulties

Anna S. Evmenova, Heidi J. Graff, Marci Kinas Jerome, and Michael M. Behrmann
George Mason University

This investigation examined the effects of currently available word prediction software programs that support phonetic/inventive spelling on the quality of journal writing by six students with severe writing and/or spelling difficulties in grades three through six during a month-long summer writing program. A changing conditions single-subject research design was used and replicated across the participants. Using a daily writing prompt, students alternated between Co:Writer, WordQ, and WriteAssist word prediction programs. The results provided evidence for the effectiveness of various word prediction programs over word processing, and demonstrated improvements in spelling accuracy across conditions. Relative gains in the total number of words and composition rate were modest for the majority of the participants and should be interpreted with caution due to several methodological issues. The social validity interviews revealed that all students enjoyed the word prediction programs and found them beneficial. Study limitations and recommendations for future research are discussed.

In recent years there has been an increasing interest in technology applications for students with high-incidence disabilities, including students who struggle with writing. Several applications have been discussed in the literature (Edyburn, 2005; Higgins & Raskind, 2004; Lewis, 1998; MacArthur, Ferretti, Okolo, & Cavalier, 2001). Writing is a complex skill and students may experience difficulties with a variety of aspects, including mechanics and written content expression; however, computer-related technologies can enable students to bypass their deficits and support them through all stages of the writing process (Behrmann & Jerome, 2002; Lewis, 1998; Williams, 2002; Zhang, 2000).

To date, accumulated research has shown the effectiveness of word processors for ease of text alteration and manipulation (Lewis, Ashton, Haapa, Kieley, & Fielden, 1998; MacArthur & Graham, 1987; MacArthur & Schwartz, 1990; Zhang, 2000), as well as spell checkers and other aids for ease of editing (Ashton, 1999; MacArthur, Graham, Haynes, & DeLaPaz, 1996; McNaughton, Hughes, & Ofiesh, 1997; Montgomery, Karlan, & Coutinho, 2001). Text-to-speech software programs that allow users to hear written products have been found to be effective for accuracy monitoring (MacArthur, 1998, 1999; Raskind & Higgins, 1995). Outlining and brainstorming programs that allow visual representation of ideas have been shown to support users in planning and organizing their writing (Anderson-Inman & Ditson, 1999; Blair, Ormsbee, & Brandes, 2002; Sturm & Rankin-Erickson, 2002). In addition, the use of speech recognition programs, which transform spoken words into text, has resulted in longer, more complex and accurate writing passages by students with learning disabilities and/or writing difficulties (De La Paz, 1999; Higgins & Raskind, 2000; MacArthur & Cavalier, 2004; Quinlan, 2004; Raskind & Higgins, 1999).

Requests for reprints should be sent to Anna S. Evmenova, George Mason University, 4400 University Dr., MS 1F2, Fairfax, VA 22030. Electronic inquiries should be sent to aevmenov@gmu.edu.

Despite the available research, the area of assistive technology for students with mild disabilities is still not fully developed (Edyburn, 2005). While some evidence of the effectiveness of the technology exists, few studies have investigated its use by students with high-incidence disabilities in authentic learning environments, such as schools or other educational programs (Blackhurst, 2005; Edyburn, 2001). This is true for the utilization of word prediction software for students with learning disabilities. Word prediction programs were originally created for users with physical disabilities, and were designed to increase typing rate and decrease spelling errors (Tumlin & Heller, 2004). While the reduction in keystrokes addresses the needs of students with physical disabilities, additional features of word prediction software can be helpful in compensating for word recall, spelling, and handwriting difficulties faced by students with learning disabilities (Lewis, 1998). With a word prediction program, the user is offered a list of word choices as she/he begins to type the word. Word suggestions appear before or after the first letter of the word is entered. The word “prediction” feature allows for the words to be generated based on the lexical and grammatical context (MacArthur, 1999; Sitko, Laine, & Sitko, 2005). As with other assistive technology solutions, word prediction programs may yield writing products of higher quality if the features of the program are coordinated with the user’s abilities and needs (Ashton, 2005; Sitko et al., 2005). However, despite being a promising application, research on the use of word prediction software for students with writing difficulties is limited (MacArthur et al., 2001).

Previous Research

A majority of word prediction studies were conducted in the 1990s. Several recent studies have provided limited support for the positive effects of word prediction programs on the writing readability/legibility and spelling of students with learning disabilities (Handley-More, 2003; MacArthur, 1998, 1999; Williams, 2002). In his original study, MacArthur (1998) compared the effects of speech synthesis and word prediction to word processing on the composition rate, spelling, and legibility of written dialogue journal entries. For four out of five students, the features offered by the My Words word prediction program resulted in improved legibility and spelling in dialogue journal entries. The composition rate was not affected by the treatment. Later, MacArthur (1999) extended his original study by using a more sophisticated Co:Writer word prediction program. In this latter study, students alternated between handwriting, word processing, and word prediction during free journal writing activities. Use of the word prediction program yielded improvement in the proportion of correctly spelled words for one out of three students and decreased the composition rate for two students. Technology had no effect on the proportion of legible words during free journal writing.

Furthermore, in the same article MacArthur (1999) reported the results of a second study conducted with the same participants and in the same conditions but engaging in a more demanding writing task. Students wrote from dictation, thus increasing vocabulary demands. The results of the second study demonstrated improvements across legible and correctly spelled words as well as a reduced composition rate for two out of three students. Similarly, Williams (2002) and Handley-More (2003) reported relative improvements in the number and variety of words, as well as in the percentage of legible and correctly spelled words. Thus, while existing research is somewhat ambiguous about the value of word prediction for the composition rate and legibility of writing by students with learning disabilities, this technology has been found to improve students’ spelling accuracy (Mirenda, Turoldo, & McAvoy, 2006).

MacArthur (1998, 1999) identified the lack of prediction from phonetic spelling as a contributing factor to the mixed results of his studies. Users had to know the exact beginning letter of the word in order for it to appear in the prediction list. Students with severe spelling problems did not benefit from the word prediction programs since they often did not know the correct initial letters of words (MacArthur, 1998). Word prediction technology has continued to progress so that software programs now recognize phonetic spelling in addition to conventional spelling. Relevant predictions are now possible based on the phonetic spelling of a word. Consequently, students with learning disabilities may experience more success using current, more advanced word prediction programs than was found in previous studies (MacArthur, 1998, etc.).

Research Questions

The purpose of this study was to examine the effects of the new generation of word prediction software programs that support phonetic spelling on the length, spelling accuracy, and composition rate of journal writing by students with severe writing and/or spelling difficulties. This study compared three different word prediction programs and took student preferences into account. The study was designed to replicate and extend the work of previous researchers (MacArthur, 1998, 1999). It addresses the following research questions:

1. Are there differences in length, spelling accuracy, or rate of journal writing for students with writing difficulties across different word prediction programs?

2. What were students’ reactions to using different word prediction programs?

A changing conditions single-subject research design was used and replicated across six students.

METHOD

Participants

Participants were students in grades 3 through 6 attending a 4-week-long technology-based summer writing camp located at a major northeastern university. The camp uses technology and innovative computer software programs to enhance the writing skills of students identified by their parents and teachers as having difficulties with the writing process. The majority of the participants had been identified by their schools as having learning disabilities. Six of the 15 campers were identified as candidates for word prediction intervention based on informal writing assessments as well as writing samples collected prior to the study. The writing assessments and samples provided data related to students’ phonetic spelling, vocabulary, word recall, keyboarding skills, and reading ability to differentiate between words on a prediction list.

It was critical to consider the effects of word processing and word prediction in light of students’ keyboarding skills and familiarity with the Word system. For this reason, parents were asked to report students’ level of familiarity with the computer and various software programs in the Parent Questionnaire that was distributed prior to the beginning of camp. According to this information, Students B, M, D, and J were sufficiently familiar, while Students C and R were comfortable enough with word processing to perform basic writing activities. Throughout the camp, due to its technological focus, all campers received more training and extended practice in basic word processing features.

Students’ keyboarding skills were determined via the TypetoLearn 3 program. This software reports students’ typed words per minute (wpm), accuracy percentage score, and the number of errors (e.g., Warp Speed exercise) during lesson and practice sequences. In addition, teachers can set a speed goal in wpm for each student. At the beginning of the research study, Students B, C, M, D, and J were at the Ready for More Challenge/Intermediate level with the speed goal of 15 wpm, demonstrating their best speed as 8, 7, 10, 8, and 7 wpm, respectively. Student R scored in the Young/Easily frustrated level with 6 words per minute. Thus, at the beginning of the study, the participants demonstrated somewhat limited typing skills, ranging from 6 to 10 wpm on typing software, which usually overestimates typing skill as compared to skills demonstrated during writing composition. Students practiced their typing skills with TypetoLearn 3 and writing composition projects throughout the camp.

TABLE 1
Demographic Data on Study Participants

Participant   Gender   Age   Ethnicity   Primary Disability   IEP Case   Writing Accuracy*
Student B     M        12    WH          SLD                  WE, M      57%
Student C     M         9    WH          SLD                  WE, RC     54%
Student M     M        10    AS          AST                  N/A        71%
Student D     M         9    WH          ADD                  N/A        67%
Student J     M        11    WH          SLD                  WE         41%
Student R     M         9    WH          SLD                  WE, M      49%

Note: Age = age at the beginning of the study; WH = White; AS = Asian; SLD = specific learning disabilities; ADD = attention deficit disorder; AST = autism spectrum tendencies; WE = writing expression; RC = reading comprehension; M = math. *Writing spelling accuracy level is based on the informal writing assessment prior to the beginning of the camp.

The following is a description of each individual student including their age, ethnicity, disability category, special education services, and writing abilities (see Table 1). Parents were required to report this information on parent questionnaires prior to the camp registration. Students’ writing abilities were determined based on informal writing assessments conducted before the beginning of the camp as well as on writing samples provided by parents. While campers were not required to provide copies of psychological tests or official achievement reports, some parents attached this information to the application packages. Subjects were predominantly Caucasian and “middle class.”

Student B was a rising seventh grader from a middle-class family who received special education services in an inclusive general education classroom. According to his parents and a camp teacher, Student B had difficulties with planning and organizing his writing. It was observed that he fixated on spelling and lost track of his thoughts. He also tended to spell words phonetically. According to the WISC-III, he performed in the average range for oral language and reading and the low range for written expression.

Student C was a rising fourth-grade boy from a high-income family. Student C received services in both special education resource and inclusion settings. According to his mother, the writing process was “physically difficult for him to do and for others to read” the finished product. He had creative ideas that came from his well-developed imagination, but putting them down on paper was a challenge for him.

Student C was also identified as having autism spectrum tendencies. He had very strong opinions about what technology he wanted and/or refused to use for his writing. According to his report cards, Student C continuously needed improvement in reading, spelling, and written communication. At the time of the camp, his IEP goal in writing was to expand three supporting sentences in four out of five written assignments.

Student M was a rising fifth-grade boy from a high-income family. Student M was recommended for the summer camp because of his reluctance toward writing. He had also been tested and found to have traits of autism spectrum disorder at a very high-functioning level. Student M was reported to be easily frustrated with the writing process when he could not think of the correct spelling. He found it hard to focus, organize, and convey his ideas in a cohesive document. He was considered a study participant because of his phonetic spelling and his difficulty in finding the right word to convey his ideas.

At the age of 9, Student D was a rising fourth-grade boy from a middle-class family. He received up to 8 hours a week of extra help in the general education classroom for difficulties with writing. His mother shared with the researchers that Student D “did not like to write and did so as little as possible.” His biggest problem was attending to a task; therefore, he was considered a candidate for word prediction use to provide him with additional word choice support.

Student J was a rising sixth-grade boy from a medium-income family. He received special education services in inclusive settings. Student J was reported to have severe spelling difficulty; therefore, he was considered a good candidate for the use of word prediction programs. His mother requested all writing assignments to be completed on a computer with additional help for “spelling issues.” Student J’s performance on the TOWL (Test of Written Language) could not be scored because his paragraph could not be deciphered. His reading and comprehension were within the average range, slightly above grade level. His writing was reported to be well below grade level (early fourth-grade level).

Student R came from a middle-income family. This rising fourth-grade student received special education services in a resource classroom. After careful observations and consultations with his family, school teachers, and camp instructor, Student R was determined to be a study candidate to help him overcome his hesitancy to write. In addition, utilizing computer programs for writing addressed his fine motor and handwriting challenges.

Materials

All Conditions

In all conditions students were asked to write daily for 20 minutes in response to the journal entry prompt. The purpose of such journal writing is to provide students with more writing opportunities and daily practice (Reagan, Mastropieri, & Scruggs, 2005). It usually is free of any kind of evaluation (Williams, 2002). Personal narrative prompts were randomly assigned to students from a pool of 30 preselected prompts. They were designed to be interesting and unbiased based on gender, ethnicity, and socioeconomic status (e.g., “What is your favorite part of the day?,” “What is something that makes you feel happy or sad?,” etc.).

Baseline Condition

In the baseline condition, students used Microsoft Word for journal writing. Students were not able to use spell checkers or grammar checkers during writing. All camp participants were characterized by poor handwriting skills that severely interfered with their ability to write. Because of this, handwriting on paper was avoided as a baseline measure to control for the novelty effect of technology-medium integration. Clark (1983) noticed an increased level of effort and focus in research subjects as they were introduced to novel media. This increased attention seemed to diminish as they became more familiar with the technology medium. Thus, it was critical to compare students’ writing performance using word prediction programs to a similar technology-based instrument such as word processing.

Treatment Conditions

Students used three word prediction programs: Co:Writer, WordQ, and WriteAssist. With these programs, text is entered either in a separate program application or in Microsoft Word. As each letter is typed, the list of predicted words appears in a small window located by the cursor. If the intended word appears in the list, a student can select the word by clicking on it or typing the number for that word. The selected word is automatically inserted into the sentence. If the intended word does not appear in the predicted list, a student continues to type. All three programs provide speech feedback, so students have the option to hear predicted words before selecting one of them. These programs also have an option for the teacher to decide how many words will appear in the prediction list (usually between one and nine). While the number of predicted words is usually based on an individual student’s needs, in this research project the number of predicted words was limited to five for all programs. All three programs have spell checkers built into them. However, for the sake of this study the spell checker option was disabled in all word prediction programs as well as in the word processing. While these three programs are similar in their features, they are slightly different in the level of sophistication and the size and diversity of the dictionary.
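To make these mechanics concrete, here is a minimal, hypothetical Python sketch (not taken from Co:Writer, WordQ, or WriteAssist; the tiny dictionary and function names are invented for illustration). It shows a prefix-based prediction list capped at five choices, matching the limit used in this study:

    # Toy word prediction sketch (hypothetical; not any of the three commercial programs).
    MAX_CHOICES = 5  # the study capped the prediction list at five words for all programs

    def predict(prefix, dictionary):
        """Return up to MAX_CHOICES dictionary words that start with the typed prefix."""
        prefix = prefix.lower()
        return [w for w in dictionary if w.startswith(prefix)][:MAX_CHOICES]

    words = ["happy", "happen", "hamster", "history", "house", "hungry"]
    print(predict("ha", words))  # ['happy', 'happen', 'hamster']
    # A writer would accept a choice by clicking it or typing its number,
    # and the selected word would be inserted into the sentence.

The real programs also weigh grammatical context and, in Co:Writer’s case, phonetic spelling; this sketch shows only the simplest prefix matching.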

Co:Writer

Co:Writer SOLO Edition, at the time of the study, was the most recent version of the program developed by Don Johnston Inc. For the purposes of this study, the word prediction feature was used so that students were offered word choices even before any letters were typed. With the FlexSpell feature, Co:Writer provides prediction choices based on the phonetic, inventive spelling typical of students with learning disabilities and writing difficulties. This feature considers the most common letter patterns students may try in order to spell a word (e.g., phonetic substitutions, common letter confusions, letter reversals, letter omissions, letter additions, letter doubling and singling, etc.). Thus, the word prediction options do not depend on the correct first letters. In addition, teachers can customize the spelling support for their students, allowing for flexible spelling to be taken into consideration always, after three letters, or encouraging exact spelling only. In this study, the FlexSpell feature was set to always provide word prediction based on phonetic spelling. Co:Writer also utilizes Linguistic Word Prediction intelligence. This function ensures that the word prediction list offers grammatically correct word choices based on the context and previously used words (e.g., predicted verbs match plural nouns, etc.). Capitalization and punctuation are also reinforced within this program (e.g., predicted words are automatically capitalized). Thus, its prediction choices facilitate improvements in correct subject–verb agreement, proper spelling, capitalization, and appropriate pronoun and article use. Co:Writer SOLO Edition provides multiple ways to customize the program to students’ individual needs. For the purposes of this research study, all students used the intermediate dictionary template, which consists of 12,000 words covering many school subjects such as history, geography, and science. The intermediate dictionary supported the age and writing level of the campers without overwhelming them. In addition to the speech feedback, Co:Writer offers functions such as eWordBank and Topical Dictionary. These features support students’ writing on different topics and in different genres, predicting the most appropriate words for the selected topic and/or genre. However, the eWordBank and Topical Dictionary features were not used during journal writing in this research study.

WordQ

WordQ by Quillsoft is a word prediction tool used with a standard word processor. The program appears as a simple four-button overlay toolbar for a standard word processor. This program bases word predictions on students’ creative writing and context, offers examples for commonly confused words, and provides speech feedback. It offers prediction options even before the starting letter is entered, based on what is most likely to be the next word. This program automatically capitalizes the next word after a period. Unlike the other two word prediction programs, by default WordQ does not provide grammar-based predictions, so correct syntax fully depends on students. It is possible to manipulate the program to see different word endings; however, this additional option was not used in this study. Several user vocabularies are available (e.g., U.S., U.K., Canadian, blank, starter [5,000 words], intermediate [10,500], and advanced [15,000]). The U.S. intermediate vocabulary was used in this study to match students’ grade level. It included 10,500 words commonly used by second- to eighth-grade writers. The program learns the user’s writing style and improves its prediction options. WordQ also offers an opportunity to expand the vocabulary by adding words one by one or creating new topic dictionaries. As with Co:Writer, this option was not used for the purposes of this study.

WriteAssist

WriteAssist by Second Guess software is a dyslexia-oriented word predictor. Program features include context-dependent prediction, which ensures that a student is offered suggestions even without typing anything. The program makes a prediction of the possible next word based on grammatical patterns and the context. It also provides optional automatic capitalization of the first word in a sentence and of proper nouns. WriteAssist includes a 30,000-word vocabulary pretrained on more than 30 million words of English text. Users are also supported with the speech-feedback feature. Just like the other two programs, WriteAssist constantly learns new words, adding them to the prediction list based on the user’s writing style and word choices. However, in comparison with the other two programs, WriteAssist does not provide as many additional features and can be considered the simplest program of the three.

Experimental Design

A changing conditions single-subject design was employed to investigate the effects of various word prediction programs on improving students’ writing (Alberto & Troutman, 2006). Prior to treatment, students’ baseline level of writing was collected for a minimum of three data points across the first week of camp. Following the stabilization of the baseline, the first treatment was introduced. Students were randomly assigned to one of three logically preestablished orders of word prediction programs so that each student had an opportunity to try all three programs by the end of the study. During each following week students wrote using a different program, alternating the order across the participants. The random assignment to a program for each particular week was used to control for the influence of increasing mastery and familiarity with word prediction skills (Figures 1–3). Changing treatments sequentially allowed the examination of various programs before finding the one that was most beneficial for each particular student. While the changing conditions single-subject design is adequate for deciding which intervention works best for each individual student, it cannot be used for establishing a functional relationship between the baseline and each of the conditions without a return to baseline (Kennedy, 2005). Since there was no return to baseline in this study, the results should be interpreted with great caution, especially in those conditions where the improvements are inconclusive. In turn, the replication across students was used to support the possibility of stronger functional relationships between the various word prediction programs and improvements in students’ writing, as well as to control for confounding variables such as the novelty of treatment (Clark, 1983; Weller, 1996) and the acquisition of necessary skills (Alberto & Troutman, 2006).
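The weekly rotation described above can be pictured with a small scheduling sketch. This is a hypothetical illustration only; the article does not publish the three preestablished orders, so the sequences below are invented:

    import random

    # Three counterbalanced program orders (hypothetical; the actual orders were preestablished).
    orders = [
        ["WordQ", "Co:Writer", "WriteAssist"],
        ["Co:Writer", "WriteAssist", "WordQ"],
        ["WriteAssist", "WordQ", "Co:Writer"],
    ]

    # Week 1 is always the word processing baseline; weeks 2-4 follow the assigned order.
    for student in ["B", "C", "M", "D", "J", "R"]:
        sequence = ["Word processing (baseline)"] + random.choice(orders)
        print(student, "->", "; ".join(f"week {i + 1}: {c}" for i, c in enumerate(sequence)))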

Dependent Measures

The dependent variables examined included the total number of words, the proportion of correctly spelled words, and the composition rate in written words per minute.

Total Number of Words

Total number of words was calculated for each of the students’ writing samples. Proper nouns were counted as words while numerals (unless they were spelled out) were not counted as words. When scoring the total number of words, raters counted both correctly and incorrectly spelled words. The differences in length of writing were compared between word processing and word prediction, as well as among the three different programs.

Correctly Spelled Words

The proportion of correctly spelled words was calculated by dividing the number of correctly spelled words by the total number of words. The raters marked each word that was spelled incorrectly (e.g., one word misspelled three times was counted as three mistakes). Words written in the incorrect tense or form were counted as spelling errors. Homonyms were considered misspelled. Words that were spelled correctly but were inappropriately used within the context of the sentence (e.g., “peanut butter sandbox” vs. “peanut butter sandwich”) were also counted as spelling errors. Capitalization and punctuation were not considered to be errors, as these skills were not taught directly.

Composition Rate

Composition rate of typed words per minute was calculated by dividing the total number of words in students’ writing by the total minutes of composition time. Throughout the study, the beginning and ending writing times were recorded for each student.
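A short sketch may help show how the three measures combine. This is a minimal illustration under stated assumptions: the tokenizer is a crude regular expression, the sample entry and all numbers are invented, and spelling correctness is supplied by hand because the raters judged tense, homonyms, and context (the study implies no automated spelling check):

    import re

    def total_words(text):
        """Count words; raters would exclude numerals that are not spelled out."""
        return len(re.findall(r"[A-Za-z']+", text))

    def proportion_spelled_correctly(text, correctly_spelled):
        """Correctly spelled words divided by the total number of words."""
        return correctly_spelled / total_words(text)

    def composition_rate(text, minutes):
        """Typed words per minute: total words divided by total composition time."""
        return total_words(text) / minutes

    entry = "My favrit part of the day is resess becase I play soccer"
    print(total_words(entry))                                # 12
    print(round(proportion_spelled_correctly(entry, 9), 2))  # 0.75 (raters found 3 misspellings)
    print(round(composition_rate(entry, 15), 2))             # 0.8 wpm for a 15-minute session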

Procedures

Once students’ and parents’ permissions were obtained, participants were asked to engage in journal writing for approximately 20 minutes at the beginning of each camp session over 4 weeks. The majority of the sessions lasted for 15–20 minutes across the participants and conditions. However, a few students tended to write for 10 minutes or less (Student M in sessions 2, 3, 6, 8, 12, and 14; Student J in session 15; and Student R in session 5), while some others wrote for 25 minutes or more (Student B in sessions 1 and 4; Student C in sessions 4, 5, 7, 10, and 17; and Student J in session 10). Students received personal narrative writing prompts daily. Later during the day, students were engaged in other writing activities, including brainstorming, drafting, revising, editing, and production. Thus, the purpose of the journal writing activity was to provide another opportunity to write without spending time on editing. However, if students wrote fewer than three sentences, they were asked to elaborate by providing more details on the topic. Students were also encouraged to spell a word or choose it from the prediction list independently, without any help.

FIGURE 1 Total number of words in journal entries across programs and students.

Baseline Condition

During the first week of camp, students wrote their journal entries using a word processor without access to spell check. Depending on students’ typing and computer skills, they received instruction in typing and using the word processor if needed. Such instruction included one-on-one training from the researcher as well as practice with the TypetoLearn 3 software program, which provided interactive lessons to teach typing and improve speed. One of the prerequisites for participation in this study was that students had to pass the Young/Easily frustrated level associated with the typing speed goal of six words per minute. Handwriting was substituted with word processing for the baseline condition, thus fewer changes occurred when students were introduced to treatment conditions (Kennedy, 2005).

Treatment Condition

Prior to beginning the treatment, students received instruction on how to use each of the word prediction programs. The training session started with a short PowerPoint presentation with basic information about word prediction. Then, all main features (e.g., checking the list, having each word from the list pronounced in order to make a word choice, speech feedback during writing, etc.) were demonstrated and practiced.

FIGURE 2 Proportion of words spelled correctly in journal entries across programs and students.

Participants were randomly assigned to the order in which they used the programs weekly (see Figures 1–3). Each week the researcher modeled the particular program assigned to students for that time. Students learned how to start the program, enable the word prediction feature, utilize the speech feedback feature, and where to look for the prediction list. The journal writing activity was simulated for students in order to address specific functions of the particular program. Each session followed a preestablished script to ensure consistency in how every participant was introduced to each program. After the training, students had time to practice using the software.

Interrater Reliability and Fidelity of Treatment

Interrater reliability was determined using printed writing samples as permanent products. Random writing samples (33 percent) were distributed to an independent observer to ensure scoring reliability. The interrater agreement on each of the three dependent variables was calculated using the total agreement method: the smaller total recorded by each observer was divided by the larger total and multiplied by 100 percent (Kennedy, 2005). Interrater agreement averaged 99 percent (ranging from 89 percent to 100 percent) for the total number of words, the proportion of correctly spelled words, and the composition rate.

FIGURE 3 Composition rate of journal entries across programs and students.

Fidelity of treatment data were collected during 33 percent of all sessions. The randomly observed activities were compared to a checklist of expected researcher actions. Those actions included introducing the topic choices; introducing word prediction programs according to the script; asking students to elaborate if they wrote fewer than three sentences; and encouraging and allowing students to spell words or select a word from the prediction list without any additional help. Thus, it was ensured that the researcher and students were doing what they were supposed to do. Such observations occurred both during trainings and journal writing. The number of correct behaviors performed by the researcher and the participants was divided by the number of planned behaviors and multiplied by 100 percent. Sessions during which technical difficulties occurred were excluded from the data analysis. Mean procedural reliability was 100 percent across all training and journal writing sessions.
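Both reliability figures reported above reduce to simple ratio computations. The following sketch restates them with hypothetical sample numbers (the 45/47 word counts and the four scripted steps are invented, not study data):

    def total_agreement(total_a, total_b):
        """Total agreement method (Kennedy, 2005): smaller total over larger total, times 100."""
        return min(total_a, total_b) / max(total_a, total_b) * 100

    def procedural_fidelity(correct_behaviors, planned_behaviors):
        """Observed correct behaviors over planned behaviors, times 100."""
        return correct_behaviors / planned_behaviors * 100

    print(round(total_agreement(45, 47), 1))  # 95.7 -> two raters counted 45 and 47 total words
    print(procedural_fidelity(4, 4))          # 100.0 -> all four scripted steps were observed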

Social Validity

The social validity of each intervention (each individual word prediction program) was examined through student interviews conducted during and at the end of the study.

Social validity as defined by Kennedy (2005) is “the estimation of the importance, effectiveness, appropriateness, and/or satisfaction various people experience in relation to a particular intervention” (p. 219). As with any assistive technology device or program, it is very important to ensure a person’s willingness to use it. Student preferences for a program and its technological features often play an important role in its effectiveness. A large percentage of assistive technologies are abandoned because they do not meet a person’s needs and expectations (Scherer, 2005). It is critical to seek students’ input when technology is selected (Parette, Wojcik, Peterson-Karlan, & Hourcade, 2005).

RESULTS

Student performance on the total number of words, proportion of words spelled correctly, and composition rate was determined and analyzed through visual analysis of mean lines in the data and percentage of nonoverlapping data (PND) scores (Scruggs, Mastropieri, & Casto, 1987). While the changing conditions single-subject design used in this study is apt for the comparisons between the programs described in detail later, the following results on word processing versus word prediction are only preliminary and should be considered with caution, especially in those cases when the changes are less noticeable and/or inconclusive. However, some obvious differences between the baseline and various word prediction programs were noted as follows.
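Because PND scores anchor much of the results section, a brief sketch of the computation may help. The baseline and treatment series below are invented for illustration and assume the intervention is expected to increase the measure:

    def pnd(baseline, treatment):
        """Percentage of nonoverlapping data (Scruggs, Mastropieri, & Casto, 1987):
        the share of treatment points exceeding the highest baseline point."""
        ceiling = max(baseline)
        return sum(1 for x in treatment if x > ceiling) / len(treatment) * 100

    baseline_words = [14, 22, 19]            # three baseline journal entries (hypothetical)
    wordq_words = [30, 25, 21, 34, 28]       # five entries with one word prediction program
    print(pnd(baseline_words, wordq_words))  # 80.0 -> 4 of 5 points exceed the best baseline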

Word Processing Versus Word Prediction

Figures 1–3 illustrate the relative effectiveness of various word prediction programs over word processing, revealing evident improvements in the proportion of correctly spelled words by all the participants with each of the programs. While gains in the total number of words and composition rate were smaller, most students performed better on these two measures with at least one of the three word prediction programs as compared to word processing alone. Thus, the visual analysis demonstrates that participants increased the total number of words, with average gains from 21 words (SD = 6) to 36 words (SD = 6.9) across all students and all word prediction programs. In particular, the amount of writing doubled for Student B with all three programs, for Students C, J, and R with WordQ, and for Student D with the WriteAssist software. The gains were quite modest for Student M, who only slightly increased the amount of writing with all three programs. The overall performance of the participants on each measure can be seen from the descriptive statistics in Table 2.

TABLE 2
Descriptive Statistics, M (SD), for Each Dependent Variable Across Conditions and Participants

Total Number of Words
Student      BL            WQ            CW            WA
Student B    22.7 (8.5)    58 (13.8)     57.2 (6.4)    48.8 (5.1)
Student C    33.3 (15.9)   63.8 (6.6)    39 (5.4)      37.8 (4.7)
Student M    16.3 (3.1)    26.4 (6.3)    20 (5.5)      24 (5.3)
Student D    14 (6.1)      23.8 (5.8)    20.6 (4.4)    31.2 (8.6)
Student J    27.7 (1.2)    59 (18.7)     34.2 (3.8)    33 (10)
Student R    12.7 (1.5)    29.8 (4.3)    22 (5.7)      16.4 (3.9)
Total        21.1 (6)      43.5 (9.3)    32.2 (5.2)    31.9 (6.3)

Words Correctly Spelled (%)
Student      BL            WQ            CW            WA
Student B    61 (4)        91.8 (4.4)    92 (7)        93.2 (3.1)
Student C    61 (7)        98.4 (1.5)    96.6 (2.6)    97.4 (2.4)
Student M    75.3 (1.2)    98.2 (2.5)    95.6 (7.4)    98 (2.7)
Student D    59 (2)        99.2 (1.8)    97.8 (3.2)    100 (0)
Student J    47 (5.6)      92 (5.7)      91.4 (5.3)    92.2 (5)
Student R    44.7 (1.5)    98 (1.9)      92.4 (10.8)   95 (4.7)
Total        58 (3.5)      96.3 (3)      94.3 (5.4)    96 (3.7)

Composition Rate (wpm)
Student      BL            WQ            CW            WA
Student B    1.1 (0.4)     2.9 (0.7)     3.6 (0.4)     2.9 (0.4)
Student C    1.5 (0.7)     2.5 (0.8)     1.7 (0.5)     1.9 (0.3)
Student M    1.04 (0.1)    2.2 (1.5)     1.5 (0.6)     2.2 (0.4)
Student D    0.7 (0.2)     1.6 (0.4)     1.4 (0.3)     2.1 (0.6)
Student J    1.4 (0.1)     3.6 (0.9)     2.03 (0.7)    2.5 (1.4)
Student R    0.8 (0.1)     1.6 (0.2)     1.4 (0.2)     0.9 (0.3)
Total        1.1 (0.3)     2.4 (0.7)     1.9 (0.4)     2.1 (0.6)

Note: BL = baseline (word processing) condition; WQ = WordQ; CW = Co:Writer; WA = WriteAssist.

The positive effects of word prediction on students’ writing were more obvious from the analysis of the proportion of correctly spelled words. On average, students increased their spelling accuracy from 58 percent (SD = 3.5) to 96 percent (SD = 4) across all the programs. In addition, with regard to the composition rate, students wrote relatively faster with all the programs, improving from 1.1 wpm (SD = 0.3) to 2.4 wpm (SD = 0.7) when using WordQ; to 1.9 wpm (SD = 0.4) when using Co:Writer; and to 2.1 wpm (SD = 0.6) when using WriteAssist. However, the individual performance of each participant with the different word prediction programs on the composition rate measure should be noted, as changes were more evident for some students than others. The overall gains in students’ writing from word processing to word prediction were corroborated by the PND scores across students and across all three programs, with an average 80 percent improvement in the total number of words, 100 percent in the proportion of correctly spelled words, and 84 percent in the composition rate. However, replication of this study using a different single-subject research design is needed to suggest the effectiveness of word prediction over word processing.

Total Number of Words With Different Word Prediction Programs

As can be seen from Figure 1, almost all students demonstrated a relative increase in the total number of words from the baseline to the word prediction condition. However, the contrast was more evident for students using some programs than others. According to the visual analysis and mean values, Students C, J, and R demonstrated clear improvements in the total number of words when using the WordQ word prediction program (improvements from an average of 33 (SD = 15.9) words to 64 (SD = 6.6) for Student C; from 28 (SD = 1.2) to 59 (SD = 18.7) for Student J; and from 13 (SD = 1.5) to 30 words (SD = 4.3) for Student R). At the same time, Students C and J’s performance when using Co:Writer and WriteAssist, as well as Student R’s writing when using WriteAssist, demonstrated little if any improvement over word processing (e.g., Student C did not write more with either Co:Writer or WriteAssist).

While Students B and M showed similar progress with all three programs, they appeared to produce a slightly larger number of words, particularly with the WordQ program (M = 58; SD = 13.8 and M = 26.4; SD = 6.3, respectively). However, while Student B’s amount of writing increased sufficiently with all three programs, changes in Student M’s writing were only minor. Student D’s graphed data indicated effectiveness of the WriteAssist software only (M = 31.2; SD = 8.6).

The percentage of nonoverlapping data corroborates the conclusions drawn from the visual analysis. Thus, Students C, J, and R scored 100 percent PND when using WordQ, demonstrating 100 percent improvement in the total number of words as compared to the word processor. The PND scores for Student B were 100 percent for each of the programs. Student D’s PND scores reached 100 percent with the WriteAssist program only. Given the inconsistent changes in the total number of words with the other word prediction software for the aforementioned participants, as well as Student M’s modest writing gains with all three programs, those results were considered unreliable and were not interpreted.

Proportion of Words Spelled Correctly With Different Word Prediction Programs

Visual inspection of the data points for the proportion of words spelled correctly across the three different programs suggested the following results. All students considerably improved their spelling when using word prediction software compared to word processing, regardless of the program. Students showed similar gains in the mean values with each of the three word prediction programs. Student B improved from an average of 61 percent (SD = 4) spelling accuracy to 92.3 percent (SD = 4.8) with word prediction. Student C’s spelling accuracy increased from 61 percent (SD = 7) in the baseline to 97.5 percent (SD = 2.2) using the software. Student M went from 75.3 percent (SD = 1.2) in the baseline to 97.3 percent (SD = 4.2) with the programs. Student D increased from an average of 59 percent (SD = 2) in the baseline to 99 percent (SD = 2.5) in the treatment condition. Student J improved from 47 percent (SD = 1.5) using word processing in the baseline condition to 91.9 percent (SD = 5.3) using word prediction. Student R’s gains in the spelling accuracy measure went from 44.7 percent (SD = 1.5) in the baseline to an average of 95.1 percent (SD = 5.8) with the treatment. Individual gains in spelling accuracy with each of the three word prediction programs can be found in Table 2. All students accelerated immediately, demonstrating 100 percent PND across all word prediction conditions as compared to word processing.

Composition Rate With Different Word Prediction Programs

Students increased their composition rates to differing degrees and with different results depending upon the word prediction program. A few changes that were consistent and evident through the visual analysis are as follows. Graphic representation of data points in composition rate for Student B revealed an improvement from 1.1 (SD = 0.4) wpm in the baseline to 3.6 (SD = 0.4) wpm with Co:Writer, to 2.9 wpm (SD = 0.7) with WordQ, and to 2.9 wpm (SD = 0.4) with WriteAssist. A 100 percent PND score supported this conclusion. Graphic representations for Students M and D suggested a slightly higher composition rate with the WriteAssist program. The mean lines increased from 1.04 wpm (SD = 0.1) to 2.2 wpm (SD = 0.4) for Student M and from 0.7 wpm (SD = 0.2) to 2.1 wpm (SD = 0.6) for Student D. Visual analysis for Students C, J, and R indicated a higher composition rate when writing with WordQ. Student C went from 1.5 wpm (SD = 0.7) to 2.5 (SD = 0.8); Student J from 1.4 wpm (SD = 0.1) to 3.6 wpm (SD = 0.9); Student R from 0.8 wpm (SD = 0.1) to 1.6 wpm (SD = 0.2). PND scores (80–100 percent) corroborated the effectiveness of the WordQ software over other programs for these students. Additionally, for Student R, Co:Writer was associated with an increase from 0.8 wpm (SD = 0.1) to 1.4 wpm (SD = 0.9) and a 100 percent PND score in composition rate.

The performance of Students M and D on the composition rate measure while using the WordQ and Co:Writer programs, as well as the rate of Student J with Co:Writer and WriteAssist and of Student R with WriteAssist, is characterized by minor and inconsistent changes (see Figure 3). Given such methodological issues as the limited typing skills of the study participants discussed later, these results should only be viewed as inconclusive. In addition, no changes in composition rate were observed for Student C while using the WriteAssist and Co:Writer programs.

Social Validity

Overall, all the students enjoyed the word prediction programs and found them beneficial. They indicated that writing was much easier when they used word prediction. Student M noted that he did not have to write the whole word and the program would finish it for him. Another student mentioned that word prediction made him type words faster. One more example of the advantage of word prediction, as reported by Student B, was that it “helped find words and see if they were correct or not in order to use them.” In addition, Student C reported that word prediction made him think faster.

In regard to which program students found the most helpful and enjoyable, four out of six students preferred WordQ to other programs, while the other two students liked WriteAssist and Co:Writer the most. Students who chose WordQ referred to it as the fastest and having “better words.” One student mentioned that “it was like telepathic,” because “the words came up” as you were just thinking about them. Another student enjoyed that the program read the sentences “exactly as you read them.” Student C refused to use other programs because they did not have as many voices as WordQ did. As for those students who did not prefer WordQ, they noted that the constant speech feedback was annoying and that the window, although smaller, “moved around to places where they did not want it.” Both of those problems were eliminated within program options as students expressed their opinions. The main problem with the WriteAssist program as reported by students was the big window that “did not move and covered the words.” WriteAssist was also found to have fewer word choices. In addition, the vocabulary was not appropriate for students since it offered “bad words,” as reported by students and teachers. Co:Writer had more “technical glitches” within the software in comparison to other programs. Since the time of the study, several glitches have been addressed and solutions offered on the manufacturer’s website.

The teachers supported students’ opinions and preferred WordQ to the other word prediction programs. They commented that it was the easiest to use while offering a large choice of features. WordQ’s four-button toolbar was “very straight forward” and simple to handle. Teachers noticed vocabulary issues with WriteAssist as it offered curse words. One teacher described it as the “most primitive of all.” In addition, one teacher with vision impairments noted that the WriteAssist program “would be hard to use with a screen enlarging program,” as the prediction window stays in a locked position on the screen, as opposed to WordQ and the word window of Co:Writer. In turn, the Co:Writer program had “cleaner language” and more features for students. However, it was also more difficult for the younger students in this study to use and presented more technological difficulties.

DISCUSSION AND PRACTICAL IMPLICATIONS

The primary goal of this study was to explore the effects of various word prediction programs on students’ journal writing as compared to word processing. Consistent with previous research (Handley-More, 2003; MacArthur, 1998, 1999; Williams, 2002), the results of this study demonstrated the relative effectiveness of word prediction on various aspects of the writing process for some students with writing difficulties as compared to word processing alone. Word prediction, regardless of the software, was effective in improving written spelling accuracy as measured by the proportion of words spelled correctly for all the participants. Writing performance increased in the total number of words for one student with each of the three programs and for the other five students with at least one of the programs. Composition rate noticeably improved for one student with each program, for four students with at least one of the programs, and did not visibly change for one of the participants.

The latter two measures need to be discussed considering one major limitation of this study. Proficiency in typing skills is essential to students’ success in word processing. Otherwise, the computer can be a frustrating and inefficient tool to use (MacArthur & Graham, 1987). While all parents reported students’ familiarity with the computer and word processor, the participants in this study were characterized by limited typing skills ranging from 6 to 10 words per minute. The existing research reports that students, particularly boys, in third to sixth grades are able to handwrite 43–78 characters per minute, which is approximately eight to 15 words per minute (Chwirka, Gurney, & Burtner, 2002; Graham, 1999; Graham, Berninger, Weintraub, & Shafer, 1998; Rosenblum, Weiss, & Parush, 2003). If the participants in this study were able to type 6–10 words per minute, they were approaching production consistent with handwriting speed at their grade level. However, the students’ typing skills were determined based on the tests within the typing software, which overestimates the speed. Thus, possible weak keyboarding skills should be recognized when discussing the total number of words and composition rate measures.

Overall, the improvements in the amount of writing and in writing speed were less obvious than the gains in spelling. Due to limited typing skills and the nature of the changing-conditions single-subject research design, these results should be interpreted with great caution (Kennedy, 2005). The absence of a return to baseline prior to starting each new program limits the conclusions that can be drawn about functional relationships between word processing and each word prediction condition. However, this study presents unique information comparing different word prediction programs. The external validity of this study is enhanced through replication across different participants and random assignment of students to different orders of program implementation (Horner et al., 2005).
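
Although the conclusions here rest on visual analysis of the graphed data, one common supplementary metric for single-subject designs is the percentage of non-overlapping data (PND; see Scruggs, Mastropieri, & Casto, 1987, in the references). The sketch below computes PND for invented illustration data; none of the numbers come from this study.

    # Percentage of non-overlapping data (PND): the share of intervention-phase
    # points that exceed the highest baseline point, for a behavior expected
    # to increase. All data below are invented for illustration only.

    def pnd(baseline, intervention):
        ceiling = max(baseline)
        above = sum(1 for x in intervention if x > ceiling)
        return 100.0 * above / len(intervention)

    baseline_accuracy = [0.55, 0.60, 0.58]      # word processing alone
    wordq_accuracy = [0.72, 0.80, 0.78, 0.75]   # a word prediction phase
    print(f"PND = {pnd(baseline_accuracy, wordq_accuracy):.0f}%")  # PND = 100%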

Overall, regardless of the order in which the programs were introduced, three and four students performed relatively better with WordQ on the total number of words and composition rate, respectively. Student D demonstrated a greater number of words and a faster composition rate with WriteAssist. Student B’s performance improved similarly with all three word prediction programs, so the recommendation of one of them depended solely on his preferences. As for the proportion of words spelled correctly, written spelling accuracy improved similarly for all of the students while using each of the three word prediction programs, thus demonstrating the overall effectiveness of word prediction over word processing.

The social validity of the goals, procedures, and effects was examined through student and teacher interviews. Both students and teachers enjoyed using the word prediction programs and found them helpful for the writing process. All students benefited from word prediction features that addressed their writing difficulties. For example, a student with fine motor/handwriting difficulties mentioned the ability to type faster with word prediction. Furthermore, a student who experienced difficulties with putting his ideas on paper reported that word prediction made him “think faster.”

A majority of students and teachers preferred WordQ to the other two word prediction programs due to its extended and appropriate vocabulary and its ease of use, despite the fact that WordQ’s intermediate vocabulary level had the fewest words of the three programs. It was interesting that several students referred to WordQ’s “telepathic” abilities, recognizing its contextual prediction. Although Co:Writer and WriteAssist have similar features, WordQ’s predictions were identified as more precise by both students and teachers. The one student who preferred WriteAssist performed better with that program on all dependent variables. The student who chose Co:Writer demonstrated equally significant improvements with all the programs, so, following his preferences, Co:Writer was recommended as a word prediction program for him.

Based on the results of this study, Co:Writer was reported as being more difficult to use due to its extensive number of features. This suggests that Co:Writer may be a better choice for older students who could use and benefit from those features. WriteAssist was the only program that offered users up to 30 choices in the prediction list. This program was also reported as having a more grown-up-oriented vocabulary, suggesting its greater value for adults, including those with physical disabilities. It is also important to note that a student with ADD found the prediction window that moves with the cursor annoying and distracting, so it may be important to consider turning off such features in programs with “word window” prediction such as WordQ and Co:Writer. In addition, for students with autism spectrum tendencies, a program with a larger selection of voices (WordQ) could be preferable.

When interpreting the results of this study, it is important to remember that the presented word prediction programs offer a number of extended features that can further enhance students’ writing (e.g., eWordBank in Co:Writer and Topical Dictionaries in Co:Writer and WordQ). These features were not used in this study, which was a first attempt to investigate the rate of production and student preferences across three word prediction programs while controlling for additional, extraneous characteristics. Thus, it is impossible to fully compare these three programs based on this study alone. A more comprehensive evaluation of the extended features, conducted over a longer period of time, is necessary to more thoroughly investigate each program’s effectiveness.

Limitations and Future Research

Unfortunately, interpretation of the findings from this study is clouded by a number of methodological issues. First of all, the keyboarding skills of the participating students were quite weak at the beginning of the study. Participants were not retested on these skills at the end of the study, so it is impossible to say how their typing speed prior to and following the study affected the results. Second, the short duration of the writing camp dictated the number of journal writing sessions randomly assigned to each word prediction program. Time constraints also influenced the choice of a research design that excluded a return to baseline, preventing the establishment of a stronger functional relationship between the different word prediction programs and improvements in students’ journal writing. Furthermore, the length of the summer camp limited the researchers’ ability to test maintenance and the level of continuous improvement of students’ writing with a particular word prediction program. Finally, the research setting at the CompuWrite summer camp differed from the general education classroom where a majority of participants received special education services. Thus, generalization of the writing improvements with the word prediction programs was not assessed in a realistic school environment.

At the same time, this pilot study suggested several areas for future research. First, the study could be replicated allowing longer periods of time for each word prediction program and employing different single-subject research designs (e.g., multiple baseline across participants) to establish stronger functional relationships between writing improvement and each word prediction program. Second, it is important to replicate this study with participants with stronger typing skills to explore the true impact of word prediction on students’ writing. Third, it would be interesting to examine the effectiveness of different word prediction programs on other, more meaningful writing activities that require editing in more natural school settings. Such research may also focus on more extensive features of word prediction programs, including prediction based on topic and genre. Last, a majority of research studies examining the effectiveness of word prediction are single-subject research studies (Sitko, Laine, & Sitko, 2005). This may be explained by the specificity of word prediction programs, which makes them appropriate for students with very particular abilities and needs. Such studies, including the present one, limit the interpretation and generalization of findings to a larger population. Group-design experimental studies with large numbers of participants would provide generalizable information on word prediction effectiveness.

REFERENCES

Alberto, P. A., & Troutman, A. C. (2006). Applied behavior analysis for teachers (7th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Anderson-Inman, L., & Ditson, L. (1999). Computer-based concept mapping: A tool for negotiating meaning. Learning and Leading with Technology, 26(8), 6–13.

Ashton, T. M. (1999). Making technology work in the inclusive classroom: A spell CHECKing strategy for students with learning disabilities. Teaching Exceptional Children, 32(2), 24–27.

Ashton, T. M. (2005). Students with learning disabilities using assistive technology in the inclusive classroom. In D. L. Edyburn, K. Higgins, & R. Boone (Eds.), Handbook of special education technology research and practice (pp. 229–238). Whitefish Bay, WI: Knowledge by Design, Inc.

Behrmann, M., & Jerome, M. K. (2002). Assistive technology for students with mild disabilities: Update 2002. Arlington, VA: ERIC Clearinghouse on Disabilities and Gifted Education.

Blackhurst, A. E. (2005). Historical perspectives about technology applications for people with disabilities. In D. L. Edyburn, K. Higgins, & R. Boone (Eds.), Handbook of special education technology research and practice (pp. 3–29). Whitefish Bay, WI: Knowledge by Design.

Blair, R. B., Ormsbee, C., & Brandes, J. (2002). Using writing strategies and visual thinking software to enhance the written performance of students with mild disabilities. ERIC Document Reproduction Service No. ED463125.

Chwirka, B., Gurney, B., & Burtner, P. A. (2002). Keyboarding and visual-motor skills in elementary students: A pilot study. Occupational Therapy in Health Care, 16(2–3), 39–51.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445–459.

De La Paz, S. (1999). Composing via dictation and speech recognition systems: Compensatory technology for students with learning disabilities. Learning Disability Quarterly, 22, 173–182.

Edyburn, D. L. (2001). Critical issues in special education technology research: What do we know? What do we need to know? In M. Mastropieri & T. Scruggs (Eds.), Advances in learning and behavioral disabilities (Vol. 15, pp. 95–118). New York: JAI Press.

Edyburn, D. L. (2005). Assistive technology and students with mild disabilities: From consideration to outcome measurement. In D. L. Edyburn, K. Higgins, & R. Boone (Eds.), Handbook of special education technology research and practice (pp. 239–270). Whitefish Bay, WI: Knowledge by Design.

Graham, S. (1999). Handwriting and spelling instruction for students with learning disabilities: A review. Learning Disability Quarterly, 22(2), 78–98.

Graham, S., Berninger, V., Weintraub, N., & Shafer, W. (1998). The development of handwriting fluency and legibility in grades 1 through 9. Journal of Educational Research, 92, 42–52.

Handley-More, D. (2003). Facilitating written work using computer word processing and word prediction. American Journal of Occupational Therapy, 57(2), 139–151.

Higgins, E. L., & Raskind, M. H. (2000). Speaking to read: The effects of continuous vs. discrete speech recognition systems on the reading and spelling of children with learning disabilities. Journal of Special Education Technology, 15(1), 19–30.

Higgins, E. L., & Raskind, M. H. (2004). Speech recognition-based and automaticity programs to help students with severe reading and spelling problems. Annals of Dyslexia, 54(2), 173–177.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179.

Kennedy, C. H. (2005). Single-case designs for education research. Boston: Allyn and Bacon.

Lewis, R. (1998). Assistive technology and learning disabilities: Today’s realities and tomorrow’s promises. Journal of Learning Disabilities, 31, 16–26.

Lewis, R. B., Graves, A. W., Ashton, T. M., & Kieley, C. L. (1998). Word processing tools for students with learning disabilities: A comparison of strategies to increase text entry speed. Learning Disabilities Research & Practice, 13, 95–108.

MacArthur, C. (1998). Word processing with speech synthesis and word prediction: Effects on the dialogue journal writing of students with learning disabilities. Learning Disability Quarterly, 21, 151–166.

MacArthur, C. A. (1999). Word prediction for students with severe spelling problems. Learning Disability Quarterly, 22, 158–172.

MacArthur, C. A., & Cavalier, A. R. (2004). Dictation and speech recognition technology as test accommodations. Exceptional Children, 71(1), 43–58.

MacArthur, C., Ferretti, R., Okolo, C., & Cavalier, A. (2001). Technology applications for students with literacy problems: A critical review. The Elementary School Journal, 101(3), 273–301.

MacArthur, C., & Graham, S. (1987). Learning disabled students’ composing under three methods of text production: Handwriting, word processing, and dictation. Journal of Special Education, 21, 22–42.

MacArthur, C. A., Graham, S., Haynes, J. A., & De La Paz, S. (1996). Spelling checkers and students with learning disabilities: Performance comparisons and impact on spelling. Journal of Special Education, 30, 35–57.

MacArthur, C., & Schwartz, S. S. (1990). An integrated approach to writing instruction: The Computers and Writing Instruction Project. LD Forum, 16(1), 35–41.

McNaughton, D., Hughes, C., & Ofiesh, N. (1997). Proofreading for students with learning disabilities: Integrating computer and strategy use. Learning Disabilities Research & Practice, 12, 16–28.

Mirenda, P., Turoldo, K., & McAvoy, C. (2006). The impact of word prediction software on the written output of students with physical disabilities. Journal of Special Education Technology, 21(3), 5–12.

Montgomery, D. J., Karlan, G. R., & Coutinho, M. (2001). The effectiveness of word processor spell checker programs to produce target words for misspellings generated by students with learning disabilities. Journal of Special Education Technology, 16(2), 27–40.

Parette, H. P., Wojcik, B. W., Peterson-Karlan, G., & Hourcade, J. J. (2005). Assistive technology for students with mild disabilities: What’s cool and what’s not. Education and Training in Developmental Disabilities, 40, 320–331.

Quinlan, T. (2004). Speech recognition technology and students with writing difficulties: Improving fluency. Journal of Educational Psychology, 96, 337–346.

Raskind, M. H., & Higgins, E. H. (1995). The effects of speech synthesis on proofreading efficiency of postsecondary students with learning disabilities. Learning Disability Quarterly, 18, 141–158.

Raskind, M. H., & Higgins, E. L. (1999). Speaking to read: The effects of speech recognition technology on the reading and spelling performance of children with learning disabilities. Annals of Dyslexia, 49, 251–281.

Regan, K. S., Mastropieri, M. A., & Scruggs, T. E. (2005). Promoting expressive writing among students with emotional and behavioral disturbances via dialogue journals. Behavioral Disorders, 31, 35–52.

Rosenblum, S., Weiss, P. L., & Parush, S. (2003). Product and process evaluation of handwriting difficulties. Educational Psychology Review, 15, 41–81.

Scherer, M. J. (2005). Living in the state of stuck: How assistive technology impacts the lives of people with disabilities (4th ed.). Brookline, MA: Brookline Books.

Scruggs, T. E., Mastropieri, M. A., & Casto, G. (1987). The quantitative synthesis of single-subject research: Methodology and validation. Remedial and Special Education, 8(2), 24–33.

Sitko, M. C., Laine, C. J., & Sitko, C. J. (2005). Writing tools: Technology and strategies for struggling writers. In D. L. Edyburn, K. Higgins, & R. Boone (Eds.), Handbook of special education technology research and practice (pp. 571–598). Whitefish Bay, WI: Knowledge by Design.

Sturm, J. M., & Rankin-Erickson, J. L. (2002). Effects of hand-drawn and computer-generated concept mapping on the expository writing of middle school students with learning disabilities. Learning Disabilities Research & Practice, 17, 124–139.

Tumlin, J., & Heller, K. (2004). Using word prediction software to increase typing fluency with students with physical disabilities. Journal of Special Education Technology, 19(3), 5–14. Retrieved September 24, 2006, from http://jset.unlv.edu/19.3/tumlin/first.html

Weller, H. G. (1996). Assessing the impact of computer-based learning in science. Journal of Research on Computing in Education, 28, 461–485.

Williams, S. (2002). How speech-feedback and word prediction software can help students write. TEACHING Exceptional Children, 34, 72–78.

Zhang, Y. (2000). Technology and the writing skills of students with learning disabilities. Journal of Research on Computing in Education, 32, 467–478.

About the Authors

Anna S. Evmenova is an assistant professor of special education at George Mason University, Virginia. Her current research interests focus on assistive and instructional technology tools for providing academic content-based instruction to students with disabilities.

Heidi J. Graff, Ph.D., is the director of the Mason LIFE program at George Mason University. Her interests are post-secondary education for students with intellectual and developmental disabilities as well as technology tools for improved instruction in the general education setting.

Marci K. Jerome is an assistant professor, Department of Special Education, George Mason University. She earned her Ph.D. in Special Education in 2008 at George Mason University. Her current research interests include evaluating the effectiveness of assistive technology tools on student learning and distance education in special education teacher preparation.

Michael M. Behrmann is a professor of special education and the director of the Helen A. Kellar Institute for Human DisAbilities at George Mason University. He is a leader/innovator in special education and technology and his research interests include assistive technology, instructional design, universal design for learning and distance education.
