Benchmark – Language Disabilities And Assistive Technology Unit Plan

Details:

Teachers consider many factors when developing unit plans to meet the needs of a variety of students. It is important to consider students' strengths and needed accommodations when developing lessons and units. If there are students with specific language impairments, the teacher must also consider ways to facilitate communication and engagement during classroom instruction. For example, the teacher may need to plan to pre-teach vocabulary or to prepare specific questions that can be answered by students who use a device or another mode of communication.

Read the following case scenario to inform the assignment that follows.

April is a fourth grader. Her performance on norm-referenced measures is 1.5 standard deviations below the mean for her chronological age. April has good decoding skills, but has difficulty with reading comprehension, semantics, and morphological processing. One accommodation that is prescribed in the IEP is the use of visual cues to support comprehension of new skills. She lacks organizational skills for writing and struggles with word choice. April receives services from a speech and language pathologist who is working on understanding word parts, vocabulary, and multiple meanings of words. You instruct April in a resource classroom with five other fourth graders who also struggle with reading and written expression.

Using details from the scenario, create a week-long English language arts unit plan based on the Common Core ELA fourth grade literacy standards specific to vocabulary acquisition and use.

Use the COE Lesson Plan Template to complete five formal lesson plans that include the following:

  1. A measurable IEP goal for April that includes assistive technology. Include this goal within the “Learning Target” section of the COE Lesson Plan Template.
  2. Learning targets aligned to the ELA Common Core fourth grade literacy standards.
  3. Strategies to enhance language development and communication skills.
  4. Strategies and technologies that encourage development of critical thinking and problem solving.
  5. The use of augmentative and alternative communication systems and a variety of assistive technologies to support communication and learning.
  6. A unit pre- and post-assessment that incorporates technologies to measure April’s progress toward her measurable IEP goal.

Complete and submit each lesson plan as a separate document.

In addition, provide a 250-500 word rationale that supports your instructional choices in responding to April’s needs, grounded in research on best practices for semantic disorders and the use of assistive technology. Support your rationale with a minimum of two scholarly resources.

Submit all lesson plans and rationale as one deliverable.

GCU College of Education

LESSON PLAN TEMPLATE

 

Section 1: Lesson Preparation

Teacher Candidate Name:

Grade Level:

Date:

Unit/Subject:

Instructional Plan Title:

Lesson Summary and Focus: In 2-3 sentences, summarize the lesson, identifying the central focus based on the content and skills you are teaching.

 

Classroom and Student Factors/Grouping: Describe the important classroom factors (demographics and environment) and student factors (IEPs, 504s, ELLs, students with behavior concerns, gifted learners), and the effect of those factors on planning, teaching, and assessing students to facilitate learning for all students. This should be limited to 2-3 sentences and the information should inform the differentiation components of the lesson.

 

 

 

 

National/State Learning Standards: Review national and state standards to become familiar with the standards you will be working with in the classroom environment.

Your goal in this section is to identify the standards that are the focus of the lesson being presented. Standards must address learning initiatives from one or more content areas, as well as align with the lesson’s learning targets/objectives and assessments.

Include the standards with the performance indicators and the standard language in its entirety.

 

 

 

 

 

 

Specific Learning Target(s)/Objectives: Learning objectives are designed to identify what the teacher intends to measure in learning. These must be aligned with the standards. When creating objectives, consider the following:

· Who is the audience?

· What action verb will be measured during instruction/assessment?

· What tools or conditions are being used to meet the learning objective?

 

What is being assessed in the lesson must align directly to the objective created. This should not be a summary of the lesson, but a measurable statement demonstrating what the student will be assessed on at the completion of the lesson. For instance, “understand” is not measurable, but “describe” and “identify” are.

For example:

Given an unlabeled map outlining the 50 states, students will accurately label all state names.

 

 

Academic Language: In this section, include a bulleted list of the general academic vocabulary and content-specific vocabulary you need to teach. In a few sentences, describe how you will teach students those terms in the lesson.

 

 

 

 

 

 

 

 

Resources, Materials, Equipment, and Technology: List all resources, materials, equipment, and technology you and the students will use during the lesson. As required by your instructor, add or attach copies of ALL printed and online materials at the end of this template. Include links needed for online resources.

 

 

 

 

 

 

 

 

 

 

Section 2: Instructional Planning

Anticipatory Set

Your goal in this section is to open the lesson by activating students’ prior knowledge, linking previous learning with what they will be learning in this lesson and gaining student interest for the lesson. Consider various learning preferences (movement, music, visuals) as a tool to engage interest and motivate learners for the lesson.

In a bulleted list, describe the materials and activities you will use to open the lesson. Bold any materials you will need to prepare for the lesson.

 

For example:

· I will use a visual of the planet Earth and ask students to describe what Earth looks like.

· I will record their ideas on the white board and ask more questions about the amount of water they think is on planet Earth and where the water is located.

 

Time Needed
Multiple Means of Representation

Learners perceive and comprehend information differently. Your goal in this section is to explain how you would present content in various ways to meet the needs of different learners. For example, you may present the material using guided notes, graphic organizers, video or other visual media, annotation tools, anchor charts, hands-on manipulatives, adaptive technologies, etc.

In a bulleted list, describe the materials you will use to differentiate instruction and how you will use these materials throughout the lesson to support learning. Bold any materials you will need to prepare for the lesson.

 

For example:

· I will use a Venn diagram graphic organizer to teach students how to compare and contrast the two main characters in the read-aloud story.

· I will model one example on the white board before allowing students to work on the Venn diagram graphic organizer with their elbow partner.

 

 

 

 

 

 

 

 

 

 

 

Explain how you will differentiate materials for each of the following groups:

 

· English language learners (ELL):

 

 

 

· Students with special needs:

 

 

 

· Students with gifted abilities:

 

 

 

· Early finishers (those students who finish early and may need additional resources/support):

 

 

 

 

Time Needed
Multiple Means of Engagement

Your goal for this section is to outline how you will engage students in interacting with the content and academic language. How will students explore, practice, and apply the content? For example, you may engage students through collaborative group work, Kagan cooperative learning structures, hands-on activities, structured discussions, reading and writing activities, experiments, problem solving, etc.

In a bulleted list, describe the activities you will engage students in to allow them to explore, practice, and apply the content and academic language. Bold any activities you will use in the lesson. Also, include formative questioning strategies and higher order thinking questions you might pose.

 

For example:

· I will use a matching card activity where students will need to find a partner with a card that has an answer that matches their number sentence.

· I will model one example of solving a number sentence on the white board before having students search for the matching card.

· I will then have the partner who has the number sentence explain to their partner how they got the answer.

 

 

 

 

 

 

 

 

Explain how you will differentiate activities for each of the following groups:

· English language learners (ELL):

 

 

 

· Students with special needs:

 

 

 

· Students with gifted abilities:

 

 

 

· Early finishers (those students who finish early and may need additional resources/support):

 

 

 

 

Time Needed
Multiple Means of Expression

Learners differ in the ways they navigate a learning environment and express what they know. Your goal in this section is to explain the various ways in which your students will demonstrate what they have learned. Explain how you will provide alternative means for response, selection, and composition to accommodate all learners. Will you tier any of these products? Will you offer students choices to demonstrate mastery? This section is essentially differentiated assessment.

In a bulleted list, explain the options you will provide for your students to express their knowledge about the topic. For example, students may demonstrate their knowledge in more summative ways through a short answer or multiple-choice test, multimedia presentation, video, speech to text, website, written sentence, paragraph, essay, poster, portfolio, hands-on project, experiment, reflection, blog post, or skit. Bold the names of any summative assessments.

Students may also demonstrate their knowledge in ways that are more formative. For example, students may take part in thumbs up-thumbs middle-thumbs down, a short essay or drawing, an entrance slip or exit ticket, mini-whiteboard answers, fist to five, electronic quiz games, running records, four corners, or hand raising. Underline the names of any formative assessments.

For example:

Students will complete a one-paragraph reflection on the in-class simulation they experienced. They will be expected to write the reflection using complete sentences, proper capitalization and punctuation, and utilize an example from the simulation to demonstrate their understanding. Students will also take part in formative assessments throughout the lesson, such as thumbs up-thumbs middle-thumbs down and pair-share discussions, where you will determine if you need to re-teach or re-direct learning.

 

 

 

 

 

 

 

 

Explain how you will differentiate assessments for each of the following groups:

· English language learners (ELL):

 

 

 

 

· Students with special needs:

 

 

 

· Students with gifted abilities:

 

 

 

· Early finishers (those students who finish early and may need additional resources/support):

 

 

 

 

Time Needed
   
Extension Activity and/or Homework

Identify and describe any extension activities or homework tasks as appropriate. Explain how the extension activity or homework assignment supports the learning targets/objectives. As required by your instructor, attach any copies of homework at the end of this template.

 

 

 

 

 

 

Time Needed

 

 

© 2019. Grand Canyon University. All Rights Reserved.

Cyberbullying And The First Amendment

A student notifies you that she has been subjected to bullying through a classmate’s Facebook page. In 500-750 words, address the following:

  1. Steps you are required to take that are consistent with state statutes, your district’s school board policies, faculty handbook, and the student handbook;
  2. Any First Amendment arguments you think the student with the Facebook page may raise; and
  3. Responses you could make to the First Amendment arguments that are consistent with the cases in the assigned readings.

APA format is not required, but solid academic writing is expected.

This assignment uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion.

You are required to submit this assignment to Turnitin.

5 Case Studies – Book Needed: Leadership: Theory and Practice (Sixth Edition), by Peter G. Northouse

Answer the questions at the end of case studies 16.3, 5.3, 7.2, and 12.1.

When answering the questions, students should incorporate key aspects of the lesson into their answers rather than simply answering the question.

 

 

http://caselaw.findlaw.com/us-5th-circuit/1158545.html

 

Click on the link above and read the LAXTON V. GAP INC. law case. After reading the case, please answer the following questions regarding the LAXTON V. GAP INC. case:

1. What was the legal issue in this case?

2. What did the court decide?

3. What reason does the employer provide for Laxton’s termination?

4. What is the evidence of pretext in this case?

5. What is the evidence of a discriminatory motive in this case?

 

6. Do you agree with the court’s decision? Why or why not?

 

The fifth case study is attached below.

Case Study – Higher Education – Transformational Leadership

 

Dr. Jerri Stelars became the president of her alma mater 20 years after graduating. This was her dream job, and she was well prepared to lead the university thanks to a distinguished career in higher education administration. Most of her professional expertise was in fundraising and building relationships with influential people who could help the universities where she worked. Her passion for her alma mater, coupled with her professional experience, was critically needed at a once proud but now deteriorating university. Although this public university appeared tired, it was still known for providing its constituents quality service and an affordable education.

Initially, Dr. Stelars easily won people over with her relational leadership skills and obvious passion for her university. Her charisma, especially when relating to people one-on-one, was infectious throughout the institution, from the maintenance personnel to the trustees who oversaw the university. Her focus was to re-create the university as an institution that would attract the best students and elevate it from a regional school to a force that competed with prestigious universities on a national level. Dr. Stelars was also extremely effective in conveying to influencers her passion for rebuilding a campus that had been neglected over the past 30 years. Funds began flowing into the university from many different sources, both private and governmental, which she used to rebuild the campus into one of the most beautiful in the region.

Dr. Stelars was also instrumental in establishing needed and desired academic programs on campus. Due to internal state politics, the university had not been able to offer several programs; through her leadership, Dr. Stelars influenced state legislators to allow the university to offer them. She also worked with other, more prestigious universities within the state to offer innovative joint programs that filled a void in the university’s curriculum. Both of these accomplishments were favorably received by the people in the region, but with only neutral enthusiasm by the faculty.

Early in her dealings with the trustees, Dr. Stelars experienced a situation in which the trustees questioned her judgment. Since that experience, she ensured that all information and all actions were thoroughly vetted so that she gained from the trustees the outcomes she thought the university needed. Through these successful actions and the visible rebuilding of the campus, Dr. Stelars was able to influence the trustees to grant her an extended contract that guaranteed her present contract for the next decade. The trustees offered her the contract as a preventive measure to ensure she would not leave the university for another, more attractive offer. After being granted the long-term contract, Dr. Stelars became more confident and more demanding of those who worked with her, but she still maintained superb relationships with the trustees and outside influencers.

Throughout her time as president, Dr. Stelars made significant improvements to the university campus, but the work was done on her terms, especially after she secured her extended contract. Her influence with the trustees ensured that any dissenting views were either not heard or were marginalized. Faculty became confused about the future of the university: many still believed that the university needed to serve the people of the region, while others thought the mission was to compete on a national level. Upon close examination, one could not identify the values or the vision of the university, as multiple sets of values existed. The vision was captured in a tag line that sounded great but was difficult to interpret, and when asked to explain the vision, Dr. Stelars would describe a somewhat vague image of the future. Although she remained extremely popular with the trustees, students, and other constituents, faculty and staff were becoming more disgruntled. During the difficult economic conditions of the latter part of the first decade of the 21st century, university employees received minimal or no pay raises, while there was wide speculation that Dr. Stelars received substantial raises as a result of her extended contract. Enrollments that initially grew rapidly during the early years of her tenure became stagnant, although the trend at the university was nearly universal among the other universities in the region.

Dr. Stelars continued as president of her beloved university. Many constituents still saw her as extremely effective as the campus continued its structural renaissance. However, others began to question whether Dr. Stelars was the leader needed to create a pathway into the future for the university.

 

1. Which of the five fundamental practices according to the Kouzes and Posner model of leadership does Dr. Stelars best exemplify?

 

2. Is Dr. Stelars a transformational or transactional leader and why?

 

3. Which aspect of transformational leadership (idealized influence, intellectual stimulation, individualized consideration, and inspirational motivation) does Dr. Stelars best exemplify and why?

 

4. How could a lifetime contract for a key leader affect the transformational process within an organization?

 

5. Is Dr. Stelars a pseudotransformational leader?

Choosing An Observational System

As a leader in the field of early childhood education, it is important that you understand principles of effective assessment and appropriate strategies for documenting what you are observing. The actual observing of children is fairly straightforward, but observation without documentation only provides a partial picture. It is the detailed records kept over time that reveal growth in many areas. “There are multiple published and unpublished classroom observation systems available for use, and deciding among them is the first step in putting an observational system to work in your organization” (Stuhlman, Hamre, Downer, & Pianta, n.d., p. 2). For this assignment, you will model, in the form of a presentation, the depth of your understanding of each type of documentation technique. This includes not only the advantages and disadvantages of each, but also the proper procedure and appropriate times to use each.

To prepare for this assignment, read How to Select the Right Classroom Observation Tool. Then, consider the following scenario as a basis for your presentation:

With all of the focus surrounding assessment recently, as the leader in your organization, you have decided now is a good time to provide professional development on this topic for your staff. Your goal is to help your staff conduct more effective observations in their classrooms.

Your presentation should include the following, using PowerPoint or Google Slides with voice narration:

  • Create a minimum of three learning objectives for your presentation. These objectives should guide your presentation and should be listed on the first slide.
  • Summarize each observation tool your center or school uses, including, but not limited to, anecdotal records, running records, time sampling, and event sampling.
  • Describe how each observation type is used (for planning, etc.) and the benefits of each.
  • Determine what ethical issues must be considered when using these tools. Cite specific examples from the NAEYC Code of Ethical Conduct.
  • Explain how each observation type fits within the “High Priority Questions” from How to Select the Right Classroom Observation Tool (p. 2) and, thus, guides your decision-making process.
  • Describe the specific areas of development each observation type is best suited for. Explain why.
  • Explain how each observation type will assist with the identification of developmental concerns, and describe the intervention strategies needed to support those concerns.
  • Using one of the Colorado Department of Education videos, conduct a walk-through (a step-by-step explanation) of how to observe one of the children using one of the four documentation forms of your choice.

Source Requirement:

  • At least two scholarly peer-reviewed or credible sources

Writing and Formatting Expectations:

  • Title Page: Must include the following:
    • Title
    • Student’s name
    • Course name and number
    • Instructor’s name
    • Date submitted
  • Academic Voice: Academic voice is used (avoids casual language, limits the use of “I,” and is declarative).
  • Purpose and Organization: Demonstrates logical progression of ideas.
  • Syntax and Mechanics: Writing displays meticulous comprehension and organization of syntax and mechanics, such as spelling, grammar, and punctuation.
  • APA Formatting: Papers are formatted properly and all sources are cited and referenced in APA style.
  • Suggested Assignment Length: Your PowerPoint presentation should be 8 slides in length (not including title and reference slides).

    Part 3 of a 5-Part Series:

    A Practitioner’s Guide to Conducting Classroom Observations: What the Research Tells Us About Choosing and Using Observational Systems to Assess and Improve Teacher Effectiveness

    Megan W. Stuhlman, Bridget K. Hamre, Jason T. Downer, & Robert C. Pianta, University of Virginia. This work was supported by a grant from the WT Grant Foundation.

    How to Select the Right Classroom Observation Tool

    This booklet outlines key questions that can guide observational tool selection. It is intended to provide guiding questions that will help users organize their thinking about what they want from an observation tool and help them to find instruments well aligned with their strategic goals.

     

     


    Choosing the Right Observational Tool: Factors to Consider

    There are multiple published and unpublished classroom observation systems available for use, and deciding among them is the first step in putting an observational system to work in your organization. The primary advantage of using an existing observation tool is that it saves a great deal of time and resources that would need to be put into developing an instrument with even minimal levels of reliability and validity for predicting outcomes of interest.

    When reviewing such tools, the following questions can be used to guide the decision-making processes regarding which observation system is best suited to the needs of a particular organization.

    Tier 1: High Priority Questions

    • Has this tool been shown to produce reliable scores across observers and over time?

    • Are the outputs (scores) from this observation protocol proven to relate to outcomes of interest in our population (i.e., growth in students’ academic skills, students’ prosocial behaviors, teacher retention, students’ reports of feelings of belonging, etc.)? In other words, is the instrument valid for our intended purpose?

    • What questions about classrooms does my organization want answered? Is the scope of this tool aligned with the questions about classrooms and teachers’ practices that we want to address?

    • Are the observation and scoring protocols standardized and clear?

    Tier 2: Additional Considerations

    • Does the system include complementary sources of information (such as student surveys, etc.) that could be used to obtain a more complete portrait of the classroom?

    • Does the observation include guidelines and support for using findings for professional development purposes?

    • Is the time required for observation feasible for your organization?

    Each of these questions is reviewed in more detail below.

     

    Does the observation include reliability information?

    Instrument reliability is a key consideration in selecting an observational assessment tool. Instrument reliability means that whatever qualities a given tool is measuring, it should measure those qualities consistently. In observational assessments of classrooms, a tool that produces reliable scores will output the same score regardless of variation in the classroom that is outside of the scope of the tool and regardless of who is making the ratings.

    For example, just as a yardstick registers the same number of inches when measuring a given sheet of paper, regardless of whether that paper is measured during the day or at night, inside or outside, or who is holding the yardstick, a tool that measures teachers’ ability to promote student language should produce the same scores for the same behaviors, regardless of whether these behaviors occur during math or literacy, whole group or small group, and regardless of who is making the ratings.

    No observation of teaching practices will produce perfectly reliable scores. We know that despite high levels of training, observers will sometimes make different judgments. We also know that certain classroom activities may influence scores on observational tools. The goal is to choose an observational tool that can produce relatively high-reliability scores and to be aware of potential biases.

    There are several aspects of reliability. Perhaps the two most relevant when considering classroom observation systems are stability over time and consistency across observers. With regard to stability over time, assuming a goal is to detect consistent and stable patterns of teachers’ behaviors, users need to know that constructs being assessed represent a stable characteristic of the teacher across situations in the classroom and are not random occurrences or behaviors that are linked exclusively to the particular moment of observation. If ratings shift dramatically and randomly from one observation cycle or day or week to the next, these ratings are not likely to represent core aspects of teachers’ practice.

    Conversely, if scores are at least moderately consistent across time, they likely represent something stable about the set of skills that teachers bring into the classroom setting, and feedback and support around these behaviors is much more likely to resonate with teachers and to function as useful levers for helping them change their practice. It is advantageous for observational tools to provide information on their test-retest reliability, or the extent to which ratings on the tool are consistent across different periods of time (within a day, across days, across weeks, etc.).

    Key Concept – Reliability. Look for instruments that provide scores that are:

    • Consistent over time, unless change is expected.

    • Consistent across observers.

    A notable exception around the criterion of stability over time as a marker for reliability is when teachers are engaged in professional development activities or are otherwise making intentional efforts to shift their practice. In these cases, as well as in cases where an organization’s curriculum is changing or new program-wide goals are being implemented, a lack of stability in observations of teacher behaviors may well represent true change in core characteristics and not just random (undesired) fluctuation over time. In these cases, it would be desirable to collect data on the extent of change and specific areas where change is observed.

    With regard to stability across observers, in order for results of observations to be useful at scale, training protocols and provision of scoring directions must be clear and extensive enough to produce an acceptable level of agreement across observers. If there is very low agreement between two or more observers’ ratings of the same observation period, the degree to which the ratings represent the teachers’ behavior rather than the observers’ subjective interpretations of that behavior or personal preferences is unknown.

    Conversely, if two independent observers can consistently assign the same ratings to the same patterns of observed behaviors, this speaks to the fact that ratings truly represent attributes of the teacher as defined by the scoring system, as opposed to attributes of the observer. Therefore, users may wish to select systems for which there is documented consensus among trained raters on whether or not or to what extent teachers are engaging in the behaviors under consideration.
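
    The idea of “consistency across observers” can be made concrete with a small calculation. The sketch below is not part of the original booklet; the function names and sample ratings are illustrative assumptions. It compares two observers’ ratings of the same observation cycles using percent agreement and Cohen’s kappa, a common chance-corrected agreement statistic.

```python
# Illustrative sketch only: comparing two observers' ratings of the same
# observation cycles with percent agreement and Cohen's kappa.
from collections import Counter


def percent_agreement(r1, r2):
    """Share of observation cycles on which the two observers gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)


def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(r1)
    observed = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Probability the raters would agree by chance, given each rater's rating distribution.
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical ratings (high / moderate / low) from two trained observers
# watching the same six observation cycles.
observer_a = ["high", "moderate", "high", "low", "moderate", "high"]
observer_b = ["high", "moderate", "moderate", "low", "moderate", "high"]

print(f"Percent agreement: {percent_agreement(observer_a, observer_b):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(observer_a, observer_b):.2f}")
```

    In practice, published observation systems report these statistics from their own reliability studies; the sketch is only meant to show what agreement across observers means numerically.
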

    Does the tool provide information on validity?

    Validity represents the degree to which the ratings produced by the observation system are associated with the student or teacher outcomes about which the observation is designed to provide information. Along with reliability considerations, validity is one of the most important aspects to consider when selecting an observation instrument. Different observation systems have varying levels of data available to show how closely aligned the outputs of observations are with students’ performance in a specified area, students’ growth on specified skill sets, or other outcomes of interest.

    Selecting instruments with demonstrated validity is critical to making good use of observational methodology because this information allows users to have confidence that the information they are gathering is relevant to the outcomes they are interested in, and that the types of behaviors outlined in the system can be held up as goals for high-quality teacher practice.

    Without validity information, users have no such assurances. We must know that our assessment tools are directly and meaningfully related to our outcomes of interest before we begin using them either in professional development or accountability frameworks.

    A system may well be valid for one set of outcomes but not for another, so clarity around outcomes of interest is important. For example, an observation system may include validity data regarding the prediction of students’ academic achievement during that school year, but it may demonstrate no relation to student drop-out rates in subsequent years. If the objective of conducting the observation is to evaluate whether teachers are engaging in behaviors that promote students’ learning over the course of the year, this instrument may be well-suited for that purpose. However, if the objective is to determine whether teachers are enacting behaviors that will prevent drop-out, a different observation with documented links to drop-out rates may be preferable.

    If a user has a particular observation tool that is well aligned with the questions they want answered about classroom practice and meets the criteria summarized previously, there is always the possibility that no data will be available on validity for the particular outcomes that the user is interested in evaluating. In these instances, it would certainly be possible to use the observation in a preliminary way and evaluate whether it is, in fact, associated with outcomes of interest. For example, a district or organization could conduct a pilot test with a subgroup of teachers and students to determine whether scores assigned using the observation tool are associated with the outcomes of interest. This testing would provide some basis for using the instrument for accountability or evaluative purposes.

    Key Concept – Validity. Look for instruments that provide scores with proven links to outcomes of interest.

    In sum, the importance of selecting an observation system that includes validity information cannot be overstated. It may be more difficult to find instruments that have been validated for your purposes, but this is truly essential for making observational methodology a useful part of teacher evaluation and support programs. If the teacher behaviors that are evaluated in an observation are known to be linked with desired student outcomes, teachers will be more willing to reflect on these behaviors and buy in to observationally-based feedback, teacher educators and school personnel can feel confident establishing observationally-based standards and mechanisms for meeting those standards, and educational systems, teachers, and students will all benefit.

    What questions about classrooms do I want answered? Do the scope and design of the instrument lend themselves to addressing these questions?

    Scope of Observations. Different instruments provide users with different types of information about classrooms. Some are inclusive of multiple varied aspects of teaching practice, providing data on layers of setting quality including the physical environment, the types of activities observed in the classroom, and the teacher’s execution of professional responsibilities such as record keeping and communicating with families.

    Others adopt a highly focused approach, such as exclusively attending to a highly detailed and specific set of instructional interactions that take place within short observation windows or focusing on comparisons between the experiences of specific groups of students within the classroom.

    Still others strike a balance in terms of scope, including information on a variety of teacher and student behaviors but not including information that would require knowledge outside of what is obtained during specified observation windows (i.e., not including how the teacher communicates with parents, makes lesson plans, etc.).

    Users may wish to begin the selection process by defining the goals that their organization has in using an observation tool. After having defined the desired outcome, users can select a measurement tool that is well aligned with their objectives.

    Age Range Covered. In addition to ensuring a match between the scope of what is assessed by the instrument and system goals, users are also advised to attend to the age range that the instrument was designed for and the grade levels from which data on the psychometric properties of the instrument have been obtained. For example, if your goal is to assess fourth-grade classrooms, it is ideal to use an instrument that was generated with this developmental level in mind and has been validated for use with this age group.

    Global Versus Content Specific. Relatedly, some users may want to focus more on the provision of general support for learning, whereas others may have programmatic goals that focus more specifically on quality of instruction in different content areas such as mathematics or reading. There are instruments available that assess implementation of content-specific learning supports, as well as tools that focus on supports linked to student growth and development across content areas. If your organization has a particular interest in a certain content area, you may wish to supplement a protocol for observing generalized supports with one that includes specific interactive practices relevant to your content area of focus.

    CASE STUDY #1: Choosing an Observation Tool for a Specific Curriculum

    The Fairmont school district is considering mandating the use of a new mathematics curriculum in all of its schools. A small number of teachers who are pilot testing the new curriculum have been trained on this approach to teaching mathematics and have been provided with all needed materials. The district now wants to evaluate the extent to which teachers using this curriculum are incorporating high-quality strategies for teaching mathematics in comparison with the extent to which teachers in a control group of schools are incorporating such strategies in teaching mathematics, in order to help them decide whether this curriculum may be a good choice for district-wide use.

    This school district may wish to use an observation protocol focused on research-based definitions and descriptions of high-quality mathematics instruction or to supplement a more generalized observational protocol with a content-specific protocol for mathematics instruction.

    CASE STUDY #2: Choosing a Generalized Observational Tool

    The Lakeview school district wishes to conduct an observational assessment of all teachers in order to gain a better understanding of system-wide areas of strength and challenge so that they can plan for in-service programming and create individualized professional development plans for teachers. Observers will conduct multiple observations per day, so these observations will occur at different times of day and during different activities for different teachers.

    This district would likely benefit from use of a protocol designed to assess generalized supports for learning that produce benefits for student development across content areas, as not all teachers will be observed teaching the same content areas.

    CASE STUDY #3: Choosing an Observational Tool for Merit Pay and Tenure

    Franklin County school district wants to outline a structure for merit pay and tenure decisions that includes quality of observed teaching behaviors as one of its components. Therefore, the county decides to select an assessment instrument that has shown a relationship to student outcomes at different levels of quality. In other words, one with research support demonstrating that incremental gains in the quality of the measured teaching practices result in incremental gains in student performance.

    They then stipulate two options for sufficient practice in this component: 1) teachers demonstrate high-quality teaching practices in initial and follow-up assessments, or 2) teachers demonstrate improvement over time in quality of teaching practices/positive response to professional development support, as indicated by increasing scores over time.

    Global Rating Methodology Versus Frequency Counts of Behaviors. An additional consideration that falls in this scope category concerns the degree to which observational systems capture information on the frequencies of certain teacher behaviors or on more holistically defined patterns of behavior. Measures using time-sampling methodology ask users to count the number of specific types of behaviors observed. Global rating methodology guides users to watch for patterns of behavior and make summative judgments about the presence or absence of these behaviors.

    Examples of behaviors assessed by time-sampling measures include: time spent on literacy instruction, the number of times teachers ask questions during instructional conversations, and the number of negative comments made by peers to one another. In contrast, global rating systems may assess the degree to which literacy instruction in a classroom matches a description of evidence-based practices, the extent to which instructional conversations stimulate children’s higher-order thinking skills, and the extent to which classroom interactions contain a high degree of negativity, both between teachers and students and among peers.

    There are advantages and disadvantages to each type of system. An advantage of global ratings is that they assess higher-order organizations of behaviors in ways that may be more meaningful than looking at the discrete behaviors in isolation. For example, teachers’ positive emotions and smiling can have different meanings and may be interpreted differently depending on the ways in which students in the classroom respond. In some classrooms teachers are exceptionally cheerful, but their emotions appear very disconnected from those of the students. In other classrooms teachers are more subdued in their expressed positive emotions, but there is a clear match between this level of emotional expression and that of the students.

    A measure that simply counted the number of times a teacher smiled at students would miss these more nuanced interpretations. However, an instrument characterized by time-sampling methods, with a focus on frequencies of specific behaviors, may lend itself well to easy alignment with the evaluation of certain interventions. For example, if a goal is to increase the number of times that teachers provide students with specific and focused feedback rather than giving no feedback or simply saying “yes” or “no,” an instrument using time-sampling methods could provide very concrete data on the extent to which an intervention impacted this specific behavior by counting the frequencies of specific and focused feedback before and after the intervention (or in classrooms that did and did not receive the intervention). Similarly, the success of an intervention designed to increase the amount of time spent in learning activities (versus “down time”) could be specifically evaluated using time-sampling methods as well.

    One other difference between these two approaches concerns the degree to which they are subject to observer effects. There tend to be more significant observer effects using global ratings than time-samplings of more discrete behaviors. This finding is not surprising given that global ratings tend to require greater levels of inference than do frequency approaches. Counting the number of times a teacher smiles requires much less inference than does making a holistic judgment about the degree to which a teacher fosters a positive classroom climate. This point emphasizes the need for adequate training and strategies for maintaining reliability among classroom observers, issues considered in greater detail in the next sections.

    Key Concept – Observational Methods

    Time-Sampling Methodology/Frequency Counts: most adept at highlighting differences within a specific teacher’s practices during different specific teaching activities.

    Global Rating Methodology: most adept at highlighting stable teacher characteristics and at providing information that differentiates between teachers.

    Another factor to consider is how much of the variance in these ratings can be attributed to stable characteristics of the classroom versus factors that change over time as a result of subject matter, number of students, time of day, etc. Evidence suggests that time-sampled codes show little classroom-level variance, in contrast to global ratings, in which the bulk of the variance was at the classroom level. This indicates that time-sampled codes are not as sensitive to differences between teachers and classrooms as are global ratings. This is an important consideration for users interested in obtaining information about different teachers’ individualized strengths and areas of challenge.

    Is the instrument standardized in terms of administration procedures? Does it offer clear directions for conducting observations and assigning scores?

    Once you have clarified your purpose and goals in conducting classroom observations, it is important to select an observation system that provides clear instructions for use, both in terms of how to set up and conduct observations and how to assign scores. This is an essential component of a useful observation system: without standardized directions to follow, different people are likely to use different methods, which severely limits the potential for agreement between observers when making ratings, and thus hampers system-wide applicability.

    There are three main components of standardization that users may consider evaluating in an observation instrument: 1. training protocol; 2. observation protocol; 3. scoring directions.

    Training Protocol. With regard to the training protocol, are there specific directions for learning to use the instrument? Is there a comprehensive training manual or user’s guide? Are there videos or transcripts with gold standard scores available that allow for scoring practice? Are there other procedures in place that allow for reliability checks, such as having all or a portion of observers rate the same classroom (live, via video, or via transcript) to ensure that their scoring is consistent? Are there guidelines around training to be completed before using the tool (i.e., do all observers need to pass a reliability test, observe in a certain number of classrooms, be consistent with colleagues at a certain level)?

    Observation Protocol. Users are also advised to look for direction and standardization in terms of the length of observations, the start and stop times of observations (are there predetermined times, times connected with start and end times of lessons/activities, or some other mechanism for determining when to begin and end?), direction around time of day or specific activities to observe, as well as whether observations are announced or unannounced, and other related issues.

    Scoring Directions. With regard to scoring, users are advised to look for clear guidelines. Do users score during the observation itself or after the observation? Is there a predefined observe/score interval? How are scores assigned? Is there a rubric that guides users in matching what they observe with specific scores or categories of scores (i.e., high, moderate, low)? Are there examples of the kinds of practices that would correspond to different scores? Are scores assigned based on behavior counts or qualitative judgments? How are summative scores created and reported back to teachers?

    Key Concept – Standardization Procedures. Observations should be standardized around:

    • Training protocol

    • Observation protocol

    • Scoring directions

    CASE STUDY #4: Importance of Observational Protocols

    A teacher preparation program is looking for a way to assess students’ performances at the beginning and end of their student teaching work, during which time they are also taking a course on effective teaching practice. They find “Observational Protocol A,” which has six clearly defined, theoretically based, 10-point scales that observers use to rate teacher practice. Several members of the faculty read the definition of the six scales and agree that the teaching behaviors the scale assesses are aligned with the course objectives, as well as the broader goals of the program, and therefore would be good targets for assessment. However, the system does not include training or observational protocols or explicit directions for scoring. As a consequence, it is used quite differently by two faculty members.

    When Professor Jones makes observations, he has arranged the observation time in advance with the teachers. He arrives at the appointed time, but does not begin the observation until he can tell that the teacher is ready to begin the lesson. He ends the observation as the teacher ends the lesson. He takes detailed notes about the teachers’ practice along the six dimensions. When scoring, he reasons that if he sees teachers engaging in the behaviors under consideration several times, they should get “full credit,” or a 10, on the scale. Professor Allen also conducts observations using the same well defined scales, but her visits are unannounced. She typically arrives at the beginning of the school day and begins taking notes as soon as she arrives, and observes for two consecutive hours, regardless of start and stop time of activities. In terms of scoring, she reasons that teachers start at a “1” level, and she moves the score up a point on the scale every time the teacher successfully engages in the behavior under consideration. Given these differences in protocol, it is likely that Professor Jones’ scores could be systematically higher than Professor Allen’s.

    We can see from this example that even with well defined and theoretically sound scales, a clear observation and scoring protocol that all observers follow is extremely important in terms of obtaining scores that are consistent across observers. In this example, note that significantly different scores are likely to result from Professor Jones’ observations and Professor Allen’s observations as a result of their different administration and scoring techniques, and that these scores may or may not reflect real differences between the two teachers they observed. For example, if Professor Jones used his interpretation of the protocol to conduct initial start-of-student-teaching observations and Professor Allen used her interpretation of the protocol to conduct the end-of-student-teaching observations, any true gains in teaching practice could be obscured, and the preparation program might conclude that the course and teaching experience did not function as effective preparation when, in fact, if the teachers were evaluated using the same protocol on both measurement occasions, they might have shown improvements.

    The four preceding factors represent key areas to consider when selecting an observation tool. Above and beyond these core factors, other potential considerations include:

    Does the system include complementary sources of information?

    Obtaining information about classrooms from multiple sources and from different perspectives (e.g., the teachers’ own perspective, students’ perspectives, the perspective of someone generally familiar with the classroom on a routine basis) can provide a more comprehensive picture of the classroom environment. This can also be helpful in terms of providing constructive feedback – one could seek out coherent patterns in responses across observers/raters.

    For example, having a teacher engage in a self-study or self-assessment in conjunction with structured observations made by neutral observers may be a useful way of facilitating goal setting and problem solving with teachers. Likewise, obtaining students’ perspectives can be an invaluable resource in understanding how specific teacher behaviors impact students’ subjective experiences of the classroom.

    Does the observation include guidelines and support for using findings for professional development purposes?

    As the goals of conducting observations include not only gathering information on the quality of classroom processes but also using that information to help teachers improve their practices (and, eventually, student outcomes), choosing observation systems that include a protocol to assist in translating observation data into professional development planning is desirable. Information such as national norms and threshold scores defining “good enough” levels of practice (levels of quality that result in student improvement), or expected improvements in response to intervention, would be extremely useful to have, although few, if any, instruments currently provide this kind of information to users.

    Also useful are guidelines or frameworks for reviewing results with teachers, suggested timelines for professional development work, protocols that can be given to teachers, placed in files, and easily translated into system-wide databases, and handouts with suggested competence-building techniques. Few observation systems provide these types of resources at this time.

    Is the time demand for conducting the observation workable within my system?

    Different school systems have different resources available to devote to classroom observation. Some schools have personnel available to spend full days in classrooms in order to obtain data on important aspects of classroom functioning. Other school systems have less time available on a per classroom basis. In selecting an observational assessment instrument, it is vitally important that the instrument is used in practice in the same standardized ways it was used in development in order to obtain results with the expected levels of reliability and validity. Some instruments have been tested and validated using longer periods of observation than others. Users may wish to generate a realistic approximation of how they will be able to allocate observation time before selecting an assessment tool. An instrument that can be used reliably and with validity within the parameters of that time budget can then be selected.

    The University of Virginia Center for Advanced Study of Teaching and Learning (CASTL) focuses on the quality of teaching and students’ learning. CASTL’s aim is to improve educational outcomes through the empirical study of teaching, teacher quality, and classroom experience from preschool through high school, with particular emphasis on the challenges posed by poverty, social or cultural isolation, or lack of community resources.