Assignment: Analyzing Focus Group Findings

Imagine that two focus groups have been conducted in an Asian American and immigrant community in a large city. The rationale for conducting the qualitative study was that many Asian Americans and immigrants have been noted to be reluctant to seek mental health services. To further understand this issue, service providers, including social workers, counselors, doctors, and nurses, were recruited to discuss the barriers to implementing mental health services targeted to Asian Americans and immigrants. After the focus groups were transcribed, two research assistants were hired to conduct a content analysis of the transcripts. Refer to the Week 5 Handout: Content Analysis of Focus Groups.

As the social worker, you have been asked to analyze the focus group data and are charged with working with an advisory board in the community to formulate social work practice recommendations using the ecological model.

To prepare for this Assignment, review Week 5 Handout: Content Analysis of Focus Groups.

By Day 7

Submit a 3- to 4-page report addressing the following:

  1. Discuss the themes found in the Week 5 Handout: Content Analysis of Focus Groups. Based on these data, what is your analysis of the current barriers to services?
  2. Create two social work recommendations to address a current barrier and explain how the recommendation proposed addresses the findings.
  3. Discuss how you would collaborate with the research stakeholders (e.g. service providers and community members) to ensure that the data are interpreted accurately and that the practice recommendations made will be culturally appropriate.
  4. Critically reflect on your own culture and explain how your cultural values and beliefs may have influenced how you interpreted the focus group data. What specific cultural knowledge do you think you need to obtain to conduct culturally sensitive research with this group?

Support the assignment with references using assigned readings and/or additional scholarly literature.

I attached some of the resources provided for this week, however, feel free to add any other relevant ones you would like to utilize. thank you so much.

Assignment: Developing A Program Evaluation

**Attached are some resources.

A program I would like to focus on is federal housing programs that help the homeless. Thanks.

To ensure the success of a program evaluation, a social worker must generate a specific detailed plan. That plan should describe the goal of the evaluation, the information needed, and the methods and analysis to be used. In addition, the plan should identify and address the concerns of stakeholders. A social worker should present information about the plan in a manner that the stakeholders can understand. This will help the social worker receive the support necessary for a successful evaluation.

To prepare for this Assignment, identify a program evaluation you would like to conduct for a program with which you are familiar. Consider the details of the evaluation, including the purpose, specific questions to address, and type of information to collect. Then, consider the stakeholders that would be involved in approving that evaluation. Review the resources for samples of program evaluations.

By Day 7

Submit the following:

  • A 1-page stakeholder analysis that identifies the stakeholders, their role in the agency and any concerns that they might have about the proposed program evaluation
  • A 2- to 3-page draft of the program evaluation plan to submit to the stakeholders that:
    • Identifies the purpose of the evaluation
    • Describes the questions that will be addressed and the type of information that will be collected
    • Addresses the concerns of the stakeholders that you identified in your Stakeholder Analysis

      Program Evaluation Studies

      TK Logan and David Royse

      A variety of programs have been developed to address social problems such as drug addiction, homelessness, child abuse, domestic violence, illiteracy, and poverty. The goals of these programs may include directly addressing the problem origin or moderating the effects of these problems on individuals, families, and communities. Sometimes programs are developed to prevent something from happening, such as drug use, sexual assault, or crime. These kinds of problems, and programs to help people, are often what attracts many social workers to the profession; we want to be part of the mechanism through which society provides assistance to those most in need. Despite low wages, bureaucratic red tape, and routinely uncooperative clients, we tirelessly provide services that are invaluable but that at various times may be or become insufficient or inappropriate. But without conducting evaluation, we do not know whether our programs are helping or hurting, that is, whether they only postpone the hunt for real solutions or truly construct new futures for our clients. This chapter provides an overview of program evaluation in general and outlines the primary considerations in designing program evaluations.

      Evaluation can be done informally or formally. As consumers, we are constantly informally evaluating products, services, and information. For example, we may choose not to return to a store or an agency again if we did not evaluate the experience as pleasant. Similarly, we may mentally take note of unsolicited comments or anecdotes from clients and draw conclusions about a program. Anecdotal and informal approaches such as these generally are not regarded as carrying scientific credibility. One reason is that decision biases play a role in our “informal” evaluation. Specifically, vivid memories or strongly negative or positive anecdotes will be overrepresented in our summaries of how things are evaluated. This is why objective data are necessary to truly understand what is or is not working.

      By contrast, formal evaluations systematically examine data from and about programs and their outcomes so that better decisions can be made about the interventions designed to address the related social problem. Thus, program evaluation involves the use of social research methodologies to appraise and improve the ways in which human services, policies, and programs are conducted. Formal evaluation, by its very nature, is applied research.

      Formal program evaluations attempt to answer the following general question: Does the program work? Program evaluation may also address questions such as the following: Do our clients get better? How does our success rate compare to those of other programs or agencies? Can the same level of success be obtained through less expensive means? What is the experience of the typical client? Should this program be terminated and its funds applied elsewhere?

      Ideally, a thorough program evaluation would address more complex questions in three main areas: (1) Does the program produce the intended outcomes and avoid unintended negative outcomes? (2) For whom does the program work best and under what conditions? and (3) How well was a program model developed in one setting adapted to another setting?

      Evaluation has taken an especially prominent role in practice today because of the focus on evidence-based practice in social programs. Social work, as a profession, has been asked to use evidence-based practice as an ethical obligation (Kessler, Gira, & Poertner, 2005). Evidence-based practice is defined differently, but most definitions include using program evaluation data to help determine best practices in whatever area of social programming is being considered. In other words, evidence-based practice includes using objective indicators of success in addition to practice or more subjective indicators of success.

      Formal program evaluations can be found on just about every topic. For instance, Fraser, Nelson, and Rivard (1997) have examined the effectiveness of family preservation services; Kirby, Korpi, Adivi, and Weissman (1997) have evaluated an AIDS and pregnancy prevention middle school program. Morrow-Howell, Becker-Kemppainen, and Judy (1998) evaluated an intervention designed to reduce the risk of suicide in elderly adult clients of a crisis hotline. Richter, Snider, and Gorey (1997) used a quasi-experimental design to study the effects of a group work intervention on female survivors of childhood sexual abuse. Leukefeld and colleagues (1998) examined the effects of an HIV prevention intervention with injecting drug and crack users. Logan and colleagues (2004) examined the effects of a drug court intervention as well as the costs of drug court compared with the economic benefits of the drug court program.
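The cost comparison mentioned for the drug court study boils down to simple benefit-cost arithmetic: total measured benefits divided by total program costs. The sketch below illustrates that calculation in Python; the helper function and every figure in it are invented for demonstration and do not come from the chapter or from the Logan study.

```python
# Hypothetical illustration of a benefit-cost comparison of the kind a
# program evaluation might report. All figures below are invented.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Return the ratio of program benefits to program costs.

    A ratio above 1.0 means each dollar spent yields more than a
    dollar in measured economic benefit.
    """
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# Invented per-participant figures, in dollars:
program_cost = 4_500   # e.g., staffing, drug testing, court sessions
avoided_costs = 7_200  # e.g., reduced incarceration and re-arrest costs

ratio = benefit_cost_ratio(avoided_costs, program_cost)
print(f"Benefit-cost ratio: {ratio:.2f}")  # prints "Benefit-cost ratio: 1.60"
```

A real evaluation would, of course, have to justify which benefits are counted, over what time horizon, and compared against what alternative; the arithmetic itself is the easy part.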

      Basic Evaluation Considerations

      Before beginning a program evaluation, several issues must be initially considered. These issues are decisions that are critical in determining the evaluation methodology and goals. Although you may not have complete answers to these questions when beginning to plan an evaluation, these questions help in developing the plan and must be answered before an evaluation can be carried out. We can sum up these considerations with the following questions: who, what, where, when, and why.

      First, who will do the evaluation? This seems like a simple question at first glance. However, this particular consideration has major implications for the evaluation results. Program evaluators can be categorized as being either internal or external. An internal evaluator is someone who is a program staff member or regular agency employee, whereas an external evaluator is a professional, on contract, hired for the specific purpose of evaluation. There are advantages and disadvantages to using either type of evaluator. For example, the internal evaluator probably will be very familiar with the staff and the program. This may save a lot of planning time. The disadvantage is that evaluations completed by an internal evaluator may be considered less valid by outside agencies, including the funding source. The external evaluator generally is thought to be less biased in terms of evaluation outcomes because he or she has no personal investment in the program. One disadvantage is that an external evaluator frequently is viewed as an “outsider” by the staff within an agency. This may affect the amount of time necessary to conduct the evaluation or cause problems in the overall evaluation if agency staff are reluctant to cooperate.

       

       


      Second, what resources are available to conduct the evaluation? Hiring an outside evaluator can be expensive, while having a staff person conduct the evaluation may be less expensive. So, in a sense, you may be trading credibility for less cost. In fact, each methodological decision will have a trade-off in credibility, level of information, and resources (including time and money). Also, the amount and level of information as well as the research design will be determined, to some extent, by what resources are available. A comprehensive and rigorous evaluation does take significant resources.

      Third, where will the information come from? If an evaluation can be done using existing data, the cost will be lower than if data must be collected from numerous people such as clients and/or staff across multiple sites. So having some sense of where the data will come from is important.

      Fourth, when is the evaluation information needed? In other words, what is the timeframe for the evaluation? The timeframe will affect costs and design of research methods.

      Fifth, why is the evaluation being conducted? Is the evaluation being conducted at the request of the funding source? Is it being conducted to improve services? Is it being conducted to document the cost-benefit trade-off of the program? If future program funding decisions will depend on the results of the evaluation, then a lot more importance will be attached to it than if a new manager simply wants to know whether clients were satisfied with services. The more that is riding on an evaluation, the more attention will be given to the methodology and the more threatened staff can be, especially if they think that the purpose of the evaluation is to downsize and trim excess employees. In other words, there are many reasons an evaluation is being considered, and these reasons may have implications for the evaluation methodology and implementation.

      Once the issues described above have been considered, more complex questions and trade-offs will be needed in planning the evaluation. Specifically, six main issues guide and shape the design of any program evaluation effort and must be given thoughtful and deliberate consideration.

      1. Defining the goal of the program evaluation

      2. Understanding the level of information needed for the program evaluation

      3. Determining the methods and analysis that need to be used for the program evaluation

      4. Considering issues that might arise and strategies to keep the evaluation on course

      5. Developing results into a useful format for the program stakeholders

      6. Providing practical and useful feedback about the program strengths and weaknesses as well as providing information about next steps

      Defining the Goal of the Program Evaluation

One popular approach to lesson planning at all levels of education is backward design, also called the backward planning approach. The idea behind backward design is that you plan and teach toward the end goal: what you want the students to know or be able to do. This typically keeps the content focused and organized, thus promoting better understanding for the students.

*If you are not familiar with backward design or backward mapping, I encourage you to go to YouTube and look up videos on these concepts.

This assignment reinforces the learning of backward design and provides an opportunity to develop a planning template for future use.

Part 1: Backward Design Guide

Create a 350- to 525-word guide that explains the use of backward design in lesson planning.

Include the following in your guide:

  • Explain backward design using these terms:
    • Definition of the facets of understanding:
      • Explanation
      • Interpretation
      • Application
      • Perspective
      • Empathy
      • Self-knowledge
  • Describe the 3 stages of backward design:
    • Desired results
    • Assessment evidence
    • Learning plan

Discussion Questions

You must access the following article to answer the questions:

Boyle, D. K., & Thompson, S. A. (2020). CMSRNs’ continuing competence methods and perceived value of certification: A descriptive study. MEDSURG Nursing, 29(4), 229-254. https://chamberlainuniversity.idm.oclc.org/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=145282314&site=eds-live&scope=site

1. Locate the literature review section. Summarize, in your own words, one of the study/literature findings. Be sure to identify which study you are summarizing.

2. Discuss how the authors’ review of the literature (studies) supported the research purpose/problem. Share something that was interesting to you as you read through the literature review section.

3. Describe one strategy that you learned that would help you create a strong literature review/search for evidence. Share your thoughts on the importance of a thorough review of the literature.