Assignment 1: Ethical Dilemma

This module taught you that ethical practice during the evaluation process is vital to the reliability of the evaluation that is produced. As in most areas of the helping professions, ethical principles are not absolute; evaluators must therefore carefully consider the actions they take and the strategies they employ.

Tasks:

Using the Argosy University online library resources and the Internet, research and read about the assignment topic. In a minimum of 250 words, respond to the following:

  • Analyze Fiona’s case (ATTACHED) from your textbook. What are the ethical issues that you believe she is facing?
  • What are the “benefits and costs” from Fiona’s perspective?
  • What are the “benefits and costs” from the agency’s perspective?
  • If you were Fiona, what would be your decision (conduct the evaluation in-house or contract out)? What is your rationale for this decision?

Submission Details:

  • By Sunday, July 31, 2016, post your responses to this Discussion Area.

    Appendix A: Fiona’s Choice: An Ethical Dilemma for a Program Evaluator

    Fiona Barnes did not feel well as the deputy commissioner’s office door closed behind her. She walked back to her office wondering why bad news seems to come on Friday afternoons. Sitting at her desk, she went over the events of the past several days and the decision that lay ahead of her. This was clearly the most difficult situation that she had encountered since her promotion to the position of director of evaluation in the Department of Human Services.

    Fiona’s predicament had begun the day before, when the new commissioner, Fran Atkin, had called a meeting with Fiona and the deputy commissioner. The governor was in a difficult position: In his recent election campaign, he had made potentially conflicting campaign promises. He had promised to reduce taxes and had also promised to maintain existing health and social programs, while balancing the state budget.

    The week before, a loud and lengthy meeting of the commissioners in the state government had resulted in a course of action intended to resolve the issue of conflicting election promises. Fran Atkin had been persuaded by the governor that she should meet with the senior staff in her department, and after the meeting, a major evaluation of the department’s programs would be announced. The evaluation would provide the governor with some post-election breathing space. But the evaluation results were predetermined—they would be used to justify program cuts. In sum, a “compassionate” but substantial reduction in the department’s social programs would be made to ensure the department’s contribution to a balanced budget.

    As the new commissioner, Fran Atkin relied on her deputy commissioner, Elinor Ames. Elinor had been one of several deputies to continue on under the new administration and had been heavily committed to developing and implementing key programs in the department under the previous administration. Her success in doing that had been a principal reason why she had been promoted to deputy commissioner.

    On Wednesday, the day before the meeting with Fiona, Fran Atkin had met with Elinor Ames to explain the decision reached by the governor, downplaying the contentiousness of the discussion. Fran had acknowledged some discomfort with her position, but she believed her department now had a mandate. Proceeding with it was in the public’s interest.

    Elinor was upset with the governor’s decision. She had fought hard over the years to build the programs in question. Now she was being told to dismantle her legacy—programs she believed in that made up a considerable part of her budget and person-year allocations.

    In her meeting with Fiona on Friday afternoon, Elinor had filled Fiona in on the political rationale for the decision to cut human service programs. She also made clear what Fiona had suspected when they had met with the commissioner earlier that week—the outcomes of the evaluation were predetermined: They would show that key programs where substantial resources were tied up were not effective and would be used to justify cuts to the department’s programs.

    Fiona was upset with the commissioner’s intended use of her branch. Elinor, watching Fiona’s reactions closely, had expressed some regret over the situation. After some hesitation, she suggested that she and Fiona could work on the evaluation together, “to ensure that it meets our needs and is done according to our standards.” After pausing once more, Elinor added, “Of course, Fiona, if you do not feel that the branch has the capabilities needed to undertake this project, we can contract it out. I know some good people in this area.”

    Fiona was shown to the door and asked to think about it over the weekend.

    Fiona Barnes took pride in her growing reputation as a competent and serious director of a good evaluation shop. Her people did good work that was viewed as being honest, and they prided themselves on being able to handle any work that came their way. Elinor Ames had appointed Fiona to the job, and now this.

    Your Task

    Analyze this case and offer a resolution to Fiona’s dilemma. Should Fiona undertake the evaluation project? Should she agree to have the work contracted out? Why?

    In responding to this case, consider the issues on two levels: (1) look at the issues taking into account Fiona’s personal situation and the “benefits and costs” of the options available to her, and (2) look at the issues from an organizational standpoint, again weighing the “benefits and costs.” Ultimately, you will have to decide how to weigh the benefits and costs from both Fiona’s and the department’s standpoints.

Essay: Theories of Intelligence

In your essay this week, address the following:

  • Provide a summary of the different theories of intelligence proposed in the textbook (Spearman, Sternberg, Gardner, and Salovey/Mayer’s theories);
  • Describe which theory you believe best describes intelligence;
  • Explain why you feel this way;
  • Describe whether or not you think that this type of intelligence can be assessed with a simple test. Why or why not?

Be sure to use the information presented in Chapter 7 on Intelligence to substantiate your claims and to show understanding of the readings for the week.

Your essay should be at least 500 words in length and should be presented in APA format, including a title page, in-text citations, a running head, page numbers, double spacing, and a reference page. Your assignment should use terms and references directly from the chapter, and any outside research must be properly cited.

Assignment 1: Effects of Child Sexual Abuse

When Jamie’s mother, Arlene, attended the first-term Parent Teacher Association meeting, Carole, another parent who works as a case manager at a local community mental-health clinic, helped seat her. Over coffee after the PTA meeting, Arlene disclosed her concerns about the changes in Jamie. A little over a year ago, Jamie was kidnapped and sexually victimized over a two-day period, after which he managed to escape. Arlene explained how Jamie’s life has changed since that time: his grades have dropped substantially, he eats his meals alone at school, he has stopped interacting with other children, and he has become a loner with no close friends.

After describing Jamie’s current condition, Arlene asked Carole for advice; she was not sure whether this was just a phase. Carole promised to bring Arlene some literature on the developmental and psychological effects on a child who has been sexually victimized.

  • What would this literature indicate about the psychological and developmental effects on a child who has been sexually victimized?
  • What intervention strategies might Carole recommend to Arlene?

Discussion: The Flipped Classroom and the Common Core State Standards

Educators strive to create a classroom that fosters creativity and innovation. In this discussion, you will think about the creative and innovative instructional approach known as the flipped classroom while making direct connections to the Common Core State Standards (CCSS) and to teacher decision making based on student assessments. Reflecting on your Week Two discussion of the CCSS, as well as your discussions from EDU671: Fundamentals of Educational Research about the flipped classroom, you will complete the three parts of this discussion’s initial post.

There are three parts to this discussion, which are described below.

Part 1

  • Discuss how the flipped classroom idea can be used in conjunction with the CCSS (Math or English Language Arts).
  • Describe ways you could incorporate the technology used in a flipped classroom to support the Framework for 21st Century Learning as it relates to decision making based on student assessments.

Part 2

Now, think about the multimedia resources and assessments you have created or used in the past, and address the following:

  • Discuss whether a school or teacher should use a multimedia resource that is excellent at delivering both content and assessment but is not accessible.
  • Evaluate whether the resource must be excluded from a course if there are no reasonably equivalent accessible alternatives.

Part 3

  • Attach a link to your electronic portfolio.
  • In one paragraph, reflect on your experience with the redesign. Address the challenges you encountered during the Week Two Assignment and how you overcame them, including any difficulties in revising your work to meet the components of one ISTE-S standard and the CCSS (Math or English Language Arts), aligned with at least one core subject and 21st century theme, at least one learning and innovation skill, one information, media, and technology skill, and evidence of at least one life and career skill.

Guided Response: Respond to at least two peers. Your replies should include a question about the incorporation of the CCSS and the Framework for 21st Century Learning in your peers’ posts and should offer an additional resource that supports an alternative viewpoint. Though two replies are the basic expectation, for deeper engagement and learning you are encouraged to respond to any comments or questions others have given you, including those from the instructor. Responding to the replies given to you will further the conversation and provide additional opportunities for you to demonstrate your content expertise, critical thinking, and real-world experience with this topic.

Carefully review the Discussion Forum Grading Rubric for the criteria that will be used to evaluate this Discussion Thread.