Program Evaluation Studies
TK Logan and David Royse
A variety of programs have been developed to address social problems such as drug addiction, homelessness, child abuse, domestic violence, illiteracy, and poverty. The goals of these programs may include directly addressing the origin of the problem or moderating the effects of these problems on individuals, families, and communities. Sometimes programs are developed to prevent something from happening, such as drug use, sexual assault, or crime. These kinds of problems, and the programs to help people affected by them, are often what attracts many social workers to the profession; we want to be part of the mechanism through which society provides assistance to those most in need. Despite low wages, bureaucratic red tape, and routinely uncooperative clients, we tirelessly provide services that are invaluable but that may, at various times, be or become insufficient or inappropriate. But without conducting evaluation, we do not know whether our programs are helping or hurting, that is, whether they only postpone the hunt for real solutions or truly construct new futures for our clients. This chapter provides an overview of program evaluation in general and outlines the primary considerations in designing program evaluations.
Evaluation can be done informally or formally. As consumers, we are constantly informally evaluating products, services, and information. For example, we may choose not to return to a store or an agency again if we did not find the experience pleasant. Similarly, we may mentally take note of unsolicited comments or anecdotes from clients and draw conclusions about a program. Anecdotal and informal approaches such as these generally are not regarded as carrying scientific credibility. One reason is that decision biases play a role in our “informal” evaluation. Specifically, vivid memories or strongly negative or positive anecdotes will be overrepresented in our summaries of how things are evaluated. This is why objective data are necessary to truly understand what is or is not working.
By contrast, formal evaluations systematically examine data from and about programs and their outcomes so that better decisions can be made about the interventions designed to address the related social problem. Thus, program evaluation involves the use of social research methodologies to appraise and improve the ways in which human services, policies, and programs are conducted. Formal evaluation, by its very nature, is applied research.
Formal program evaluations attempt to answer the following general question: Does the program work? Program evaluation may also address questions such as the following: Do our clients get better? How does our success rate compare to those of other programs or agencies? Can the same level of success be obtained through less expensive means?
What is the experience of the typical client? Should this program be terminated and its funds applied elsewhere?
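Several of these questions are quantitative at heart. Comparing our success rate with another program's, for instance, is a simple two-proportion comparison once outcome counts are in hand. The sketch below illustrates the idea with entirely hypothetical numbers (64 of 90 clients improving in our program versus 52 of 95 in a comparison program); the counts and the function are illustrative only and are not drawn from any program discussed in this chapter.

```python
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Compare two programs' success rates with a two-proportion z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion and standard error under the null hypothesis of no difference
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: 64 of 90 clients improved here vs. 52 of 95 elsewhere.
p_a, p_b, z, p = two_proportion_ztest(64, 90, 52, 95)
print(f"our rate={p_a:.2f}, comparison rate={p_b:.2f}, z={z:.2f}, p={p:.3f}")
```

With these made-up figures the difference would be statistically significant at conventional levels, but the point of the sketch is only that such questions become answerable once outcome data are collected systematically.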
Ideally, a thorough program evaluation would address more complex questions in three main areas: (1) Does the program produce the intended outcomes and avoid unintended negative outcomes? (2) For whom does the program work best and under what conditions? and (3) How well was a program model developed in one setting adapted to another setting?
Evaluation has taken an especially prominent role in practice today because of the focus on evidence-based practice in social programs. Social work, as a profession, has been asked to adopt evidence-based practice as an ethical obligation (Kessler, Gira, & Poertner, 2005). Evidence-based practice is defined in various ways, but most definitions include using program evaluation data to help determine best practices in whatever area of social programming is being considered. In other words, evidence-based practice includes using objective indicators of success in addition to practice-based or more subjective indicators of success.
Formal program evaluations can be found on just about every topic. For instance, Fraser, Nelson, and Rivard (1997) examined the effectiveness of family preservation services; Kirby, Korpi, Adivi, and Weissman (1997) evaluated an AIDS and pregnancy prevention middle school program. Morrow-Howell, Becker-Kemppainen, and Judy (1998) evaluated an intervention designed to reduce the risk of suicide in elderly adult clients of a crisis hotline. Richter, Snider, and Gorey (1997) used a quasi-experimental design to study the effects of a group work intervention on female survivors of childhood sexual abuse. Leukefeld and colleagues (1998) examined the effects of an HIV prevention intervention with injecting drug and crack users. Logan and colleagues (2004) examined the effects of a drug court intervention as well as the costs of drug court compared with the economic benefits of the drug court program.
Basic Evaluation Considerations
Before beginning a program evaluation, several issues must be considered. These decisions are critical in determining the evaluation methodology and goals. Although you may not have complete answers to these questions when beginning to plan an evaluation, they help in developing the plan and must be answered before an evaluation can be carried out. We can sum up these considerations with the following questions: who, what, where, when, and why.
First, who will do the evaluation? This seems like a simple question at first glance. However, this particular consideration has major implications for the evaluation results. Program evaluators can be categorized as either internal or external. An internal evaluator is a program staff member or regular agency employee, whereas an external evaluator is a professional, on contract, hired for the specific purpose of evaluation. There are advantages and disadvantages to using either type of evaluator. For example, the internal evaluator probably will be very familiar with the staff and the program; this may save a lot of planning time. The disadvantage is that evaluations completed by an internal evaluator may be considered less valid by outside agencies, including the funding source. The external evaluator generally is thought to be less biased in terms of evaluation outcomes because he or she has no personal investment in the program. One disadvantage is that an external evaluator frequently is viewed as an “outsider” by the staff within an agency. This may increase the amount of time necessary to conduct the evaluation or cause problems in the overall evaluation if agency staff are reluctant to cooperate.
Second, what resources are available to conduct the evaluation? Hiring an outside evaluator can be expensive, while having a staff person conduct the evaluation may be less expensive. So, in a sense, you may be trading credibility for lower cost. In fact, each methodological decision involves a trade-off among credibility, level of information, and resources (including time and money). Also, the amount and level of information, as well as the research design, will be determined, to some extent, by what resources are available. A comprehensive and rigorous evaluation takes significant resources.
Third, where will the information come from? If an evaluation can be done using existing data, the cost will be lower than if data must be collected from numerous people, such as clients and/or staff across multiple sites. So having some sense of where the data will come from is important.
Fourth, when is the evaluation information needed? In other words, what is the timeframe for the evaluation? The timeframe will affect the costs and the design of the research methods.
Fifth, why is the evaluation being conducted? Is the evaluation being conducted at the request of the funding source? Is it being conducted to improve services? Is it being conducted to document the cost-benefit trade-off of the program? If future program funding decisions will depend on the results of the evaluation, then a lot more importance will be attached to it than if a new manager simply wants to know whether clients were satisfied with services. The more that is riding on an evaluation, the more attention will be given to the methodology and the more threatened staff can be, especially if they think that the purpose of the evaluation is to downsize and trim excess employees. In other words, there are many reasons an evaluation may be conducted, and these reasons have implications for the evaluation methodology and implementation.
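When the purpose is to document the cost-benefit trade-off, the core arithmetic is comparing what a program costs per client with the monetized benefits it produces, as in the drug court evaluation cited earlier. The minimal sketch below uses entirely hypothetical dollar figures and benefit categories; none of the numbers come from any program described in this chapter.

```python
# Hypothetical per-client program cost (staffing, facilities, materials)
program_cost_per_client = 4_500.00

# Hypothetical monetized benefit estimates per client, by category
benefits_per_client = {
    "avoided_incarceration": 6_200.00,
    "increased_earnings": 1_800.00,
    "reduced_health_care_use": 900.00,
}

total_benefit = sum(benefits_per_client.values())
net_benefit = total_benefit - program_cost_per_client
bc_ratio = total_benefit / program_cost_per_client  # benefit returned per dollar spent

print(f"Total benefit per client: ${total_benefit:,.2f}")
print(f"Net benefit per client:   ${net_benefit:,.2f}")
print(f"Benefit-cost ratio:       {bc_ratio:.2f}")
```

The hard part of a real cost-benefit analysis is not this arithmetic but defensibly monetizing each benefit category, which is itself a major methodological decision.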
Once the issues described above have been considered, more complex questions and trade-offs must be weighed in planning the evaluation. Specifically, six main issues guide and shape the design of any program evaluation effort and must be given thoughtful and deliberate consideration:
1. Defining the goal of the program evaluation
2. Understanding the level of information needed for the program evaluation
3. Determining the methods and analysis that need to be used for the program evaluation
4. Considering issues that might arise and strategies to keep the evaluation on course
5. Developing results into a useful format for the program stakeholders
6. Providing practical and useful feedback about the program strengths and weaknesses as well as providing information about next steps
Defining the Goal of the Program Evaluation