Categories of Program Evaluation
Program evaluators conduct research on social programs (or any intervention) to assess their impact and identify how they can be improved. Evaluation methods vary and must be tailored to the specific needs and resources of the program's stakeholders. There are, however, some general principles that can guide evaluation questions and methods, which I will describe here. Most often, evaluations are conducted for "program improvement, accountability, or knowledge generation…" (Rossi et al., p. 11).
Evaluation activities can serve two broad purposes: “formative” and “summative” (Alkin & Vo, p. 12).
Formative Evaluation
Evaluations are formative when they are conducted early in program development and implementation (Alkin & Vo, p. 12; Donaldson, p. 57). Programs in the initial stages of development and implementation will run into problems that require adjustment (Royse et al., p. 125). The purpose of formative evaluation is to conduct research that identifies what is working and what is not so that program decision makers can improve the program (Rossi et al., p. 11).
“Formative evaluation produces information that is fed back during the development of a curriculum to help improve it. It serves the needs of developers” (Weiss, p. 31).
Summative Evaluation
“The findings of summative evaluations are usually intended for decision makers with major roles in program oversight, for example, the funding agency, governing board, legislative committee, political decision [p. 11] makers, or organizational leaders. Such evaluations may influence significant decisions about the continuation of the program, allocation of resources, restructuring, or legislative action [p. 12]” (Rossi et al., p. 12).
“Summative evaluation is done after the curriculum is finished. It provides information about the effectiveness of the curriculum to school decision makers who are considering adopting it in their schools” (Weiss, p. 31).
“Summative evaluation is meant for decisions about whether to continue or end a program, extend it to other locations or cut it back” (Weiss, p. 31).
“In practice, evaluation is most often called on to help with decisions about [p. 31] improving programs, projects, and components. Go/no-go, live-or-die decisions about programs are relatively rare (Kaufman, 1976) [p. 32]” (Weiss, p. 31-32).
Program evaluators spend much of their time conducting formative evaluation (Alkin & Vo, p. 13).
Many evaluations have both a formative and a summative component, what Marvin Alkin and Anne Vo (2018) call “summary formative evaluation” (p. 13): evaluation data are used to modify the program, but the program's results are eventually summarized.
Process Evaluation
Programs can find process evaluations useful at any stage of program or project implementation (Royse et al., p. 129).
Carol Weiss (1998) writes that some process evaluation activities may monitor indicators such as “participant enrollment, activities offered, actions taken, staff practices, and client actions” (p. 32). Process evaluations can be used to describe the program and its activities, monitor program activities and outcomes, and determine the quality of program implementation (Royse et al., p. 130).
“To be effective in bringing about the desired improvements in social conditions, a program needs more than a good design. The program staff also must implement its design; that is, it must actually carry out its intended functions in the intended way…The result can easily be substantial discrepancies between the program as intended and the program as actually implemented” (Rossi et al., p. 91).
“Process evaluation examines what a program is, the activities undertaken, who receives services or other benefits, and the consistency with which it is implemented in terms of its design and across sites. Often it is undertaken for formative or program improvement purposes: It can directly point to deficiencies in the ongoing operations of a program that may be remedied by its administrators” (Rossi et al., p. 92).
Outcome or Impact Evaluation
Impact evaluations may be categorized as “efficacy evaluation and effectiveness evaluation” (Rossi et al., p. 190).
“Unlike efficacy evaluations where the evaluators typically have much more control, determine the evaluation questions to pursue, and select the evaluation methods and design, effectiveness evaluations are highly dynamic and unpredictable” (Donaldson, Theory, p. 46).
“Assessments of effectiveness, in contrast, are oriented toward estimating the intervention effects for a fully deployed program implemented at scale and delivered as routine practice to typical members of the target population… Their purpose is to determine if the program has beneficial effects when implemented under real-world conditions of workaday practice” (Rossi et al., p. 190)
“Outcome evaluations put the emphasis on what happens to clients after their participation in the program as a result of the intervention” (Weiss, p. 32).
Performance Monitoring
“Process monitoring is the systematic, periodic documentation of key aspects of program performance that assesses whether [p. 92] the program is operating as intended or according to some appropriate standard. By parallel construction, outcome monitoring is the periodic measurement of the outcomes of interest to the program on the program participants [p. 93]” (Rossi et al., p. 92-93)
These broad categories do not capture the many methods and schools of thought that can be used to conduct program evaluations. Some evaluation practitioners prefer to focus on ensuring that the results of the evaluation are useful to key decision makers (Utilization-Focused Evaluation); some practitioners use …
References
Marvin Alkin & Anne Vo (2018). Evaluation Essentials: From A to Z (2nd ed.). The Guilford Press.
Darleen Russ-Eft & Hallie Preskill (2009). Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change (2nd ed.). Basic Books.
David Royse, Bruce Thyer, & Deborah Padgett (2016). Program Evaluation: An Introduction to an Evidence-Based Approach (6th ed.). Cengage Learning.