Program Evaluation: Gathering Good Data to Increase Positive Impacts
A community group wants to improve its food distribution program and seeks to understand how best to increase access to food that is high in nutritional value and meets the varied cultural standards of the community’s residents.
A multi-service nonprofit agency provides assistance to people across the life span. It asks: What can we do to increase the number of people who find our services helpful? Why do older people seem to use our services more than younger people?
A museum seeks to enrich the lives of children in a neighborhood whose residents do not know about the museum’s services.
A foundation has a mission to provide funding for housing and wants to know how to best direct the funding toward the types of housing that fit the needs and preferences of people in the area the foundation serves.
All of these organizations want to learn whether they accomplish their intended purpose. In the simplest form, they want to know: “If we do A, does it lead to B?”
Program evaluation is a systematic process for an organization to obtain information on its activities, its impacts, and the effectiveness of its work, so that it can improve its activities and describe its accomplishments.
Gathering solid evidence about effectiveness
A useful program evaluation typically builds upon four types of information:
- Client/participant characteristics – e.g., demographics
- Service data – type and amount of services, activities, treatments, etc.
- Documentation of results or outcomes – evidence of changes that occurred or needs met among the people served
- Perceptions about services – how people feel about their experience with the organization
The organization might have such information available, or it may need to develop the means to acquire that information. Ideally, the organization will have, or will create, a process for obtaining information in a complete and accurate form. Having information on clients and the services they receive requires a consistent method of tabulating service use, along with a database that stores that information for retrieval. Understanding outcomes and client satisfaction requires collection of authentic information from and about the people served by a program.
That’s where program evaluation enters the picture. Its techniques for gathering information enable organizations that want solid evidence about their effectiveness to collect it in reliable ways.
Logic models: Identifying the path to impact
Evaluation can, and should, stretch the limits of our thinking. Many years ago, after a couple of initial, and seemingly productive, meetings with a program that wanted to initiate an evaluation, the program’s staff canceled an upcoming meeting. They did not return my phone calls (this was before the days of email!). After about a month, their director called and said, “Paul, we’re sorry we didn’t get back to you. We had an issue with ourselves, not with you. When you asked us to outline how we work and how we produce our impact, those questions raised existential concerns for us. We realized that we really did not have a solid process for working together; we didn’t have a system to collaborate as a team to know what to do, and when, in order to produce the best outcomes for our clients.”
Encouraging that organization to identify its program theory of change (e.g., a logic model) led to a pivotal moment – a juncture where the staff suddenly had new insight and, in a relatively short time, developed a new way to work together. Over the years, many organizations have told Wilder Research staff that the most useful part of the evaluation process is the development of a logic model. It clarifies what a program expects to accomplish. It offers a guide that enables staff to plan and make decisions that improve the accessibility and effectiveness of their services. It also helps in communicating about impact with many different kinds of stakeholders.
Showing progress and where we need to do better
Beyond helping us to improve our work, evaluation provides considerable additional value. Staff and volunteers can feel a sense of pride and increased motivation when they see the number of people served by their organization. They can feel empowered by knowing how many people benefited as expected from the services they received, and how many did not. Organizations can share such information with others in their field to compare notes and jointly discuss how to improve effectiveness. They can also share the information with funders, to document what the funders’ resources supported and to point toward areas of additional need.
The single biggest mistake I’ve witnessed regarding evaluation is when people look at evaluation information as if it represents the score at the end of a sporting event – assuming that it shows whether we “won” or “lost.” Viewing evaluation this way makes people fearful, and it suppresses innovation.
One of the most successful fundraising institutions in the United States – St. Jude Children’s Research Hospital – has often published statistics on its success rates for treating different types of cancer. Over several decades, some success rates began extremely low – even close to zero. Did that mean “give up”? Did that mean “hide the facts”? Absolutely not. In fact, the opposite. The numbers served to rally people around a cause, and they offered a baseline against which to judge new approaches. Acknowledging the difficulty of achieving success increased determination to find cures; it strengthened appeals for funding and other resources to improve the health of children.
So too with other types of issues that nonprofit, community-oriented organizations face. Some social issues might seem intractable. Adapting organizational activities to fit changing needs and new populations might seem bewildering. That does not mean “game over, we lost.” It means, whatever our level of effectiveness – high, medium, or low – let’s build on that to strengthen our work and increase our impact.
Evaluation comprises part of an ongoing cycle of using information to design and deliver services, gathering more information to see how well the services achieve intended outcomes, and then using that new information to make revisions and adjustments for improvement. Evaluation creates a platform for evidence-based decision-making: We can never have total certainty that what we do will be effective, but with good data, we increase the probability that we will create positive impacts.
The development of programs and policies that benefit people involves a constant search. We seriously err if we think we know it all, or if we remain rigid in our thinking and our approaches to helping people, solving problems, and addressing issues. Through program evaluation, we ask questions, seek better paths, and make progress with humility to create a better world.
The Manager's Guide to Program Evaluation, 2nd Edition: Planning, Contracting, & Managing for Useful Results is now available from Turner Publishing. This revised edition is an invaluable resource with brand-new real-world examples taken from recent evaluation research projects conducted by Wilder Research staff.
This post originally appeared on The Executive Summary.