A. The Situation and its Challenges
Most nonprofit agencies receive funding from multiple sources, including:
- Federal grants and/or contracts;
- State grants and/or contracts;
- Local government grants or contracts;
- Local foundation and United Way grants;
- National foundation grants.
Each of these sources has its own reporting requirements. With growing concern about accountability, government and foundation funders are now requiring more outcome evaluation reporting from grantees and contractors. Some funders simply ask that agencies report using data they already collect; however, that is becoming increasingly rare. More funders today identify specific outcomes they require. This is especially true for government funding in health and human services and for children and youth programs. In the case of many health and human services grants, evaluation criteria are tied to practice standards set by accreditation agencies. As a result of this growing emphasis on outcomes, many nonprofit agencies now face a plethora of different outcome evaluation criteria.
B. Challenges with Prescriptive Outcomes
Prescriptive outcomes are usually developed by the funding agency, often without any dialogue with other, related funders. This is especially true of state and local government grants and contracts. Grantmakers at the national level, by contrast, engage in a fair amount of dialogue: a local funder that is active in Grantmakers in Health, for example, can share evaluation activities and benefit from the national conversation.
In many cases, different local funding sources create outcome requirements in silos. Agencies receiving funding then face multiple, overlapping evaluation requirements: outcomes are mandated by separate funding agencies within the same Department of Health or Department of Children’s Services, leaving the agency to deal with fragmented, overlapping outcome and data requirements. In certain counties in a Southeastern state, the funder mandated specific outcomes and provided the database to be used for reporting. That database was a “closed system” that could not import or export data. Other behavioral health programs were likewise given closed-architecture databases.
One seasoned Executive Director said that their agency had staff coming in on Saturdays (unpaid) to handle the additional data entry required by multiple databases. Staff could not easily move data between the many different systems, so they re-entered it multiple times. The Executive Director was ‘on the verge,’ and tearfully said: “We just can’t take much more of this. There’s no more we can give.”
C. Outcomes and the Database
In an ideal world, nonprofit agencies would develop an outcome framework driven by their programs. Funder requirements would fit within that coherent framework. Agencies would collect outcome indicator data to measure progress on outcomes, and the database would serve as a program management tool, measuring and tracking progress first for the agency and its programs, and then for funders. When outcome requirements are instead mandated in silo fashion by different funders, it is much more difficult to create an integrated outcome framework and measure progress cohesively.
Even so, many agencies can create an integrated framework that is program driven rather than funder driven, even with multi-funder requirements. Developing that framework, however, often takes the help of a consultant or seasoned peer.
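To make the idea of a program-driven framework concrete, here is a minimal sketch in Python. All program, outcome, and funder names are hypothetical; the point is only the structure: each outcome maps to an indicator, and each indicator records which funders already require it, so funder reporting becomes a filtered view of one framework rather than a separate, parallel system.

```python
# Hypothetical program-driven outcome framework (all names illustrative).
# Each outcome carries its indicator plus the funders that already
# require it, so funder requirements "fill in" the framework rather
# than drive it.
framework = {
    "Youth Mentoring": {
        "Improved school attendance": {
            "indicator": "% of participants attending 90%+ of school days",
            "required_by": ["State DOE contract", "United Way grant"],
        },
        "Stronger adult connections": {
            "indicator": "% reporting a supportive adult relationship",
            "required_by": [],  # agency-chosen, not funder-mandated
        },
    },
}

def indicators_for_funder(framework, funder):
    """Return the (program, outcome, indicator) rows a given funder requires."""
    rows = []
    for program, outcomes in framework.items():
        for outcome, spec in outcomes.items():
            if funder in spec["required_by"]:
                rows.append((program, outcome, spec["indicator"]))
    return rows

# A funder report is just a view of the shared framework.
print(indicators_for_funder(framework, "United Way grant"))
```

The design choice this illustrates is the direction of dependency: the framework belongs to the programs, and each funder report is derived from it, not the other way around.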
D. Agency Responses
Agencies have responded to these outcome evaluation challenges in a number of ways. When resources are available, agencies can develop an integrated outcome framework and database linked to program goals. In many cases, however, agencies try to accommodate each funder, find themselves swamped by the minutiae of reporting requirements – and remain caught in a fragmented outcome and data system.
Developing an integrated outcome evaluation system requires that agencies:
- Understand the benefits of an integrated outcome and data system as a program management tool;
- Have an internal commitment to outcome evaluation at the program level;
- Have databases that can be adapted;
- Can access technical assistance to develop their evaluation system;
- Are able to upgrade computer systems and develop an effective database;
- Receive ongoing funding and technical assistance for evaluation work.
Larger agencies often have the capacity needed to develop their outcome framework and database into an integrated evaluation data system. However, smaller and mid-sized agencies often find themselves on the wrong side of the digital divide – falling increasingly behind in a data-driven evaluation environment.
E. Building the Outcome System
Here are some of the steps I have used with agencies to build, or retrofit, an evaluation system using already existing grant and contract requirements.
- Outline program goals and outcomes.
- Discuss and list all outcome reporting requirements (indicators and data measures).
- Discuss any accreditation measures or field standards to include.
- Create an outcome measurement framework that is simple and practical.
- “Fill in” any indicators and data measures already required by funding sources.
- Add any remaining indicators or data measures not already being reported.
- Review and revise the outcome measurement framework.
- Collect data for key areas from program managers.
- Analyze and consolidate data and make reports.
- Use reports for ongoing program improvement (PI).
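The last three steps, collecting indicator data from program managers, consolidating it, and reporting, can be sketched in a few lines of Python. The programs, indicators, and numbers below are hypothetical; the sketch assumes each submission reports a numerator and denominator for one indicator.

```python
# Minimal sketch of the collect / consolidate / report steps.
# All data is illustrative.
from collections import defaultdict

# Submissions from program managers: (program, indicator, numerator, denominator)
submissions = [
    ("Family Support", "families completing service plan", 42, 55),
    ("Family Support", "families completing service plan", 38, 50),  # second site
    ("After School", "youth improving reading scores", 61, 80),
]

# Consolidate numerators and denominators per program and indicator.
totals = defaultdict(lambda: [0, 0])
for program, indicator, num, den in submissions:
    totals[(program, indicator)][0] += num
    totals[(program, indicator)][1] += den

# Report for program improvement (PI) review.
for (program, indicator), (num, den) in sorted(totals.items()):
    print(f"{program}: {indicator}: {num}/{den} ({num / den:.0%})")
```

Even a simple consolidation like this keeps all funders' indicators in one place, which is the practical payoff of the framework built in the earlier steps.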