By Marty Zimmerman

Managing Up: Encouraging Program Evaluation When It’s Not “Your Job”

Updated: May 30

[Comic about good impact reporting]

It makes sense: a potential funder you're asking to invest in your mission wants to know not only what your organization does, how it does it, and the theory or logic behind why your programs should successfully help people (or animals, or the environment); they also want quantifiable proof that your organization actually does help!

That’s where evaluation and outcomes measurement come in.

But what if the types of outcomes or measurable results funders are asking for aren't quite the information your organization currently collects? Maybe your organization has lots of data on outputs but not actual outcomes. How, as the development person, do you help your organization create and implement evaluation systems that support strong grant proposals (and let you report results to funders once a grant is secured) with compelling quantifiable evidence that what you're doing is working and your mission is being achieved? Collecting information on real results is almost always easier said than done, especially at smaller organizations, where there is often no staff member dedicated to evaluation or with extensive experience in it.

As a development professional, you may be in a unique position to understand the types of results-based data foundations are interested in. Better data collection can also help your organization explain the good it does more generally (e.g., in marketing materials, or in wooing individual donors, volunteers, and other supporters). But talking with colleagues, especially program staff, about the need for better or different data collection can be intimidating; you know that in broaching the topic, co-workers may just hear "I need you to do more paperwork," which is unlikely to be met with much enthusiasm.

If you feel you have a good understanding of both the "what" and "how" of measuring the outcomes information you need – maybe it's implementing a standardized evaluation tool, or administering pre/post surveys to program participants (or even just tweaking the information already collected on existing surveys) – it may fall on you to help create these systems and make your own life easier in the long run. There are a couple of things you can do to make the change sound less scary or onerous to your coworkers. First, before presenting anything to colleagues or program staff, lay as much of the groundwork as you can on what the new data collection procedures would look like, and simplify them as much as possible. Next, present the material in a way that builds genuine excitement for what you're trying to accomplish BEFORE recommending the new procedures; after all, those who are passionate about this work stand to feel a lot of gratification in seeing quantifiable proof of the good they're doing. These steps help create the buy-in needed to successfully institute new data collection procedures. Of course, touting the benefit that good outcomes data = stronger grant proposals = more money can also be a strong selling point!

If you don't feel you have the expertise to help your organization develop stronger, more grants-aligned outcomes measurement and evaluation systems – especially for programs or organizations whose mission or goals are harder to quantify – then your task may be to convince the powers that be to contract with an external evaluator to develop and implement these systems. Again, framing it as an investment using the formula above (good outcomes data = stronger grant proposals = more money) will help, though it can still be a tough sell if there's simply no room in the budget to hire an evaluator.

Fortunately, some funders recognize that many organizations find themselves in this Catch-22 and offer technical assistance funding that can cover the expenses of developing and instituting evaluation systems, including hiring an expert. Many of the larger Colorado foundations, such as The Denver Foundation, Rose Community Foundation, the Colorado Health Foundation, and the Daniels Fund, offer grant opportunities for technical assistance, including evaluation activities. Some foundations provide these awards as stand-alone grants and some as a built-in component of a larger programmatic ask. Each foundation also has different parameters for how, and to what kinds of organizations, it will grant this funding, so, as always, you'll want to do your research, talk to a program officer, and cultivate a relationship before applying for this type of grant.

Have you ever had to write a grant proposal or report to a funder and didn’t have quite the right outcomes data available? How did you work with your colleagues to get the necessary data? We’d love to hear from you!
