Demystifying evaluation
September 1, 2015
Evaluation is one of those terms that can get a bad rap. Because it is so often used interchangeably with “accountability,” “measurement,” “assessment,” and “outcomes,” the basic reasons for undertaking evaluative activities can easily be obscured by a sense of potential blame or failure. Used well and used thoughtfully, evaluation activities are simply tools to help you better understand your own work and to arm you with information to make better, more-informed decisions.

We all recognize that in our outcomes-driven environment, we are increasingly required to show evidence of the impact of our work. Whether we are talking about indicators, metrics, data points, or logic models, we are gathering information that provides a credible picture that what we are doing, and how we are doing it, makes a difference in addressing some identified societal problem, issue, challenge, or, in some cases, opportunity.

At the simplest level, evaluation asks: “How do we know if what we are doing is working?” It’s important to drill down to understand the elements of that seemingly simple question.

Who is “we”?

Who is our audience for this particular evaluation? Who is interested in knowing this information, and for what purpose? Is it for your program managers, to determine whether to continue a program? Is it for your board, to report lessons learned after a multi-year initiative? Is it internal, to inform the development of a new initiative? Evaluation is specific to context and to audience.

What is the “what”?

We can’t know whether something is working if we have not fully defined the “what.” Take the time to fully articulate your assumptions about why you believe your program will make a difference, how the program should work, and what parameters constrain or support your initiative. Is it a one-time activity? A clear, concise, and specific description of the intervention is a necessary prerequisite to asking the most useful evaluation questions.

What does “working” mean?

What will this program or activity look like when it is working? It’s important to be clear about how you define “working.” It may also be important to know what it will look like if it’s working exceptionally well, or simply adequately. Is there a minimum threshold you would require to consider something working, such as a certain share of participants completing the program?

As you think about how to deepen your own evaluation efforts, I’d like to try to dispel a few of the more common evaluation myths. 

Myth #1: Evaluation is scientific and can only be done by experts. 

Evaluation is systematic and requires clarity and consistency, but it can be done by anyone and, in fact, is likely to incorporate elements – questions and assessments – that staff are already using in informal ways.

Here is a working definition provided by the Innovation Network: “Evaluation is the systematic collection of information about a program or activity that enables stakeholders to better understand the program, improve its effectiveness, and/or make decisions about future programming.”

Myth #2: Quantitative data is better than qualitative data.

Although quantitative data can certainly be an important part of evaluation, some types of information are better communicated through qualitative data. 

Quantitative and qualitative data are complementary and often work best when used together. Once you have decided what questions you are trying to answer and for whom, you will be able to determine the kind of information that you need.

Myth #3: The more information you track and collect, the better your evaluation will be. 

If you don’t know how you will use the information you gather, it is of no use to you, and collecting it is likely to drain your current resources.

Research is not the same as evaluation. Evaluation is driven by decision-making. It is specific to context and to stakeholders. 

Myth #4: Evaluation is too complicated / takes too long.

Evaluation does not have to be complicated or terribly time-consuming. In some situations, there may only be one or two questions that you are actively trying to answer. Once you have identified the who, what, and thresholds for success, evaluation can be built into the implementation of the program. 

Building and sustaining an organizational culture that integrates evaluation questions into planning can take some time, certainly, but think of how costly and time-consuming it would be to continue programs or activities that are not serving their intended purposes. 

Myth #5: Evaluation is threatening to my programs.

Conducted thoughtfully and with appropriate stakeholder involvement, evaluation supports program improvement, not elimination. Evaluation is less about proving the merits of a particular program than it is about improving the theory, design, and implementation of your interventions. 

What are the evaluation myths that exist in your organization? 

Questions? Comments? Email me at

References and Resources:

Please note the following links are for reference only. By providing them, the Rhode Island Foundation is not formally endorsing any particular site or organization. 

Getting Started with Evaluation, Third Sector New England
Evaluation 101 Worksheet, The Pell Institute
Evaluation 101, Evaluation Springboard
Kellogg Foundation Handbook, The W.K. Kellogg Foundation 
Common Myths and Misconceptions about Evaluation, Kautilya Society
Dispelling Evaluation Myths, Innovation Network

