Rhode Island Foundation Blog

We are proud to be part of the Rhode Island community! Follow our blog for the latest news and occasional commentary about what’s happening at the Foundation and around our great state.


Evaluation vs research: understanding the why and how
October 21, 2015

Last month, we shared some common myths about evaluation as a way of setting the stage for more critical thinking. Today, we think about how to approach the overwhelming volume of available data to get to what will be most meaningful for your purposes.

We’ve probably all been asked at one time or another to fill out a survey or questionnaire that seemed endless. Researchers, who frequently analyze a large body of knowledge to draw new conclusions about a field, often require a wide swath of data in order to make generalizations about populations or conditions. (For an example, think of all the generalizations made about “millennials.” These often come from market research, which is designed to gather information about individuals or organizations.)

Evaluation, on the other hand, is specific to a context, issue, or question. Therefore, when designing your evaluation, it is important to understand why you are gathering a particular piece of information and how you will use it.

Evaluation is making a context-specific judgment

A commonly accepted principle of evaluation states, “The purpose of evaluation is to improve, not prove.” This distinction highlights the implicit judgment involved in evaluation. Evaluation might ask questions such as: How effective is this program at accomplishing our goals? How well did we implement this plan? How adequate is our staffing for this initiative? Research questions, by contrast, are more likely to be: Does smoking cause cancer? Are children spending too much time playing video games?

Benchmarks are judgmental, too

In designing an evaluation, we always have some objective in mind. We want to increase the usage of a public park, or help people launch their businesses, or foster greater confidence in children. The fact that we have a goal in mind suggests that we will be comparing what happens to what we wanted or expected to happen. The criteria against which we compare our end results are benchmarks, and yes, the word judgment appears again here.

Benchmarks (or criteria, or standards) are signposts along the way that present opportunities to assess how well the data lines up with predetermined conditions.

Failures are only failures in context-specific ways

It is important to note here that when things don’t go as we had expected, we are tempted to call the effort a “failure.” In evaluation terms, though, the activity is most often not a comprehensive failure; rather, it failed to meet some aspect of the program objectives, for which we may or may not have framed the right question.

We can think here of Thomas Edison, who is credited with assuring an associate who had been discouraged over a failed experiment by saying: “We have not failed. We have learned something. We have learned with a certainty that this method will not work, and that we will need to try another way.”

That your failure was context-specific may be of little comfort to you, but at least it does not shut down the possibility of trying a different approach.

Evaluation identifies an audience and is stakeholder-focused

An evaluation is usually undertaken to answer a specific set of questions at a specific point in time. Where research is focused inward, on the researchers themselves, so that they can synthesize, generalize, and draw conclusions, evaluation is focused outward: it reports specific findings to a specific audience.

Guiding questions

We often think we want to know everything there is to know about a program, activity, or intervention. In thinking about how to use your available resources for evaluation, it may be helpful to ask yourself: How do I intend to use this information? What decisions will I be able to make as a result of what I learn? Answering these questions will not only decrease the volume of information you have to collect; it will also save you time and resources, and leave you with what is of most direct use to you.

Have you collected information that you did not know how to use, or did not need?
Do you have stories of “failures” that have led to spectacular success?

Email me at:


References and Resources:

Evaluation and Applied Research Methods: http://cgu.edu/pages/665.asp

Four Differences between Research and Program Evaluation: http://managementhelp.org/blogs/nonprofit-capacity-building/2012/01/08/four-differences-between-research-and-program-evaluation/

Is There a Difference between Evaluation and Research? http://tde.sagepub.com/content/31/2/150.full
