Editor’s Note: Following Public Art Network Council Member Sioux Trujillo’s post, project partner Kaity Nicastri describes the benefit of using logic models in evaluation.
Evaluation. That’s a hefty word. Most people cringe when they think of evaluation, but it’s really not as scary as it sounds.
With the arts in mind, evaluation can take many forms: programmatic review, project-based assessment, user/patron feedback, or monitoring sales and attendance. All of them share a unifying theme: understanding the impact of your work.
I started working with a community public art program over two years ago as a Master’s-level intern from the University of Michigan’s Community Based Initiative. With a concentration in policy and evaluation, I fit the nerdier side of social work. I’m not your average caseworker.
In my new role, I was faced with a program that had surveys, but no real evaluation and no understanding of the results of the surveys. Simultaneously, taking a technical evaluation course, I started with a logic model. This process is truly the crux of all good evaluation. If you don’t understand what you are trying to accomplish, evaluation will mean very little.
Through the logic model, I learned invaluable information about the structure of the program and goals of the directors, funders, and participants for various investments in the program. The logic model process created a useful document that informed my evaluation knowledge and development.
Once the logic model was established, I could determine the goals of the evaluation and design a preliminary evaluation. Afterwards, through various consultations, I decided on a process that would gather the richest, most valuable information possible. Then the hard part began: data collection.
You would think that with all the debate around community work, and whether it really makes neighborhood life better, more people would jump at the chance to give us their opinion. But evaluators face real challenges in collecting data, especially in urban settings.
If you send surveys home, find a way for participants to return them free of charge. If you have people who are not able to attend meetings or events but are on your email lists, offer the survey online. Online survey tools are many and varied in their capabilities. Some common/popular ones are: Survey Monkey, Zoomerang, Constant Contact, Form Site, Kwik Surveys…the list goes on.
Once you jump the final hurdles and have enough responses, you can begin analysis of data and use the analysis to report results.
Depending on the survey design, you might need qualitative analysis or quantitative analysis. Qualitative analysis means your results are open-ended responses: people give their own opinion on a question or statement, which offers valuable feedback on programs or projects. The trade-off is that you have to spend more time finding commonalities among the answers.
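To make "finding commonalities" concrete, here is a minimal sketch of tallying themes across open-ended responses. The responses and theme keywords are invented for illustration; in practice, qualitative coding is done by a human reader, and a script like this only helps count codes you have already assigned.

```python
from collections import Counter

# Made-up open-ended survey responses for illustration.
responses = [
    "The mural made the park feel safer and more welcoming",
    "More events like the mural unveiling, please",
    "I felt safer walking past the new artwork at night",
]

# Hypothetical theme codes with keywords that signal each theme.
themes = {
    "safety": ["safer", "safe"],
    "engagement": ["events", "welcoming"],
}

# Count how many responses touch on each theme.
counts = Counter()
for text in responses:
    lower = text.lower()
    for theme, keywords in themes.items():
        if any(word in lower for word in keywords):
            counts[theme] += 1

print(counts.most_common())  # themes ranked by how often they appear
```

Even a rough tally like this makes it easier to report which themes came up most often, rather than quoting responses one by one.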
The other type, quantitative analysis, is numerically based. This doesn’t always mean that the answers are numbers, but rather that all answers are standardized. This could be a multiple-choice answer, true or false, a scale (strongly agree, agree, neutral, disagree, strongly disagree), or a similar way to standardize answers. Demographic data falls into this category.
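Because the answers are standardized, quantitative results reduce to counting and percentages. A minimal sketch, using an invented batch of agreement-scale responses:

```python
from collections import Counter

# The five-point agreement scale mentioned above.
SCALE = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

# Made-up standardized answers for illustration.
answers = ["agree", "agree", "strongly agree", "neutral", "agree", "disagree"]

tally = Counter(answers)
total = len(answers)

# Report each option's count and share of responses.
for option in SCALE:
    count = tally.get(option, 0)
    print(f"{option:>17}: {count} ({count / total:.0%})")
```

The same pattern works for multiple-choice, true/false, or demographic questions: tally each standardized option and report counts alongside percentages.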
You can also use mixed analysis with qualitative and quantitative questions. This provides comments and opinions but also basic data like demographics, participation, and similar standard information.
Evaluation helps programs. Any results should be taken to heart and used to improve services.
Most importantly, you should be learning from evaluation. Use what you find to create new practices and processes that solve past problems and improve future results.
My Environmental Education Evaluation Resource Assistant has a good process cycle chart for evaluation.
What type(s) of analysis have worked best for your programs? Share in the comments below.