Anyone with their eyes open today can’t help but wonder if those “gloom and doomers” might at least be partly right — should we be worried for our organizations’ survival?
And if so, with many arts organizations closing their doors, what can we do to keep ours open?
For decades now, arts programs have gotten funded based on their case studies (we all have terrific stories, don't we?) and assertions as to the benefits of the arts. And why not? Those benefits are real, and incredibly valuable. But case studies and avowals aren't exactly tangible, and they just aren't cutting it anymore.
TIME FOR A CHANGE?
Let's face it: human beings do not like to change. But I'm not willing to bet that I'll be okay if I don't change, are you?
Well then, how can we change? What direction should we head in?
LET’S PROVE IT!
While “Prove It” has become funders’ new mantra, most arts groups are simply unprepared. How the heck do you prove those more intangible qualities that we all consider to be most important? How do you track and assess the light in a student’s eyes when he feels confident, empowered, and successful?
There are, of course, a number of evaluation software tools on the market today, but they’re incredibly tedious to learn and operate. Not only that, but they also typically evaluate generic qualities that would dumb down a good arts program, and we artists do tend to resist being dumbed down.
So here's what we've been doing at Merge Education for the past few years: we've developed our own software. It's available commercially now, but this post is about sharing some of the points we came up with as a starting place for designing your own evaluation, an evaluation that will help you survive.
How did we develop these points?
Because like most artists we really dislike superficiality, we worked with other educators, artists, mental health professionals, and evaluation scientists to drill down on the points that are actually the building blocks a person needs to develop in order to become more resilient and better able to learn.
As you’ll read in a moment, these points are essential to good human development, so in evaluating and assessing these, you’re not dumbing anything down.
DESIGNING YOUR EVALUATION TOOL
Where should you start? To clarify your thinking, break your questions down into three distinct areas.
First, look at the student's relationship to herself — e.g., what is her level of concentration and focus? Her motivation? Her consistency of effort?
Then, consider her relationship to the teacher. Does she, for example, listen well? Communicate her ideas?
Finally, take a look at how she’s developing her skills. Is she willing to try new steps? Identify correlations/relationships?
Set up a list of these and other points (we use a total of 15 for this scale), and establish a scoring mechanism for each of them on a scale of 1 to 5. For best results, provide anchor points (i.e., what each score should mean) so teachers’ answers won’t be random.
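If it helps to see the structure concretely, here is a minimal sketch of such a scale in Python. The point names, anchor wordings, and the `area_averages` helper are all illustrative assumptions, not the actual 15-point Merge Education scale; substitute your own program's points.

```python
# Illustrative three-area observation scale. These point names are
# examples drawn from the post, not a complete or official list.
SCALE = {
    "relationship to self": ["concentration/focus", "motivation",
                             "consistency of effort"],
    "relationship to teacher": ["listens well", "communicates ideas"],
    "skill development": ["tries new steps", "identifies relationships"],
}

# Anchor points for one example item: what a 1, 3, or 5 should mean,
# so teachers' scores aren't random. (Hypothetical wording.)
ANCHORS = {
    "concentration/focus": {
        1: "rarely focused; needs constant redirection",
        3: "focused for portions of the class",
        5: "sustains focus through the full class",
    },
}

def area_averages(scores):
    """Average a student's 1-to-5 scores within each of the three areas."""
    result = {}
    for area, points in SCALE.items():
        rated = [scores[p] for p in points if p in scores]
        if rated:
            result[area] = sum(rated) / len(rated)
    return result

# One teacher's observation of one student (example scores):
scores = {"concentration/focus": 4, "motivation": 3,
          "consistency of effort": 5, "listens well": 4,
          "communicates ideas": 2, "tries new steps": 5,
          "identifies relationships": 3}
print(area_averages(scores))
# {'relationship to self': 4.0, 'relationship to teacher': 3.0,
#  'skill development': 4.0}
```

Even a simple per-area average like this turns a stack of observations into numbers a funder can read at a glance.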
Once you’ve developed specific points to measure, you’ll have a good working observation tool. If you add a goal setting process to this you’ll be able to track each student more specifically, and your process will get even more effective and interesting.
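The goal-setting piece can be sketched the same way: record a target score per point for each student, then compare the latest observation against those targets. The function and field names below are hypothetical, just one way to structure it.

```python
# Hypothetical goal-tracking layer on top of a 1-to-5 observation scale.
def progress_toward_goals(goals, history):
    """For each point with a goal, report the latest score vs. the target.

    goals:   dict mapping point name -> target score (1-5)
    history: list of observation sessions, oldest first; each session is
             a dict mapping point name -> score (1-5)
    """
    report = {}
    latest_session = history[-1]
    for point, target in goals.items():
        latest = latest_session.get(point)
        if latest is not None:
            report[point] = {"goal": target, "latest": latest,
                             "met": latest >= target}
    return report

# Example: one student, two observation sessions (made-up scores).
goals = {"concentration/focus": 4, "communicates ideas": 3}
history = [
    {"concentration/focus": 2, "communicates ideas": 2},
    {"concentration/focus": 4, "communicates ideas": 2},
]
print(progress_toward_goals(goals, history))
```

Keeping the whole history, rather than just the latest score, is what lets you show growth over time — the kind of student-by-student trend line that makes a report persuasive.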
Although our software integrates additional scales plus program management, if you do a thoughtful job with this one scale — thinking about and expanding on these points — you’ll have the data that every funder wants to see.
More importantly, you’ll have verifiable, specific, meaningful data, and when you have that kind of data not only do you have improved program oversight, you have the proof — and when you have the proof, you have survival.
It may get really tough in the next few years, but if we keep our eyes open, help each other, and dig in to prove it, we'll stand a much better chance of making it.