Written By: Andrea N. Young, Member Services, Evaluations Specialist
A few years ago, I set off for Africa to spend 6 months volunteering in a couple of the poorest countries in the world. My first stop was Lomé, the capital city of Togo (a tiny country next to Ghana in West Africa). I’d found a local NGO online that accepted me as a volunteer, and when I arrived they asked me to design, teach, and evaluate a short health curriculum at a couple of the nearby high schools.
Aside from the fact that health education is far from my area of expertise, I had also never single-handedly moved a program through all of the phases of design, implementation and evaluation… let alone in an impoverished, French-speaking country with very limited resources at my disposal.
I’ll be honest with you. Some aspects of the program were a near disaster. But I learnt some important lessons about program design along the way (and in subsequent training and experience, of course) that I think are worth sharing, so here goes!
Lesson 1: Do the heavy lifting before, not after, implementation. Create a plan, develop goals, do your research, etc. before you implement anything. It’s way easier to make changes and improvements to your program before you’ve started offering it. Any major issues with the design will be more visible if you actively seek them out before you hit the ground running.
Lesson 2: Don’t reinvent the wheel. This fits with the first point. If I’d done more research, I would have been far more aware of the free online resources at my disposal, and could have saved myself a lot of time and stress. There were also other charities in the area that were involved in similar work, and we really could have supported and learnt from each other. It’s also important to complete a needs assessment and ask yourself some hard questions before you invest too much time and energy. Questions like: Is there a real need for this service? How is my design unique and better than what already exists?
Lesson 3: Involve your stakeholders early and often. Developing a program in isolation reinforces any bias you may hold, allows bad ideas to go unchallenged, and leaves many good ideas undiscovered. There is no shortcut for the insight that diverse stakeholders can bring to the table. Not only are two heads better than one when it comes to idea generation, but you’ll also benefit from increased buy-in and awareness of your new program. People care more about what they’ve personally invested in. Even if your stakeholders don’t want to be deeply involved, at least make it easy for them to put in their two cents.
Lesson 4: Write it down. Keeping all of your ideas in your head is not a great plan: you forget things; things are miscommunicated or not communicated at all; and your successor (or anyone else involved in the project) will be forced to reinvent the wheel (#2) should you leave. Why not complete a logic model and a theory of change to clearly describe your program? Not only are they effective ways of communicating your program, but they will also force you to think through and articulate your desired outcomes and how you’re going to get there. Which leads us to…
Lesson 5: Set clearly articulated goals. How will you know if you’ve achieved what you set out to achieve if you don’t clearly state from the outset what that is? Why not set some SMART goals or, even better, use your logic model to help you outline how you will measure your short-term outcomes? Plan how you will measure your impact before the program even starts; it’s much easier to orient yourself towards (and to meet) known goals.
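For the spreadsheet-and-script inclined, even a rough logic model can live as plain structured data. Here’s a minimal, hypothetical sketch in Python (every program detail below is invented for illustration) that pairs each short-term outcome with the measure you’ll use for it, so the goals are written down before launch:

```python
# A hypothetical logic model for a short school health curriculum,
# kept as plain data so it can be shared, versioned, and argued over.
logic_model = {
    "inputs": ["volunteer teacher", "donated classroom time", "printed handouts"],
    "activities": ["weekly health lessons", "student Q&A sessions"],
    "outputs": ["lessons delivered", "students reached"],
    # Each short-term outcome is paired with how it will be measured,
    # so goals are articulated (Lesson 5) before the program starts.
    "short_term_outcomes": [
        {"goal": "improved health knowledge", "measure": "pre/post quiz scores"},
        {"goal": "increased class engagement", "measure": "attendance records"},
    ],
}

# A quick sanity check: does every outcome say how it will be measured?
unmeasured = [o["goal"] for o in logic_model["short_term_outcomes"] if not o.get("measure")]
if unmeasured:
    print("Outcomes with no measure yet:", unmeasured)
else:
    print("Every outcome has a planned measure.")
```

Nothing about this requires code, of course; the same table in a shared document does the job. The point is simply that an outcome without a planned measure should jump out at you before implementation, not after.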
Lesson 6: Don’t set completely random numerical targets. This may sound silly, but I’ve come across this frequently in non-profits. If you don’t have a benchmark or research to support a numerical target, don’t set one during the first round of a new program if you can help it. If you state that you want to increase student scores by 10%, enrolment by 40 people, or reduce teen pregnancies by 20%, you should have some research (be it external literature, internal history, etc.) that suggests that these are reasonable goals. If you don’t, you’re just pulling numbers out of a hat and risking disappointment, over-celebrating non-successes, or even worse, losing funding (even though you may have actually accomplished something great!). Leave the numbers out of your targets until you have some information to support them.
Lesson 7: Do a trial run! If you can, test your program with a small group to iron out the major kinks before it goes out to the wider public. Sometimes this just means roping in an unsuspecting colleague to listen to your speech, or asking a group of dedicated clients to come in for a couple-hour focus group session to give you feedback before the program is officially launched. This fits with #1 and #3: do your research, and get people involved.
Lesson 8: Evaluate for internal benefit. As an Evaluations Specialist, I can’t stress this point enough. There is so much internal value that can come from program evaluations. Ultimately, every non-profit exists to better some social, environmental, or other problem. Don’t you want to be sure that you are helping your clients in the best possible way? That you are getting the most bang for your buck? That your organization is reducing homelessness, finding foster homes for cats, or protecting a watershed in the absolute best way possible? Of course you do, because you care so darn much about that person/cat/planet. Evaluations aren’t just for your funders. Use them to improve what you do.
Lesson 9: Look at your data. Because I’m so serious about #8, I feel the need to reiterate it another way (#sorrynotsorry). What’s the point of collecting data if it just sits in a beautiful Excel file never to be looked at? I promise that you don’t have to be a brilliant statistician to pull interesting nuggets out of your data. Start small: where are the highest and lowest scores? Are there any patterns or relationships in your numbers? What might this mean, and how can it be used to influence your program design? I’d also refer back to #3 here: get other people involved in data analysis. They will see things that you don’t, and may have some helpful theories or insight into why scores might be high or low.
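To make “start small” concrete, here’s a minimal sketch in Python (the topics and scores below are made up for illustration) of a first look at pre/post survey data: no fancy statistics, just averages and a glance at which topic moved the most and which the least.

```python
import statistics

# Hypothetical pre/post quiz scores (out of 10) for a small health class.
# In practice these would come from your spreadsheet export.
scores = {
    "nutrition":    {"pre": [4, 5, 3, 6, 5], "post": [7, 8, 6, 7, 8]},
    "hygiene":      {"pre": [6, 7, 5, 6, 7], "post": [7, 7, 6, 7, 8]},
    "reproductive": {"pre": [2, 3, 2, 4, 3], "post": [3, 3, 3, 4, 4]},
}

def summarize(scores):
    """Return the mean pre score, mean post score, and the change per topic."""
    summary = {}
    for topic, s in scores.items():
        pre = statistics.mean(s["pre"])
        post = statistics.mean(s["post"])
        summary[topic] = {"pre": pre, "post": post, "change": post - pre}
    return summary

summary = summarize(scores)

# Which topic improved most, and which least? Those are the places to dig in.
best = max(summary, key=lambda t: summary[t]["change"])
worst = min(summary, key=lambda t: summary[t]["change"])
print("Biggest improvement:", best)
print("Smallest improvement:", worst)
```

The topic with the smallest improvement isn’t a verdict; it’s a question to bring to your stakeholders (#3): is the material, the teaching, or the measure itself the problem?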
Lesson 10: Use your data to inform program design changes. Because simply looking at the data and commenting “Oh darn! Our anti-drugs campaign is actually increasing drug use in 10- to 14-year-olds!” is not actually going to improve the program. Obviously. As mentioned in #8, your data should be used to inform improvements to your program (or, in some cases, its dissolution or the creation of new programs). Make an action plan, assign tasks, tackle the issues, and celebrate the successes!
What I’ve been trying to articulate here is: research what already exists, know what you want to accomplish, write down how and why you’ll do it, and get others involved. After the program has gone through a trial run, evaluate it and use the results to improve it.