Evaluation is More Than Just Tracking. It Matters How We Use and Share It.

Written by: Andrea N. Young, Evaluation Specialist

At the beginning of April, I released my masterpiece: the ‘2015 Annual Report on our Evaluation Findings.’ For most of you non-data-nerds, this 43-page document probably sounds like the ultimate snoozefest. And often you’d be right: data reports can be oh-so-boring and pointless. I mean, who wants to read a jillion pages of technical jargon if it ultimately just makes you say “oh, that’s vaguely interesting” a couple of times, and then gets put on the shelf until you need some scrap doodle paper?

Writing the Report

To avoid the recycling-bin fate of my magnum opus, I set out to learn the basics of good report writing. After some research (like this humorous, but on-point, blog post, or this article with helpful reminders), I learned a few things that helped to make my report a bit more user-friendly:

  1. I started by asking my team what they needed from the report. I spoke with our fund developer, our marketer, decision makers, and program staff to find out what would actually be useful to them – and then I made sure the report addressed those needs.
  2. Because the document was mostly for internal purposes, I kept the tone casual and used my own voice – I wrote like I speak, so it was more like a conversation than an academic paper. I kept the academic quality (references and all), but scrapped the uber-formal tone.
  3. I formatted the document consistently (so it was easy to follow) and used sub-headers that made the important bits obvious. For example, I included regular sections that I called “What does this mean for Propellus?” so that I could keep tying the data findings back to their practical applications.
  4. Following the advice of data visualization experts like DiY Data Design and Evergreen Data, I incorporated visuals – and not just graphs and tables. I tried to draw some of my own cartoons, and incorporated some of theirs.
  5. I learned that the report findings need to be shared in different formats for different audiences. So, while my team gets an executive summary and a full report, you, our fine larger audience, get a blog post (and a highly visual annual report, which will be available at – and after – our AGM).

Each section of the report started by exploring what our data said (‘What?’). I did my best to whittle down the ‘What?’ and explain what it actually meant for us (‘So What?’). As a team, we explored and identified our top priorities and how we would move forward with them (‘Now What?’). Every step was considered within the context of our ‘why’. Basically, the process looked like this:

[Diagram: the report’s flow from ‘What?’ to ‘So What?’ to ‘Now What?’, all framed by our ‘why’]

To take you through a speedy version of this process, let’s start by exploring what we found in the top-most section, and work our way down.

Using the Report

‘What’ (did we learn)? And ‘So What’ (does this mean)? A lot! To boil the report’s executive summary down even further (with the help of a couple of tacky visuals), here’s what we learned:


What: Generally, our members are 🙂 with our services, and value what we offer

So What: Great! We’ll keep doing what’s working! We should celebrate and share our success!


What: After using our services, members report that they are ‘made stronger’ specifically through team-building, new knowledge, and improved clarity of identity and/or direction. So basically:

[Cartoon: the recipe for a stronger organization – team-building + new knowledge + clarity of identity/direction]

So What: We’re not entirely sure yet what this means for us, but we think it’s important: we need to continue learning about how organizations become strong so that we can continue tweaking our projects to ensure they all include the right quantities of the most critical ingredients.


What: Members need further support in implementing their capacity building plans/ knowledge/ etc.

So What: This finding has brought up a host of questions for the Member Services Team, which has supported a re-examination of the scope and nature of some of our services. As a result, we’ve got a few new services coming down the pipeline in the next few months!


What: Members (and non-members) need more free services

So What: Gotcha! On it.


What: Members want/need ongoing opportunities to connect with each other

So What: Noted – we’re working on it! And, we need to spend even more time connecting with other capacity-builders too (we’re working on that too – some exciting projects are brewing)!


What: Culture-related training is super effective with teams (and less so with individuals)

So What: We’ve scrapped our culture workshop (as it delivered culture-related material to individuals rather than teams) and are looking at how we can make customized workshops (aka: ‘Facilitated Learning’) even more accessible to entire teams.


What: There is still some confusion around who we are and what we do

So What: Our members have often demonstrated that they want to promote us, but they don’t always have a clear understanding of the range of services we offer, or have access to the tools they need to effectively share with others what we do. We need to ‘get out there’ more, and ensure that we provide sufficient information and resources to explain who we are to our members and to the wider community.


What: Of the 187 unique organizations we served in 2015, 48% came back to us (for another workshop, learning community, consulting gig, etc.) at least one more time that same calendar year! That’s some impressive return customer-ing (thanks y’all)!

So What: Return customers are one of the best indicators of customer satisfaction. Our customers do come back, and often many times over. We know that they are (almost always) satisfied. So: we’ve scrapped most of the satisfaction measures on our evaluation forms, because actions speak louder than words. Instead, the evaluations are being used to ask more informative questions centred around quality control, outcome measurement, and critical self-reflection.
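For fellow data nerds: here’s a minimal sketch, in Python, of how a repeat-engagement rate like that 48% can be calculated from a year’s worth of service records. The organization names and the service_records_2015 list below are made up for illustration – this isn’t our actual dataset or code.

```python
# A hypothetical sketch: estimating the repeat-engagement rate from a list of
# service records, where each record notes the organization that was served.
from collections import Counter

# Each entry is one service interaction (a workshop, learning community,
# consulting gig, etc.) tagged with the organization it served.
# These names are placeholders, not real member data.
service_records_2015 = [
    "Org A", "Org B", "Org A", "Org C", "Org B", "Org B", "Org D",
]

visits_per_org = Counter(service_records_2015)
unique_orgs = len(visits_per_org)
returning_orgs = sum(1 for visits in visits_per_org.values() if visits >= 2)

repeat_rate = returning_orgs / unique_orgs
print(f"{unique_orgs} unique organizations served")
print(f"{returning_orgs} came back at least once more ({repeat_rate:.0%})")
```

In practice, the organization names would come from a registration or CRM export rather than being typed in by hand, but the counting logic stays the same.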


Sharing the Report

Now What? All of the findings were summarized and listed in point form in a ‘Recommendations’ table at the end of the report. The ‘Recommendations’ table basically looked like this (except it had 21 recommendations, and not just one):

[Image: a sample row from the ‘Recommendations’ table]

To come up with our priorities, we developed a scoring rubric to help us rate the importance of addressing each challenge. Using the rubric, we each scored the listed challenges on our own. Then, taking an average of all of our scores, we were able to identify our collective priorities. (This ‘ranking’ tool, and a description of how to use it, are available through our workshop ‘The Proof is in the Pudding: Evaluation Basics’).
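If you’d like to see the averaging-and-ranking step spelled out, here’s a rough sketch in Python. The challenge names and the numbers in rubric_scores are invented for illustration (and the real rubric is richer than a single score per person), but the mechanics – average everyone’s scores for each challenge, then sort – are the same idea.

```python
# A hypothetical sketch of the prioritization step: each team member scores
# every challenge using the rubric, we average the scores per challenge,
# and the averages rank our collective priorities. Illustrative data only.
from statistics import mean

rubric_scores = {
    "Clarify who we are and what we do": [4, 5, 4, 5],
    "Offer more free services": [3, 4, 3, 3],
    "Create ongoing member-to-member connection": [5, 4, 4, 4],
}

# Average each challenge's scores, then sort from highest to lowest priority.
priorities = sorted(
    ((mean(scores), challenge) for challenge, scores in rubric_scores.items()),
    reverse=True,
)

for avg_score, challenge in priorities:
    print(f"{avg_score:.2f}  {challenge}")
```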

With our mission and vision always at the centre, we discussed our identified priorities and developed a plan of action. We’ll review our priorities again in a few months to check in on our progress and tackle some things further down the list (as and when we perceive them to be necessary/possible). Behind the scenes, we’re always tweaking what we do, guided by your feedback (so THANK YOU for always so graciously providing it) and by continuously re-focusing on our ‘why’.

Our ‘why’ is what drives us forward, and our evaluation findings, like the ones shared in this blog, are helping us to take well-informed next steps.
