Kia ora koutou! My name is Jane Davidson and I run an evaluation consulting and capacity-building business based in New Zealand. My main clients are NZ central government agencies across multiple sectors (health, education, leadership development, corrections, etc.), but I also work with various other kinds of organizations and run evaluation training as well.
Hot Tip: Following on from Jack Mills’ aea365 blog post of 1/7 about project status reports, another tool that I find very useful is a skeleton report. This is an actual report, minus the content, written quite early in the evaluation project, showing all the headings/sections and a brief description of which piece of information drops in where. This ensures that (1) the client knows exactly what they are going to get as a deliverable – and if the proposed product doesn’t quite meet their needs, we can negotiate that earlier rather than have them disappointed once the report comes in – and (2) as the evaluator, I (and anyone I am working with) can see exactly where each piece of evidence is going to fit into the overall puzzle – no more “spare parts” of data that I wasted informants’ (and my) time collecting, AND no more “Oops, we don’t have quite enough evidence to answer this question, but we have more than we need to answer that one!” I’ve found skeleton reports to be an incredibly useful discipline for making sure that every piece of evidence counts and that I do end up with everything I need to answer the evaluation questions.
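For readers who like to see the idea made concrete: a skeleton report is essentially a mapping from report sections to the evidence that will fill them. Here’s a rough, purely illustrative sketch in Python – the section names, questions, and data sources are all invented for the example, not taken from any real report:

```python
# Purely illustrative sketch: a skeleton report as a mapping from each
# report section to the evidence expected to fill it. All section names,
# questions, and data sources below are invented for the example.
skeleton = {
    "Executive summary": ["One short, evaluative answer per overarching question"],
    "Q1: How well does the programme meet participants' needs?": [
        "Participant survey (items 3-7)",
        "Interviews with programme staff",
    ],
    "Q2: What outcomes are participants achieving?": [
        "Pre/post assessment data",
        "Follow-up interviews with graduates",
    ],
}

# Walking the skeleton before data collection shows exactly where each
# piece of evidence will land - and flags sections with no evidence yet.
for section, evidence in skeleton.items():
    print(section)
    for source in evidence:
        print("  -", source)
```

The point of the walkthrough is exactly the discipline described above: every planned piece of data has a home before it is collected, and any section without evidence shows up as a gap early, not at writing-up time.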
Hot Tip: I structure a lot of my reports around the evaluation questions themselves, i.e. one major section for each question. In each section the criteria and evidence are laid out, transparently interpreted, and woven back together to provide a direct, evaluative answer to that question. Similarly, the executive summary consists of 2 pages with 7 +/- 2 short paragraphs, each of which presents one of the 7 +/- 2 overarching evaluation questions followed by a succinct, direct, and explicitly evaluative answer. There’s a bit more about these ideas in an earlier online Journal of MultiDisciplinary Evaluation (JMDE) article and in my recent AEA 09 presentation mentioned by Amy Germuth in her aea365 blog post of 1/1. And I’ll talk some more about this, if there’s interest, the week of 17-23 January (US time; that’s 18-24 January for those on the Asia/South Pacific side of the dateline) when I’m online as AEA’s “thought leader” – please join the asynchronous discussion for the week!
As Jane notes, she will be the discussant for the AEA Thought Leaders Series the week of January 17-23 for those who wish to connect with her directly. Learn more and sign up at http://www.eval.org/thought_leaders.asp
Kseniya, it just occurred to me that a specific example might help.
I recently wrapped up a huge evaluation of a senior leadership and management development strategy (called SLMD) that spans over 100 agencies and organizations in the New Zealand State Services.
The initial list of evaluation questions was enormous, well over 100. This is what happens when you talk to a LOT of stakeholders and ask them what they want to know!
To scope it down to size, we went through a process to identify the “must know” questions and got the list down to 21 questions (although some of these had 2-3 questions packed into them).
I then sorted the 21 questions into 7 groups and wrote a broader, overarching question to cover each group. Here’s one of those groups of questions:
a. To what extent has SLMD added to the strength and diversity of the State sector’s leadership pool? How does this contribution compare with (i) the initially identified need/target and (ii) what could reasonably have been achieved without SLMD?
b. What changes are being seen in the quality, quantity and diversity of the initial pool of applicants to chief executive (CE) and other very senior positions? Is there more recruiting for senior positions within the public sector? How many SLMD programme participants or graduates are appearing in this pool? How many positions have been appointed from the initial pool, and how does this compare with pre-SLMD searches?
c. How many senior leadership appointments have been made (i) from within the State sector and (ii) of an individual who has participated in one or more SLMD offerings? For (ii), to what extent did SLMD contribute to those individuals’ viability as candidates at the time of appointment?
The overarching question I wrote to cover this set was:
How effective is SLMD overall for building a strong and diverse leadership pool for the State Services?
The important point here was that the overarching question in particular had to be explicitly evaluative (even if the stakeholder-authored questions might have been less so), because that’s what would push me not just to trot out the statistics and stories but to be explicit about what it all added up to – i.e. to evaluate, not merely describe/research.
Hope this helps clarify. 🙂
Jane
Hi Kseniya.
Yes, I often end up with a large list of evaluation questions, so what I do is sort them into 7 +/- 2 categories. Each category has a broad, overarching evaluation question, and under it sits a series of more detailed questions. I answer only the big-picture, overarching questions in the exec summary (although as supporting evidence I may refer to something very specific); otherwise the exec summary gets too long and people don’t read it. But in the main body of the report we get right down to specifics on all the evaluation questions.
As an aside, prior to launching into the project I include a day-long (or longer) session with clients to discuss and/or develop evaluation questions. If the list they put forward is extremely long, there is usually a prioritisation and scoping phase where we flag “must know” questions and distinguish them from “well worth knowing” and “nice to know” questions. Depending on the budget, I commit to answering all the “must know” questions, as many of the “well worth knowing” ones as feasible, and only go after the “nice to know” ones if it’s extremely easy/cheap/quick to do so.
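If it helps to see that triage spelled out, here’s a rough sketch in Python – the questions, priorities, cost figures, and budget are all invented for illustration; this is just the logic, not a tool I actually use:

```python
# A rough sketch (invented numbers) of the triage logic: commit to all
# "must know" questions, take as many "well worth knowing" as the budget
# allows, and pick up "nice to know" only when nearly free.
questions = [
    ("Q1", "must know", 5),          # (id, priority, estimated cost in days)
    ("Q2", "must know", 3),
    ("Q3", "well worth knowing", 4),
    ("Q4", "well worth knowing", 6),
    ("Q5", "nice to know", 1),
]
budget_days = 14
cheap_threshold = 1  # "nice to know" only if this cheap

selected = []
remaining = budget_days

# "Must know" questions are committed to regardless of cost.
for qid, priority, cost in questions:
    if priority == "must know":
        selected.append(qid)
        remaining -= cost

# Fill the remaining budget with "well worth knowing", cheapest first.
worthwhile = sorted(
    (q for q in questions if q[1] == "well worth knowing"),
    key=lambda q: q[2],
)
for qid, _, cost in worthwhile:
    if cost <= remaining:
        selected.append(qid)
        remaining -= cost

# "Nice to know" only if it is almost free and still fits the budget.
for qid, priority, cost in questions:
    if priority == "nice to know" and cost <= min(cheap_threshold, remaining):
        selected.append(qid)
        remaining -= cost

print(selected, "days left:", remaining)
# -> ['Q1', 'Q2', 'Q3', 'Q5'] days left: 1
```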
Hope this helps! And, thanks for responding!
Jane
Thank you so much for sharing this. What would you recommend if the TOR (terms of reference) contains many more evaluation questions? In my practice it is often 20 to 30 or more. Wouldn’t the proposed report structure lead to a reduction in the detail requested by stakeholders?
Thanks so much, Stephanie! 🙂 Hope to ‘see’ you next week in the thought leader discussion.
I have found Jane’s article in JMDE on unlearning our social science habits very useful in helping others understand why conventional report formats often don’t cut it for our clients. Thanks, Jane.