Sudharshan Seshadri on Mapping Resources for Conducting Useful Evaluations

Hello! My name is Sudharshan Seshadri and I am currently pursuing my Master's degree in Professional Studies, specializing in Humanitarian Services Administration. My earlier post was a collection of resources useful for conducting program evaluations that serve a project's stakeholders.

In this post, I address the challenge of mapping available resources to conduct useful evaluations that aid the decision makers and stakeholders of a project or intervention. The word "resources" in this context covers the full range of opportunities that reveal the characteristics of the program or project to the evaluator.

The crucial points addressed below form the core of evaluation planning.

1) Identifying the "evaluand" through thematic evaluation documents/terms of reference, the organization's portfolio (e-resources), and ex-ante/ex-post evaluation reports. This is immensely important because evaluators must delineate the aims and objectives in a way that enables participants and stakeholders to understand the crux of the intervention.

2) Preparing an "evaluation crosswalk" if this is a first attempt at a procedural evaluation; the crosswalk can orient the participants of the evaluation.

3) Identifying the inputs, outputs, and outcomes and presenting them in a brief evaluation plan, which helps build coherent mechanisms that aid both decision makers and participant groups.

4) Framing a logic model from the inputs, outputs, and outcomes using logical framework analysis tools. This program theory should be circulated as a controlled document to unite multiple task-force teams.

5) Conducting an "interim evaluation campaign" to validate understanding and surface foreseeable impacts of the program, which strengthens formative evaluation capabilities.

6) Creating stakeholder interest, which draws in further capabilities for resource utilization and meaningful data analysis.
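For readers who keep their evaluation plans in a structured form, the logic model in step 4 can be sketched as a simple data structure. This is a minimal illustrative sketch, not a standard tool; the program name and all inputs, outputs, and outcomes below are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: inputs flow through outputs to outcomes."""
    program: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

    def brief_plan(self) -> str:
        """Render the model as the brief evaluation plan described in step 3."""
        lines = [f"Evaluation plan: {self.program}"]
        for label, items in (("Inputs", self.inputs),
                             ("Outputs", self.outputs),
                             ("Outcomes", self.outcomes)):
            lines.append(f"  {label}: " + ", ".join(items))
        return "\n".join(lines)

# Hypothetical example program (illustrative only)
model = LogicModel(
    program="Community Health Outreach",
    inputs=["staff time", "funding"],
    outputs=["workshops delivered"],
    outcomes=["improved health awareness"],
)
print(model.brief_plan())
```

Circulating even a lightweight artifact like this as a controlled document gives multiple task-force teams one shared statement of the program theory.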

Lessons Learned:

  • Traceability is much easier to establish when it is built into the process of evaluating the intervention.
  • In practice we often feel the need for feedback mechanisms, yet they frequently exist only in the code of conduct, waiting to be conceived and adhered to by evaluators.
  • Result-oriented tasks and value additions during the phases of evaluation are amplified toward the pre-set satisfaction levels.

Hot tip: Evaluation campaigning draws potential candidates to the evaluation. In essence, it expands the efforts of a minuscule group to address an issue that often needs far more attention.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
