SEA Affiliate Week: Tips for Engaging Stakeholders in Logic Modeling by Moya Alfonso

My name is Moya Alfonso, and I’m an Associate Professor at the Jiann-Ping Hsu College of Public Health at Georgia Southern University. I would like to share a few tips on logic modeling and on effectively engaging stakeholders at each step of the process, drawn from my eighteen years of experience engaging community stakeholders in community health research and evaluation.

Logic modeling is more than just determining inputs, activities, outputs, and outcomes. When working in community settings, logic modeling should be a collaborative process that engages stakeholders in developing the mission, vision, and visual representation of their program from start to finish. Stakeholders can help specify programmatic activities and related outputs, and delineate short-, middle-, and long-term program outcomes. DoView is a great low-cost tool to aid in the collaborative logic-modeling process.

Hot Tips:

Below are some tips for engaging stakeholders throughout the logic-modeling process:

  1. Establish a diverse stakeholder advisory group: Community stakeholders bring a range of skills to the table that can contribute to the evaluation process. When working with the advisory group, incorporate active-learning strategies that produce usable information for the logic model. For example, having advisory members develop an elevator speech can help inform the program mission and vision that will guide logic model development.
  2. Engage stakeholders in meaningful discussion: In addition to reviewing program documents, stakeholder discussion should inform logic-model development. A focus group discussion at the beginning of the process can lay a critical foundation for the model. For example, you could ask advisory group members to think back to the last time their program worked well and what happened as a result. Their answers could illuminate key program outcomes to include in the model.
  3. Don’t be afraid to get creative: Effective collaborative logic modeling may require you to spread large sheets of paper across a conference room table, accompanied by a bunch of brightly colored markers. Rather than taking the standard linear approach to logic modeling, have advisory members think creatively about the structure of the logic model. The key is to create a visual representation of the program and its outcomes that makes sense to the advisory board. This will increase understanding and buy-in and will improve implementation fidelity.
  4. Hand over the keys: Collaborative approaches to logic modeling require you – the evaluator – to get out of the driver’s seat and hand over the keys to the advisory group. This can be challenging for evaluators who are used to having complete control over the logic-modeling process. Working with the advisory group in a collaborative process, however, will yield a more powerful visual representation and a greater understanding of the program!

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. All contributions to aea365 this week come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

1 thought on “SEA Affiliate Week: Tips for Engaging Stakeholders in Logic Modeling by Moya Alfonso”

  1. Brendan Narancsik

    Hello Ms. Alfonso,

    I’m currently taking an introductory course in Program Evaluation as a requirement for my Master of Education. As the subject of evaluation is very new to me, I’ve been very appreciative of the quality and variety of the articles I’ve read on AEA365. They have helped to provide clarity for a subject that, for a new student in the field, can at times appear quite dense and complex.
    I chose to respond to your article because I feel that you espouse the approach to evaluation – specifically within the context of logic modeling – that makes the most sense to me. Numerous readings in my course have focused on the purpose of evaluation. I was somewhat turned off by those researchers who have suggested an approach to evaluation that strives for nothing more than accuracy. As such, I appreciated the view of Patton, who “advocated an active role for evaluators in promoting and cultivating use. Evaluators, he maintained, had responsibilities to (a) help decision makers identify their evaluation needs, (b) determine, in consultation, what information would best address those needs, and (c) generate findings that would yield the type of information needed by the intended user” (Shulha & Cousins, 1997). Your post reflects this notion perfectly and I appreciate the perspective and the tips you provide.
    I have a few questions, and if time allows, would appreciate your thoughts in response.

    To what extent do you see technology impacting the creative approach that you outlined in point 3? You mentioned DoView as a potential tool to aid the collaborative process. Are there other tech tools you have used and found effective?
    I would imagine many evaluators find your final point difficult. When giving up control, does the evaluator still seek to guide the direction the logic model will take? I suppose what I’m asking is: does the evaluator always need to act as a facilitator, or should they allow the process to evolve completely organically and collaboratively and accept whatever the final result is? If the latter, is there a potential for the logic model to take on a form that the evaluator may find problematic?
    When establishing an advisory group, is there an ideal number of participants to involve in the process? Obviously, more people means more ideas and greater input, but I feel that this may also make it harder to reach a consensus. In your experience, have you found this to be the case?

    Many thanks for an informative posting.

    Brendan

    Reference:
    Shulha, L., & Cousins, B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.
