Posted by Susan Kistler in Collaborative, Participatory and Empowerment Evaluation, Internal Evaluation
We are Sue Hunter, a librarian and the Planning & Evaluation Coordinator with the National Network of Libraries of Medicine (NN/LM) Middle Atlantic Region (MAR) (http://nnlm.gov/mar/) at New York University Langone Medical Center, and Cindy Olney, Evaluation Specialist with the NN/LM Outreach Evaluation Resource Center. Funded through the National Library of Medicine, NN/LM is a nationwide program for a network of health sciences libraries and information centers (called “network members”) with the goal of advancing the progress of medicine and improving public health through equal access to health information. The MAR supports network members in Delaware, New Jersey, New York, and Pennsylvania.
We conducted a series of focus groups using Appreciative Inquiry to obtain feedback from network members on the NN/LM MAR program, with the purpose of involving them in the MAR’s development of a 5-year contract proposal. The focus groups were conducted by staff who work in the NN/LM MAR program. Due to a short timeline, they were held online using Adobe Connect web conferencing software. We selected the Appreciative Inquiry method because its format allows network members to focus their discussion on what is valuable to them within the realm of the MAR programs and services.
Hot Tip: Appreciative Inquiry is a useful tool for generating affirmative discussion in a focus group. Participants were able to describe peak experiences they had with MAR programs and services, and to pose concrete suggestions, based on those experiences, for the MAR’s future development. We got exactly the type of information we needed for our proposal, without a lot of “off-topic” discussion, allowing us to analyze the findings quickly and put them to use. The affirmative framing of the questions allowed for a comfortable and honest exchange between network members and staff.
Lesson learned: The focus groups were conducted by the MAR staff. This allowed all staff to be included in the process and staff members obtained immediate feedback about their program areas directly from network members. The interview guide was simple and straightforward, so that even staff with minimal evaluation experience could participate.
Rad Resource: Adobe Connect web conferencing software. We conducted our focus groups online using Adobe Connect, which has a built-in audio recorder. Sound quality is good, and the playback and pause options made transcription fairly easy. Conducting the focus groups online was convenient for both the facilitator and the participants. Adobe Connect is not a free tool, but you can request a free trial to explore its many options. http://www.adobe.com/products/acrobatconnectpro/
This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to firstname.lastname@example.org. Want to learn more from Sue and Cindy? They’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.
We are Rachel Leventon and Susan Wolfe, consultants at CNM Connect where we provide evaluation and capacity-building services and training to non-profit organizations in North Texas.
CNM Connect offers a Non-Profit Management Certificate. For this series, we developed and teach the seven-hour Program Planning and Evaluation Workshop. We combine three recognized approaches to expose participants to basic program planning and evaluation concepts, and we stress the importance of evaluable, thoughtfully designed programs:
- We use Wiseman, Chinman, Ebener, Hunter, Imm, and Wandersman’s (2007) Getting to Outcomes™ 10-Step Model to frame the workshop.
- To deepen the program design portion of the workshop, we present John Gargani and Stewart Donaldson’s Theory-Driven Program Design model, highlighting the development of Theories of Change and the importance of constituent values in program design.
- In the evaluation portion of the workshop, we introduce workshop participants to logic models using a modified version of Lien, Greenleaf, Lenke, Hakim, Swink, Wright and Meissen’s (2011) Tearless Logic Model.
Using these three resources in combination with CNM Connect’s outcomes-based program evaluation methodology, we ensure that the material is interesting and accessible to all participants, regardless of their background or specific interest in program planning or evaluation.
- To get started with the 10 steps, check out Getting to Outcomes™: 10 Steps for Achieving Results-Based Accountability.
- Be on the lookout for Gargani and Donaldson’s forthcoming book: Practical Program Design and Redesign: A Theory-Driven Approach to Program Development and Developmental Evaluation.
- The Tearless Logic Model is presented in the December 2011 issue of the Global Journal of Community Psychology Practice. Each issue of this open-access, online, peer-reviewed journal presents articles, tools, and other information that evaluators will find useful.
- While there are many other great evaluation and program design models and tools available, we have found that introducing too many different models confuses students. Instead, we provide a detailed list of additional resources they can explore if interested.
- To learn more about Theory-Driven Program Design and Theory-Driven Evaluation, attend the annual AEA Conference or AEA Summer Institute and look for sessions presented by John Gargani or Stewart Donaldson.
The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPF TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.
We are Nichole Stewart and Laura Pryor, and we’d like to share a preview of our presentation at the upcoming AEA 2013 conference. Our session, Performance Management to Program Evaluation: Creating a Complementary Connection, will use a case study of a Los Angeles-based juvenile offender reentry program to demonstrate how “information and knowledge production” can be coordinated for performance management (PM) and program evaluation (PE).
Lessons Learned: There IS a difference!
Distinguishing between PM and PE has historically presented challenges for program directors and the public agencies and non-profit organizations that fund them. Programs have to grapple with day-to-day operations as well as adapting to evolving frameworks for understanding “what works”—from results-based accountability to continuous quality improvement to evidence-based everything. Evaluators are frequently called upon to engage in both PM and PE simultaneously; however, the distinctions between the tasks are not always clearly understood or articulated in practice.
Lessons Learned: There IS a connection!
Fortunately, several authors have explored the relationship between PM and PE and outlined how PM and PE can complement one another with regard to data collection and analysis:
- Information complementarity: Use the same data to answer different questions based on different analyses (Kusek and Rist, 2004).
- Methodical complementarity: Use similar processes and tools to collect and analyze data and ultimately convert data into actionable information (Nielsen and Ejler, 2008).
- Sign in to www.eval.org and access the Spring 2013 issue of New Directions for Evaluation, Special Issue: Performance Management and Evaluation.
- Use the first chapter of Program Evaluation & Performance Measurement as a PM and PE primer.
- Download a free PDF of David E.K. Hunter’s Working Hard & Working Well.
- Read the Performance Management and Evaluation: What’s the Difference research brief released by Child Trends.
Source: Child Trends, Research-to-Results Brief (January 2011)
- To assist clients with implementing a new PM system, create a “Performance Measurement Activity Calendar” that outlines when data will be collected, who will collect it, how it will be stored, and when it will be analyzed.
- Be sure to attend our session in Oak Lawn on October 18th at 11:00, as well as other sessions that cover these topics in depth. A few we’re looking forward to: Choosing the Right Database Software; Integrating Evaluation and Performance Management to Inform Evidence-Based Decision-Making; Evaluating Workforce Development Programs: What Are We Measuring? What Are We Missing?; and Technical Assistance: Approaches to Collecting Actionable Evaluation Data.
The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Want to learn more from Nichole and Laura? They’ll be presenting as part of the Evaluation 2013 Conference Program, October 16-19 in Washington, DC.