AEA365 | A Tip-a-Day by and for Evaluators


Greetings! I am Robert Hoke, an independent evaluation consultant based in Indianapolis, Indiana, and a former chair of the Independent Consulting TIG. I am a big proponent of shaking up the traditional conference session with a little “speed-dating.”

We have all sat through sessions in which the presenters give a good presentation but leave time for only one question. (And that question is usually more of a comment than a question.) The audience members then file out and move to the next session with no opportunity to meet one another. For the Independent Consulting TIG, our response has been to mix things up with one session that uses a fast-paced, more interactive, speed-dating approach. (This technique is also known as SpeedGeeking.)

The Independent Consulting TIG has offered versions of this session nine times since Gail Barrington first shared the process in 2005.

The Format:

  • Tables of six to nine participants
  • Each table has a table leader/presenter. Each table leader presents on a different topic
  • The table leader presents their topic for 10-15 minutes; half of the time should be devoted to questions
  • Participants then move to another table
  • Repeat, Repeat, Speed Up, Repeat…..

The table leaders prepare a two-page summary of helpful hints and resources for the participants. The written summaries are very similar to the type of information included in the AEA365 blog: best practices, helpful hints, key resources, etc.

What I like about “SpeedGeeking”:

  • Like a tapas restaurant, participants can “taste” a wide variety of topics in a short period.
  • The method allows for much more interaction between the participants and the speaker, with each round almost equally divided between presentation and questions.
  • Participants (especially introverts) are more likely to ask a question in a small group setting.
  • The opportunities for networking and information exchange are off the chart.

Hot Tips:

  • Avoid the temptation to make the session more “organized” by having the table leaders move instead of the participants. The disorder of the transition between rounds increases the energy and allows participants to meet more people.
  • The length of each round can vary. For example, our sessions start with three fifteen-minute rounds and then four faster ten-minute rounds. The table leaders become more comfortable with their topics and can move faster.
  • It is critical that the timekeeper is very firm about keeping things moving and also gives the table leaders a two-minute warning.

Rad Resources

The American Evaluation Association is celebrating Independent Consulting TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Charmagne Campbell-Patton and I am an independent evaluation consultant. About a year ago, I made the transition from my role as program manager and internal evaluator at an education nonprofit, to an external evaluation consultant. I continued working with my former employer as a client and, in my naiveté, I thought the transition would be relatively straightforward. I figured that since I knew the inner workings of the organization and had strong relationships with most staff members, it would be easy to continue to conduct useful evaluation.

Lesson Learned: My first mistake was failing to recognize and address the fact that, as the program manager, I had been the primary intended user of the evaluation results. When I made the transition to an external consultant, I needed to be much more intentional about designing evaluations that met the needs of the new intended users.

Hot Tip: Be aware of how your position affects use. The personal factor is different in different relationships – internal and external.

Lesson Learned: Process use is different internally and externally. As a staff member, I used to be able to identify opportunities for process use in an ongoing and informal way. As an external consultant, however, I again had to be much more intentional about identifying opportunities and planning for process use.

Hot Tip: External evaluators need to be intentional about seeking opportunities to support evaluative thinking across the organization through more formalized process use.

Cool Trick: One way to engage staff is a reflective practice exercise. Bring staff together to reflect on the question: “What are things you know you should be doing but aren’t?” This question gets people thinking about potential personal barriers to using information, which sets the stage for discussing organizational barriers to evaluation use. Next, identify enabling factors that support and enhance use, and ways to overcome barriers to use.

It’s also worth noting that despite some of the challenges noted above, the transition from internal to external also gave me a new perspective on evaluation use. Once I recognized some of the barriers to use as an external consultant, I was actually able to use my position to promote use more effectively than I did while internal. The added distance gave me some leverage that I lacked as a staff member to call attention to opportunities and challenges to evaluation use across the organization.

Rad Resources: Essentials of Utilization-Focused Evaluation, Michael Quinn Patton, Sage (2012).

Consulting Start-Up and Management, Gail Barrington, Sage (2012).

Using Reflective Practice for Developmental Evaluation, Charmagne Campbell-Patton, AEA365 March 2015.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Keiko Kuji-Shikatani, the current chair of the Evaluation Use Topical Interest Group (TIG), one of the original AEA TIGs. The Evaluation Use TIG was born of the interest in evaluation utilization in the 1970s, extending into both theoretical and empirical work on Use in the 1980s and 1990s, and to a broader conceptualization of use and influence in the 2000s. The Evaluation Use TIG is committed to understanding and enhancing the use of evaluation in a variety of contexts and to maximizing the positive influence of evaluation through both the evaluation process and the results produced.

Program evaluation began with the desire to seek information that can be utilized to improve the human condition. Use may not be apparent to those who are not internal to an organization since the process of using evaluation requires discussions that may be very sensitive in nature. This week’s AEA365 will examine how Evaluation Use TIG members are striving to support various efforts in diverse and complex contexts.

As for me, as an internal evaluator for the Ontario Ministry of Education, utilization of evaluation is the norm in what I do every day in pursuit of reaching every student. The world in which our students are growing up, and in which they will be leaders and learners throughout their lifetimes, is a complex and quickly changing place. In order to support students so they can be the best that they can be, those in the system need to work smarter and use evaluative thinking to guide every facet of improvement efforts.

Rad Resource: Evaluative thinking is systematic, intentional, and ongoing attention to expected results. It focuses on how results are achieved, what evidence is needed to inform future actions, and how to improve future results. One cannot really discuss Evaluation Use without Michael Quinn Patton – check out http://www.mcf.org/news/giving-forum/making-evaluation-meaningful.

Our work as internal evaluators involves continually communicating the value of evaluative thinking and guiding developmental evaluation (DE) by modeling the use of evidence to understand more precisely the needs of all students and to monitor and evaluate the progress of improvement efforts.

Hot Tips: Check out how evaluation (http://edu.gov.on.ca/eng/teachers/studentsuccess/CCL_SSE_Report.pdf) is used to inform next steps (https://www.edu.gov.on.ca/eng/teachers/studentsuccess/strategy.html) and what that change can look like (http://edu.gov.on.ca/eng/research/EvidenceOfImprovementStudy.pdf).

In our work, the ongoing involvement of evaluators, who are intentionally embedded in program and policy development and implementation teams, contributes to modeling evaluative thinking and guiding DE that builds system evaluation capacity. The emphasis is on being a learning organization through evidence-informed, focused improvement planning and implementation.

Hot Tips: Check out how evaluative thinking is embedded in professional learning (http://sim.abel.yorku.ca/) or how evaluative thinking is embedded in improvement planning (http://www.edu.gov.on.ca/eng/policyfunding/memos/september2012/ImprovePlanAssessTool.pdf).

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, my name is Catherine Nameth, and I’m the Education Coordinator for an NSF- and EPA-funded research center at the University of California, Los Angeles. As Education Coordinator, my primary job is not evaluation, so I have to act creatively in order to integrate evaluation into my work and balance the need for internal evaluation with my other administrative and research responsibilities.

Hot Tip: Be an active learner and an active listener. Get to know your colleagues and their areas of expertise. Go to meetings, listen, and be open to learning about your colleagues and what they do. Your understanding of them and their work will inform your understanding of your organization as well as its people and programs/research. This understanding can then inform how you design surveys and collect evaluation data. People who know you are more likely to respond to your surveys and other “official” evaluation requests, and when they respond, you get the information you need!

Rad Resource: Map it out! Use Community Solutions’ map for “How Traditional Planning and Evaluation Interact.” This map displays how an evaluation logic model (inputs-activities-outputs-outcomes), situated horizontally, interacts with program planning (goals-objectives-activities-time frame & budget), which is modeled vertically. In using this map, you’ll see that the “activities” of each model intersect, and this cohesive visual aid also serves as a reminder that program planning goals and evaluation outcomes should, and can, inform one another. Use this map to keep yourself focused, which is really important when your primary responsibilities include many aspects other than evaluation, and to help you show your organization’s leadership what you are doing and why you are doing it.

Hot Tip: Have an elevator pitch at the ready. When your work includes evaluation but is not entirely about evaluation, you need to be able to explain quickly and concisely what you are evaluating, why you are evaluating it, what information you need, and how your colleagues can help you by providing this needed information . . . which they will be more willing to do if they know you!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Megan Grimaldi, and I work for the Research, Evaluation, and Innovation department of Communities In Schools. This year, I was happy to serve as the program chair for the Internal Evaluation TIG at the AEA conference in Denver.

Lessons Learned: One of my favorite things about the AEA conference is that it brings together evaluators from all different backgrounds and fields to share their experiences. As an internal evaluator, my favorite conversation by far was the one around the many hats that an internal evaluator wears, and how internal evaluators balance their desire to promptly assist their colleagues with their desire to focus on the evaluations for which they are responsible.

The conversation started during the Internal Evaluation TIG meeting. Someone mentioned the short timelines that internal evaluators often face. Because we are internal to our organizations, our colleagues, who may be only a desk away, often feel comfortable coming to us and saying, “Can you get this analysis to me by close of business tomorrow?” Not only are deadlines sometimes rushed, but there are also times when we are asked to do things only tangentially related to our work. For example, many evaluators are fluent in data analysis, while some of our coworkers may not be sure how to use a spreadsheet; their specialties might be working with constituents in the field, or marketing, or fundraising. With our specialized knowledge, our role as evaluator may quickly evolve into a role as teacher or tech guru.

I brought this topic up in a fantastic presentation, Engaging Stakeholders in Internal Evaluation. Kristina Moster and Erica Cooksey from the Cincinnati Children’s Hospital, and Danielle Marable and Erica Clarke from Massachusetts General Hospital, presented on ways to engage various stakeholders in conducting internal evaluation. They helped me reframe my thinking around urgent or special requests. It’s actually positive that coworkers feel comfortable approaching us. In some organizations, people do not even realize that there is an evaluator to approach! And if the task is not exactly “evaluation,” we can still turn the task into an opportunity to share ideas around evaluative thinking – and lay the groundwork for future evaluation projects. When you are an approachable internal evaluator, you build a rapport with your coworkers, and evaluation projects start to come your way. Communicating the parameters of your role will become easier once you have formed positive working relationships.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Stanley Capela, and I am currently Vice President for Quality Management and Corporate Compliance Officer for HeartShare Human Services of New York, a $140 million multi-service organization.

As with most government-funded organizations, we have to show that we are compliant with regulations and at the same time meet certain performance metrics. As a result, I am confronted with how to create a quality assurance system that meets performance metrics and incorporates a quality improvement process. Using graphs, we identified a series of deficiencies and sites that had poor performance. Then we drilled down further, identifying areas that were cited as repeat deficiencies by state auditors. With this information, we developed a series of trainings focused on those deficiencies. As a result, we reduced repeat deficiencies in our developmental disabilities programs. The key was to present the data graphically in a way that let us pinpoint the specific sites that had the problem and develop a plan to improve performance.

Hot Tip: When setting up an internal monitoring system, focus on and prioritize the areas in which the program must be compliant with government agencies. We select five to ten items and develop performance metrics. For our child welfare programs, we focused on areas such as adoption finalizations, AWOLs, client contacts, service plan timeliness, and length of stay. Next, we set up a dashboard with appropriate charts; convene the leadership team; review reports; identify challenges; develop interventions; and review progress after three months. After reviewing the data, we pinpoint which sites fail to meet their targets (see the sketch below). Over time, the program sees improvement and realizes that data utilization can lead to positive change.
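To make the “pinpoint which sites fail to meet targets” step concrete, here is a minimal sketch in Python. It assumes the performance metrics are exported to a CSV with hypothetical columns named site, metric, value, and target; the column names and file name are illustrative, not taken from HeartShare’s actual system.

```python
# Hypothetical sketch: flag sites that miss their performance targets.
# Assumes a CSV export with columns: site, metric, value, target
# (all names are illustrative, not from the original post).
import pandas as pd

def flag_underperforming_sites(path: str) -> pd.DataFrame:
    """Return one row per site/metric pair where performance is below target."""
    df = pd.read_csv(path)
    df["gap"] = df["value"] - df["target"]          # negative gap = target missed
    missed = df[df["gap"] < 0].sort_values("gap")   # worst shortfalls first
    return missed[["site", "metric", "value", "target", "gap"]]

if __name__ == "__main__":
    missed = flag_underperforming_sites("quarterly_metrics.csv")
    # Count how many metrics each site missed, worst first, so the
    # leadership team can focus its quarterly review on a few sites.
    print(missed.groupby("site").size().sort_values(ascending=False))
```

The sorted gap column puts the worst site/metric pairs first, which mirrors the drill-down described above: leadership reviews a short, focused list rather than every chart on the dashboard.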

Lessons Learned: One major problem with this approach is that when you focus on too many areas, you get bogged down and accomplish little or no improvement. Make sure everyone has a clear understanding that you are a team and that the evaluators are not out to get anyone. Often program directors focus on placing blame as opposed to dealing with the problem. The key is program staff owning the data and realizing that there are successes as well as challenges. In other words, perceptions can make a difference in how you approach quality assurance and performance measurement as you create a quality improvement culture. The other major issue is making sure the facilitator and the individual preparing the data are independent of, and separate from, the program.

Rad Resources: Quality Evaluation Template: How to Develop a Utilization Focused Evaluation System Incorporating QI and QA Systems by Stan Capela.

Council on Accreditation – look at the Performance Quality Improvement (PQI) standard.

Council on Quality and Leadership and their Personal Outcome Measures (POMs) method.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello evaluation colleagues! We’re Rachel Albert, Vice President of Learning and Impact, and Laura Beals, Director of Evaluation, from Jewish Family and Children’s Service (Boston, MA). Our department is responsible for the internal evaluation of 44 programs, collectively serving over 17,000 people a year.

At JF&CS, we face two challenges in evaluation management. First, we have multiple external and internal stakeholders – including foundations, federal and state grantors, individual donors, agency leadership, program managers, and staff – each of whom has a different perspective on what sorts of data they need. Second, instead of grants dictating the evaluation resources available to each program, our department is funded by overhead. This means it’s up to us to apportion our department’s evaluation resources thoughtfully across all 44 programs for maximum benefit.

Lessons Learned: To meet these challenges, we developed a tool we call TIERS (“Tool for Intra-agency Evaluation Resource Sharing”). TIERS helps us leverage our resources on each program to answer the questions most relevant to its stakeholders.


TIERS arranges each program’s evaluation activities into a pyramid of tiers. As you go higher in the pyramid, you are looking for stronger and stronger evidence that your program is achieving its intended impact. The pyramid is intended to be both cumulative and sequential: a program should not go up a tier until it has a robust implementation of the previous tier in place.

Hot Tips:

  • This is not a race: It’s ok to stop at whatever the right tier is for a given program based on its evaluation needs and staff resources.
  • Higher tiers require more resources from both the internal evaluator and program staff.
  • Do not underestimate the difficulty of establishing even just a rigorous Tier 1 across a large agency!

We presented this tool in a demonstration session at Eval 14; check out the AEA e-library for our slides and handout.

Rad Resources: If you are looking for additional information about resource allocation for evaluation, here are a few places to start:

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello AEA365ers! We are Suzanne Markoe Hayes (Director) and Elaine Donato (Internal Evaluator) from the Evaluation and Research Department at Volunteers of America Greater Los Angeles (VOALA), a large non-profit organization whose mission is to enrich the lives of those in need.

One program we support is VOALA’s largest emergency shelter, located in South Los Angeles, an area known for having the densest homeless population in Los Angeles County. As part of an initiative led by United Way Greater Los Angeles to end chronic homelessness by 2016, VOALA’s shelter joined other homeless service providers in South L.A. to design and implement a Coordinated Entry System (CES). To develop such a system, participating service providers were required to join forces for the very first time. The collaborative was going to be a challenge given the extensive history of homeless service providers in South L.A. operating with scarce resources and competing for the same scraps of funding.

Human service organizations are being asked to collaborate strategically to address social issues, and they must do so with their existing limited resources. For the majority, this means having no funding for a third-party evaluator and no support from an internal evaluation department. Recognizing these limitations, VOALA contributed its internal evaluation team to support the collective impact work of the South L.A. CES collaborative. We implemented a process evaluation to help identify the overarching collaborative goals, map the processes that would occur, and define each organization’s role. As a result, the South L.A. CES team successfully designed a unique system to link chronically homeless individuals in their community with the most appropriate services and housing.

Here are hot tips to implement a collaborative process evaluation:

Hot Tip #1: Make clear to all participating organizations that the evaluator is there to assist all agencies, not just their own.

Hot Tip #2: Create process maps to help identify each organization’s role in the process. As a key element for continuous quality improvement (CQI), process maps can also be useful in tracking the activities related to achieving desired outcomes.


Hot Tip #3: Create a safe, open environment where team members are allowed to share their innovative ideas on how to better serve the target population and strengthen existing processes.

Hot Tip #4: Produce dashboard reports and share in biweekly meetings to inform decision-making and track team goals and desired outcomes.

Rad Resource: Check out the Center for Urban Community Services for their training on CQI methods, including process maps.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Alicia McCoy and I am the Research and Evaluation Manager at Family Life. Family Life is an independent community organization that provides services to families, children and young people in Melbourne, Australia.

Engaging staff around evaluation can be challenging at the best of times, especially for internal evaluators who need to facilitate interest and motivation long-term. Over the years I have found that a little bit of humor and creativity goes a long way.

Hot Tip: For the most part, don’t take internal evaluation too seriously. The use of humor breaks down barriers between practice and evaluation. Using funny videos, cartoons and anecdotes during presentations is an effective way of getting your evaluation message across and assisting staff to understand and reflect on evaluation in a way that might not have been possible otherwise.

Hot Tip: Disrupt expectations about evaluation being “boring.” Hold fun activities to help build an evaluation culture. For example, we recently held a competition where teams were invited to write a story or statement about how they have used evaluation or evaluative thinking in practice. The initial promotion of the competition was a cryptic poster that appeared around offices stating “Does your Team like a challenge?” This was followed by a fun, anonymous, and slightly ambiguous poem that fuelled the discussion about what was to come. The full details of the competition were finally advertised a few weeks later. There were prizes for the most creative entry, the most informative, and a peer-awarded prize for most popular. It worked because it broke the pattern people expected from evaluation.

Hot Tip: First impressions are everything when it comes to communicating about evaluation internally. Using creative titles and introductions in communication messages about evaluation provides an oft-needed “hook.” Recent online communications that got staff talking include: The blind men and the elephant: a story told to an Australian, by an Indian-born Englishman, in South Africa, and what it might mean for us at Family Life (a parable used to promote upcoming internal program planning and evaluation training); How can we learn from road intersections? (an analogy of a poorly designed traffic light system used to encourage staff to reflect on double-loop learning); and Feedback: Balinese style! (a personal experience of being asked for customer feedback in Bali, shared to encourage staff to think about how they introduce feedback questionnaires to their clients). These communications appealed to people’s curiosity, and staff wanted to read on to find out what each message was about.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to the AEA365 Internal Evaluation (IE) week! I’m Boris Volkov, a Co-Chair of the IE TIG, a Co-Director for Monitoring & Evaluation with the University of Minnesota Clinical and Translational Science Institute, and a faculty member at the UMN School of Public Health. During this week, our colleagues from evaluation units in different organizations will share their tips and lessons learned implementing internal evaluation. External evaluators, stay tuned! You too will find useful things here! Today I would like to talk about expectations and responsibilities as they relate to internal evaluation.

Lesson Learned: Keep mutual expectations clear and open. Ask program managers and other key stakeholders about their expectations for your M&E team. Share your own, explicit expectations for your collaborative work with organization/program staff. Mutually agree on what is important and feasible in your working relationships. Also, solicit regular feedback from the program staff about M&E processes and outcomes.

Lesson Learned: Keep your stakeholder analysis ongoing. The list of stakeholders (including the key ones) may change at any time, which means that priorities for – and perceptions of – your M&E work could change significantly, too. You may hear something like this some day: “The person that authorized this data collection/analysis/reporting is no longer with our organization. We don’t care much about these data any longer, and you evaluators are wasting your time and staff time!” No matter how carefully and skillfully you planned and executed your evaluation activity, its process and results may be rejected or ignored by those who have no buy-in. I would argue that your M&E activity has not been properly planned or executed if you never considered, or if you lost sight of, key stakeholders.

Lesson Learned: Contribute to evaluation’s habituation (integrating and reinforcing the importance of evaluation AND organizational capacity to do and use evaluation). Both openly and subtly, build evaluation capacity in your organization on different levels: organizational, program, and individual. Openly, when the organization embraces the idea of Evaluation Capacity Building, and subtly, when the leadership and/or staff believe that evaluation is the prerogative and responsibility of the evaluators only, as opposed to the idea of M&E as a shared responsibility. Some of you have heard this from your program staff: “It’s not OUR job to evaluate. It’s YOUR (M&E) responsibility and skill to know what, how, and when to measure! Don’t make your problem our problem!” Keep in mind the “personal factor” and look for “evaluation champions” in your organization.

Finally, in dealing with different “forces,” “powers,” and “sides” in your challenging evaluation work, I wish you one of my favorite wishes (from the famed Star Wars): “May the Force be with you!”

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

