AEA365 | A Tip-a-Day by and for Evaluators

My name is Keiko Kuji-Shikatani, the current chair of the Evaluation Use Topical Interest Group (TIG), one of the original AEA TIGs. The Evaluation Use TIG was born of the interest in evaluation utilization in the 1970s, which extended into theoretical and empirical work on use in the 1980s and 1990s and into a broader conceptualization of use and influence in the 2000s. The Evaluation Use TIG is committed to understanding and enhancing the use of evaluation in a variety of contexts and to maximizing the positive influence of evaluation through both the evaluation process and the results produced.

Program evaluation began with the desire to seek information that can be used to improve the human condition. Use may not be apparent to those outside an organization, since the process of using evaluation requires discussions that can be sensitive in nature. This week’s AEA365 will examine how Evaluation Use TIG members are striving to support various efforts in diverse and complex contexts.

As for me, an internal evaluator for the Ontario Ministry of Education, evaluation use is the norm in what I do every day in pursuit of reaching every student. The world in which our students are growing up, and in which they will be leaders and learners throughout their lifetimes, is a complex and quickly changing place. To support students so they can be the best that they can be, those in the system need to work smarter and use evaluative thinking to guide every facet of improvement efforts.

Rad Resource: Evaluative thinking is systematic, intentional and ongoing attention to expected results. It focuses on how results are achieved, what evidence is needed to inform future actions and how to improve future results. One cannot really discuss Evaluation Use without Michael Quinn Patton – check out (http://www.mcf.org/news/giving-forum/making-evaluation-meaningful).

Our work as internal evaluators involves continually communicating the value of evaluative thinking and guiding developmental evaluation (DE) by modeling the use of evidence to understand more precisely the needs of all students and to monitor and evaluate the progress of improvement efforts.

Hot Tips: Check out how evaluation (http://edu.gov.on.ca/eng/teachers/studentsuccess/CCL_SSE_Report.pdf) is used to inform next steps (https://www.edu.gov.on.ca/eng/teachers/studentsuccess/strategy.html) and what that change can look like (http://edu.gov.on.ca/eng/research/EvidenceOfImprovementStudy.pdf).

In our work, the ongoing involvement of evaluators who are intentionally embedded in program and policy development and implementation teams contributes to modeling evaluative thinking and guiding DE that builds system evaluation capacity. The emphasis is on being a learning organization through evidence-informed, focused improvement planning and implementation.

Hot Tips: Check out how evaluative thinking is embedded in professional learning (http://sim.abel.yorku.ca/) or how evaluative thinking is embedded in improvement planning (http://www.edu.gov.on.ca/eng/policyfunding/memos/september2012/ImprovePlanAssessTool.pdf).

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, my name is Catherine Nameth, and I’m the Education Coordinator for an NSF- and EPA-funded research center at the University of California, Los Angeles. As Education Coordinator, my primary job is not evaluation, so I have to act creatively to integrate evaluation into my work and balance the need for internal evaluation with my other administrative and research responsibilities.

Hot Tip: Be an active learner and an active listener. Get to know your colleagues and their areas of expertise. Go to meetings, listen, and be open to learning about your colleagues and what they do. Your understanding of them and their work will inform your understanding of your organization as well as its people and programs/research. This understanding can then inform how you design surveys and collect evaluation data. People who know you are more likely to respond to your surveys and other “official” evaluation requests, and when they respond, you get the information you need!

Rad Resource: Map it out! Use Community Solutions’ map for “How Traditional Planning and Evaluation Interact.” This map displays how an evaluation logic model (inputs-activities-outputs-outcomes), laid out horizontally, interacts with program planning (goals-objectives-activities-time frame & budget), which is laid out vertically. In using this map, you’ll see that the “activities” of each model intersect, and this cohesive visual aid also serves as a reminder that program planning goals and evaluation outcomes should, and can, inform one another. Use this map to keep yourself focused, which is especially important when your primary responsibilities include many things other than evaluation, and to help you show your organization’s leadership what you are doing and why you are doing it.

Hot Tip: Have an elevator pitch at the ready. When your work includes evaluation but is not entirely about evaluation, you need to be able to explain quickly and concisely what you are evaluating, why you are evaluating it, what information you need, and how your colleagues can help you by providing this needed information . . . which they will be more willing to do if they know you!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Megan Grimaldi, and I work for the Research, Evaluation, and Innovation department of Communities In Schools. This year, I was happy to serve as the program chair for the Internal Evaluation TIG at the AEA conference in Denver.

Lessons Learned: One of my favorite things about the AEA conference is that it brings together evaluators from all different backgrounds and fields to share their experiences. As an internal evaluator, my favorite conversation by far was the one around the many hats that an internal evaluator wears, and how internal evaluators balance their desire to promptly assist their colleagues with their desire to focus on the evaluations for which they are responsible.

The conversation started during the Internal Evaluation TIG meeting. Someone mentioned the short timelines that internal evaluators often face. Because we are internal to our organizations, our colleagues, who may be a desk away, often feel comfortable coming to us and saying, “Can you get this analysis to me by close of business tomorrow?” Not only are deadlines sometimes rushed, there are times when we are asked to do things only tangentially related to our work. For example, many evaluators are fluent in data analysis, while some of our coworkers may not be sure how to use a spreadsheet; their specialties might be in working with constituents in the field, or marketing, or fundraising. With our specialized knowledge, our role as evaluator may quickly evolve into a role as teacher or tech guru.

I brought this topic up in a fantastic presentation, Engaging Stakeholders in Internal Evaluation. Kristina Moster and Erica Cooksey from the Cincinnati Children’s Hospital, and Danielle Marable and Erica Clarke from Massachusetts General Hospital, presented on ways to engage various stakeholders in conducting internal evaluation. They helped me reframe my thinking around urgent or special requests. It’s actually positive that coworkers feel comfortable approaching us. In some organizations, people do not even realize that there is an evaluator to approach! And if the task is not exactly “evaluation,” we can still turn the task into an opportunity to share ideas around evaluative thinking – and lay the groundwork for future evaluation projects. When you are an approachable internal evaluator, you build a rapport with your coworkers, and evaluation projects start to come your way. Communicating the parameters of your role will become easier once you have formed positive working relationships.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Stanley Capela, and I am currently Vice President for Quality Management and Corporate Compliance Officer for HeartShare Human Services of New York, a $140 million multi-service organization.

As with most government-funded organizations, we have to show that we are compliant with regulations and, at the same time, meet certain performance metrics. As a result, I am confronted with how to create a quality assurance system that meets performance metrics and incorporates a quality improvement process. Using graphs, we identified a series of deficiencies and the sites that had poor performance. We then drilled down further, identifying the areas cited as repeat deficiencies by state auditors. With this information, we developed a series of trainings focused on those deficiencies. As a result, we reduced repeat deficiencies in our developmental disabilities programs. The key was to present the data graphically in a way that let us pinpoint the specific sites with the problem and develop a plan to improve performance.

Hot Tip: When setting up an internal monitoring system, we focus on and prioritize the areas where the program must be compliant with government agencies. We select five to ten items and develop performance metrics. For our child welfare programs, we focused on areas such as adoption finalizations, AWOLs, client contacts, service plan timeliness, and length of stay. Next, we set up a dashboard with appropriate charts; convene the leadership team; review reports; identify challenges; develop interventions; and review progress after three months. After reviewing the data, we pinpoint which sites fail to meet targets. Over time, the program sees improvement and realizes that data utilization can lead to positive change.
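
To make the mechanics concrete, here is a minimal sketch of the kind of site-level flagging described above, written in Python with pandas. The data file, column names, and target values are illustrative assumptions for this post, not HeartShare’s actual metrics or system.

```python
# Hypothetical sketch: flag sites that miss performance targets.
# The file name, column names, and targets are illustrative assumptions.
import pandas as pd

# One row per case, with the site it belongs to and 0/1 indicators
# for whether each requirement was met.
records = pd.read_csv("child_welfare_records.csv")

# Illustrative targets (share of cases meeting each requirement).
targets = {
    "service_plan_on_time": 0.90,   # service plans completed on time
    "monthly_contact_made": 0.95,   # required client contacts made
}

# Aggregate each indicator to a site-level rate.
site_rates = records.groupby("site")[list(targets)].mean()

# Keep only the site/metric combinations that fall short of target.
flags = []
for metric, target in targets.items():
    below = site_rates[site_rates[metric] < target]
    for site, rate in below[metric].items():
        flags.append({"site": site, "metric": metric,
                      "rate": round(rate, 3), "target": target})

if flags:
    print(pd.DataFrame(flags).sort_values(["metric", "rate"]))
else:
    print("All sites met their targets this period.")
```

Charted by site, a table like this becomes the dashboard view the leadership team reviews when it pinpoints which sites fail to meet targets.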

Lessons Learned: One major problem with this approach is that when you focus on too many areas, you get bogged down and accomplish little or no improvement. Make sure everyone clearly understands that you are a team and that the evaluator is not out to get anyone. Program directors often focus on placing blame rather than dealing with the problem. The key is having program staff own the data and recognize that there are successes as well as challenges. In other words, perceptions can make a difference in how you approach quality assurance and performance measurement as you create a quality improvement culture. The other major issue is making sure that the facilitator and the individual preparing the data are independent of the program.

Rad Resources: Quality Evaluation Template: How to Develop a Utilization Focused Evaluation System Incorporating QI and QA Systems by Stan Capela.

Council on Accreditation – look at the Performance Quality Improvement (PQI) standard.

The Council on Quality and Leadership and their Personal Outcome Measures (POMs) method.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello evaluation colleagues! We’re Rachel Albert, Vice President of Learning and Impact, and Laura Beals, Director of Evaluation, from Jewish Family and Children’s Service (Boston, MA). Our department is responsible for the internal evaluation of 44 programs, collectively serving over 17,000 people a year.

At JF&CS, we face two challenges in evaluation management. First, we have multiple external and internal stakeholders – including foundations, federal and state grantors, individual donors, agency leadership, program managers, and staff – each of whom has a different perspective on what sorts of data they need. Second, instead of grants dictating the evaluation resources available to each program, our department is funded by overhead. This means it’s up to us to apportion our department’s evaluation resources thoughtfully across all 44 programs for maximum benefit.

Lessons Learned: To meet this challenge, we developed a tool we call TIERS (“Tool for Intra-agency Evaluation Resource Sharing”). TIERS helps us leverage our resources on each program to answer the questions most relevant to its stakeholders.

[Image: the TIERS pyramid]

As you go higher in the pyramid, you are looking for stronger and stronger evidence that your program is achieving its intended impact. The pyramid is intended to be both cumulative and sequential: a program should not go up a tier until it has a robust implementation of the previous tier in place.

 Hot Tips:

  • This is not a race: It’s ok to stop at whatever the right tier is for a given program based on its evaluation needs and staff resources.
  • Higher tiers require more resources from both the internal evaluator and program staff.
  • Do not underestimate the difficulty of establishing even just a rigorous Tier 1 across a large agency!

We presented this tool in a demonstration session at Eval 14; check out the AEA e-library for our slides and handout.

Rad Resources: If you are looking for additional information about resource allocation for evaluation, here are a few places to start:

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello AEA365ers! We are Suzanne Markoe Hayes (Director) and Elaine Donato (Internal Evaluator) from the Evaluation and Research Department at Volunteers of America Greater Los Angeles (VOALA), a large non-profit organization whose mission is to enrich the lives of those in need.

One program we support is VOALA’s largest emergency shelter, located in South Los Angeles, an area known for having the densest homeless population in Los Angeles County. As part of an initiative led by United Way Greater Los Angeles to end chronic homelessness by 2016, VOALA’s shelter joined other homeless service providers in South L.A. to design and implement a Coordinated Entry System (CES). To develop such a system, the participating service providers were required to join forces for the very first time. The collaboration promised to be a challenge, given the long history of homeless service providers in South L.A. operating with scarce resources and competing for the same scraps of funding.

Human service organizations are being asked to collaborate strategically to address social issues, and they must do so with their existing, limited resources. For the majority, this means having no funding for a third-party evaluator and no support from an internal evaluation department. Recognizing these limitations, VOALA contributed its internal evaluation team to support the collective impact of the South L.A. CES collaborative. We implemented a process evaluation to help identify the overarching collaborative goals, map the processes that would occur, and define each organization’s role. As a result, the South L.A. CES team successfully designed a unique system to link chronically homeless individuals in their community with the most appropriate services and housing.

Here are some hot tips for implementing a collaborative process evaluation:

Hot Tip #1: Make clear to all participating organizations that the evaluator is there to assist all agencies, not just the evaluator’s own agency.

Hot Tip #2: Create process maps to help identify each organization’s role in the process. As a key element for continuous quality improvement (CQI), process maps can also be useful in tracking the activities related to achieving desired outcomes.

Hot Tip #3: Create a safe, open environment where team members are allowed to share their innovative ideas on how to better serve the target population and strengthen existing processes.

Hot Tip #4: Produce dashboard reports and share in biweekly meetings to inform decision-making and track team goals and desired outcomes.

Rad Resource: Check out the Center for Urban Community Services for their training on CQI methods, including process maps.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Alicia McCoy and I am the Research and Evaluation Manager at Family Life. Family Life is an independent community organization that provides services to families, children and young people in Melbourne, Australia.

Engaging staff around evaluation can be challenging at the best of times, especially for internal evaluators who need to facilitate interest and motivation long-term. Over the years I have found that a little bit of humor and creativity goes a long way.

Hot Tip: For the most part, don’t take internal evaluation too seriously. The use of humor breaks down barriers between practice and evaluation. Using funny videos, cartoons and anecdotes during presentations is an effective way of getting your evaluation message across and assisting staff to understand and reflect on evaluation in a way that might not have been possible otherwise.

Hot Tip: Disrupt expectations about evaluation being “boring.” Hold fun activities to help build an evaluation culture. For example, we recently held a competition where teams were invited to write a story or statement about how they have used evaluation or evaluative thinking in practice. The initial promotion of the competition was a cryptic poster that appeared around offices stating “Does your Team like a challenge?” This was followed by a fun, anonymous, and slightly ambiguous poem that fuelled the discussion about what was to come. The full details of the competition were finally advertised a few weeks later. There were prizes for the most creative entry, the most informative, and a peer-awarded prize for most popular. It worked because it broke the pattern people expected from evaluation.

Hot Tip: First impressions are everything when it comes to communicating about evaluation internally. Creative titles and introductions in messages about evaluation provide an oft-needed “hook”. Recent online communications we used that got staff talking include: The blind men and the elephant: a story told to an Australian, by an Indian-born Englishman, in South Africa, and what it might mean for us at Family Life (a parable used to promote upcoming internal program planning and evaluation training); How can we learn from road intersections (an analogy of a poorly designed traffic light system, used to encourage staff to reflect on double-loop learning); and Feedback: Balinese style! (a personal experience of being asked for customer feedback in Bali, shared to encourage staff to think about how they introduce feedback questionnaires to their clients). These communications appealed to people’s curiosity, and staff wanted to read on to find out what each message was about.

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Welcome to the AEA365 Internal Evaluation (IE) week! I’m Boris Volkov, a Co-Chair of the IE TIG; I am also a Co-Director for Monitoring & Evaluation with the University of Minnesota Clinical and Translational Science Institute and a faculty member at the UMN School of Public Health. During this week, our colleagues from evaluation units in different organizations will share their tips and lessons learned implementing internal evaluation. External evaluators, stay tuned! You too will find useful things here! Today I would like to talk about expectations and responsibilities as they relate to internal evaluation.

Lessons Learned: Keep mutual expectations clear and open. Ask program managers and other key stakeholders about their expectations for your M&E team. Share your own, explicit expectations for your collaborative work with organization/program staff. Mutually agree on what is important and feasible in your working relationships. Also, solicit regular feedback from the program staff about M&E processes and outcomes.

Lesson Learned: Keep your stakeholder analysis ongoing. The list of stakeholders (including the key ones) may change at any time, which means that priorities for, and perceptions of, your M&E work could change significantly, too. You may someday hear something like this: “The person who authorized this data collection/analysis/reporting is no longer with our organization. We don’t care much about these data any longer, and you evaluators are wasting your time and staff time!” No matter how carefully and skillfully you planned and executed your evaluation activity, its process and results may be rejected or ignored by those who have no buy-in. I would argue that your M&E activity has not been properly planned or executed if you never considered, or if you lost sight of, key stakeholders.

Lesson Learned: Contribute to evaluation’s habituation (integrating and reinforcing both the importance of evaluation and the organizational capacity to do and use evaluation). Both openly and subtly, build evaluation capacity in your organization at different levels: organizational, program, and individual. Work openly when the organization embraces the idea of Evaluation Capacity Building, and subtly when the leadership and/or staff believe that evaluation is the prerogative and responsibility of the evaluators only, rather than a shared responsibility. Some of you have heard this from your program staff: “It’s not OUR job to evaluate. It’s YOUR (M&E) responsibility and skill to know what, how, and when to measure! Don’t make your problem our problem!” Keep in mind the “personal factor” and look for “evaluation champions” in your organization.

Finally, in dealing with different “forces,” “powers,” and “sides” in your challenging evaluation work, I wish you one of my favorite wishes (from the famed Star Wars): “May the Force be with you!”

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi, I’m Joe Bauer, the Director of Survey Research & Evaluation in the Statistics & Evaluation Center (SEC) at the American Cancer Society (ACS) in Atlanta, Georgia. I have been working as an internal evaluator at the ACS for almost nine years, in a very challenging, but very rewarding position.

Lesson Learned: Evaluation is always political, and you must be aware of the cultural dynamics that are part of every environment. I came to the American Cancer Society to have an impact at a national level. I had envisioned evaluation (and still do) as a means to systematically improve programs and, in turn, the lives of cancer patients.

In the beginning, many were not ‘believers’ in evaluation. The perception was that evaluation could only lead to finding things that were wrong or not working, and that this might lead to politically problematic situations. We needed to navigate the cultural minefields, even as we were acting as change agents. Over time, our Center worked hard to build a sense of trust. As internal evaluators, we must always be aware that we are being judged on how nicely we play in the sandbox, even as we strive and push for higher quality, better data, and better study designs. Evaluators ask the tough questions, which at times causes ‘friction’. However, an internal evaluator must be comfortable and confident taking on the role of asking the tough questions, which can be lonely.

Hot Tips: As an internal evaluator, you must be willing to ‘stay the course,’ ‘weather the storms,’ and never compromise on your values. This is crucially important – because you always need to do the right thing. It does not mean you will win all of these ‘battles’; ultimately, you can be, and are, overruled on many issues. However, you must keep your integrity – because that is something you need to own throughout your career. That is also what builds trust and credibility.

Rad Resources: The American Evaluation Association’s Guiding Principles for Evaluators (http://www.eval.org/p/cm/ld/fid=51), which are intended to guide the professional practice of evaluators and to inform evaluation clients and the general public about the principles they can expect professional evaluators to uphold.

The Official Dilbert Website with Scott Adams (http://www.dilbert.com/), where there are many ‘real world’ examples of the cultural dynamics of the workplace and the often absurd scenarios that play themselves out. As an evaluator, you will not only need a good skill set and hard work to keep your values and integrity, you will also need a sense of humor and a sense of perspective.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi – I’m Erik Mason, the Curator of Research at the Longmont Museum and Cultural Center, located in Longmont, Colorado, about 35 miles northwest of Downtown Denver. I am not an evaluator – in fact, the word “evaluation” does not appear in my job description.  I have come to believe, however, that evaluation is critical to the success of my work as a museum curator.  Much of that realization is the result of my participation in the Denver Evaluation Network (DEN), a collection of 15 museums across the Denver metro area that have made a commitment to learn about, and do, evaluation on a regular basis.

Only two members of DEN have full-time evaluators on staff. The rest of us are a mix of educators, exhibit developers, administrators, and curators. Our daily work is filled with school tours, fundraising, label writing, and all the other stuff that goes into making museums fun and interesting places to visit. As a result, evaluation can get short shrift. We fall back on anecdote and what we think we know.

Over the last two years, the members of DEN have been presenting at museum conferences about the work we are doing to bring evaluation to a broader community.  It has been fascinating watching people who always thought evaluation was something scary and hard, and required a large supply of clipboards, realize that it can be done in many ways.

Within my workplace, I have been pleasantly surprised as we have begun incorporating evaluation into more and more of what we do. Data gathered from iPad surveys provides a baseline understanding of our audience demographics and allows us to compare the changes in our audience as our special exhibits change. Evaluation is now a part of the development of all our exhibits. In the course of doing evaluation, I’ve seen attitudes change from “Why are we wasting our time doing this?” to “When are we doing another evaluation?”
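
For readers curious about the mechanics, a comparison like that takes only a few lines of code. The sketch below, in Python with pandas, tabulates the share of visitors in each age group by special exhibit; the file and column names are assumptions for illustration, not the Longmont Museum’s actual survey setup.

```python
# Hypothetical sketch: compare audience demographics across special exhibits.
# The CSV file and the "exhibit" / "age_group" columns are assumptions.
import pandas as pd

surveys = pd.read_csv("ipad_survey_responses.csv")

# Percentage of respondents in each age group, per exhibit period.
profile = (
    pd.crosstab(surveys["exhibit"], surveys["age_group"], normalize="index")
    .mul(100)
    .round(1)
)
print(profile)  # rows: exhibits, columns: age groups, values: % of visitors
```

Re-running the same tabulation as special exhibits change shows how the audience shifts against the baseline.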

Rad Resource: Check out this video of testimonials from members of DEN.

Hot Tip for Evaluation 2014 Attendees: Denver really is the “Mile High City” and you can take home proof of this fact with a short jaunt and a camera. A free shuttle and brief walk away from the Colorado Convention Center is the Colorado State Capitol building, a Neoclassical building that sits at the eastern end of Denver’s Civic Center Park. The Capitol building sits exactly one mile above sea level, and the official marker can be found on the 13th step. The Capitol building is emerging from a multi-year restoration effort with a shiny new coat of gold on its dome, in honor of Colorado’s mining heritage. Free tours of the Colorado Capitol Building are offered Monday-Friday.

We’re looking forward to October and the Evaluation 2014 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.
