AEA365 | A Tip-a-Day by and for Evaluators

Hello, I am Matt Feldmann, the principal researcher and owner of Goshen Education Consulting, Inc. (http://www.gosheneducationconsulting.com) and the chair of the Independent Consulting TIG (IC TIG). My company focuses on educational evaluation and data support for clients in Southern Illinois. I had a great time at the annual conference. While it was exhausting, I also, somewhat unexpectedly, felt refreshed by meeting with all of my colleagues.

This week we are featuring presenters from our Meet the Pros: Intermediate Consulting Skill-Building Self-help Fair conference session. I believe that this annual conference session is our best IC TIG session because it provides immediately useful information, and it is often the gateway for new independent consultants to become involved with our TIG. The session was particularly useful for our attendees because we had eight tables set up for folks to circulate in a “speed-dating” format. The attendees received information quickly, and the session was a great opportunity to network with others.

Lesson Learned: The IC TIG is for more than just independent consultants.

Before I go any further, please note that the Independent Consulting TIG is all about good consulting business practices. Many of our topics are relevant beyond small businesses and have ready application to small evaluation shops such as university centers and institutes, internal evaluation practices, and evaluation departments within larger organizations. Our TIG-oriented posts this week are relevant to any evaluator who would like to learn better business practices.

Rad Resources:

If you are interested in previous blog posts similar to our Meet the Pros topics, consider these past aea365 posts:

Holly Lewandowski on being new to independent consulting

Matthew Von Hendy’s take on searching for RFPs

Judah Viola’s perspective on building capacity to succeed as an independent consultant

My take on focusing on a niche for your practice.

The American Evaluation Association is celebrating Independent Consulting TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Sarah Brewer and I am a performance management specialist for Deloitte Consulting LLP. I help federal leaders design and build their performance management capability to improve their ability to lead, manage and communicate program performance.

Over the past several years, the Office of Management and Budget (OMB) has provided guidance to improve the use of administrative data to provide evidence of program performance and guide data-driven decision making. Administrative data is the data government programs collect through the implementation and management of their program activities. Despite these efforts, many program managers have struggled to turn this data into meaningful insights. As a result, program managers continue to implement their programs without using the administrative data they have available. To make better use of administrative data, program managers should consider investing in three actions: (1) defining meaningful metrics, (2) setting performance targets, and (3) conducting performance reviews.

Lesson Learned: Define meaningful metrics. With so much data capturing the management and implementation of a program, it can be overwhelming for program managers to select which of the many possible metrics are the most meaningful. As a result, program managers need to prioritize metrics that measure the activities they are intentionally trying to change. For example, if a communications campaign is trying to reach the public, it should prioritize metrics that measure its outreach.

Rad Resource: Check out Deloitte’s one-pager for tips on how to identify specific priority areas and focus on measuring and monitoring those metrics.

Lesson Learned: Set performance targets. Administrative data on its own reflects what the program is doing, but not how well the program is doing it. As a result, program managers need to set targets to define performance. For example, if the communications campaign reaches one person, it technically achieves its broad goal of “outreach,” but when the program manager sets a goal of reaching 10,000 people in one month, the metric becomes a performance target.

Hot Tip: Insights are gleaned from administrative data when program managers set expectations on where they want to be and measure whether or not they have reached the performance target.
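
For readers who like to see the arithmetic, here is a minimal sketch (in Python, with entirely hypothetical numbers and function names) of comparing an administrative count against a performance target; it simply illustrates the idea above and is not a prescribed implementation.

    # Minimal sketch with hypothetical values: turn a raw administrative count
    # into a statement of performance against a target.
    def percent_of_target(actual, target):
        """Return the share of the target achieved, as a percentage."""
        return 100.0 * actual / target if target else 0.0

    monthly_reach = 8250     # hypothetical count pulled from administrative data
    monthly_target = 10000   # the target set by the program manager
    pct = percent_of_target(monthly_reach, monthly_target)
    status = "on track" if monthly_reach >= monthly_target else "below target"
    print(f"Outreach: {monthly_reach:,} of {monthly_target:,} ({pct:.0f}%), {status}")

The point is not the code but the shift it represents: the same administrative count becomes evidence of performance only once a target gives it a benchmark.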

Lesson Learned: Conduct performance reviews. Finally, to turn administrative data into insights, program managers should conduct quarterly performance reviews that bring the program manager together with the leadership team to discuss the prioritized metrics and performance against their targets.

Hot Tip: Only through conversation can leaders discuss what is working and what is not working and identify ways they can help improve areas of underperformance and capitalize on leading practices.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Meklit Berhan Hailemeskal, an evaluation specialist with Deloitte Consulting LLP. I work with both US federal agencies and global development agencies to plan, design, and implement program evaluation and performance measurement initiatives. Working fluidly across federal and global health programs, I have learned specific lessons and found helpful resources from one side that can readily be useful for the other.  Today I want to share a couple of useful resources from the global health evaluation space that I believe can be valuable for the federal evaluation space.

My work with federal health programs often involves supporting grantees, in one form or another, in implementing required performance measurement and evaluation activities. The questions I hear most often from grantees are how the data provided to the funding agency are going to be used and how they can be useful to the grantee. While I will not attempt to answer those questions here, I would like to share some examples of learning platforms from the global evaluation space that are used to facilitate evaluation data use.

Lesson Learned: Use of evaluation results requires an intentional and systematic approach to translate evaluation/performance measurement findings into realistic and meaningful programmatic recommendations, and a mechanism to work with program managers to monitor the implementation of those recommendations.

Rad Resource: The Independent Evaluation Group of the World Bank Group maintains a Management Action Record Database to document and monitor post-evaluation action. The database lists the key findings and recommendations that emerge from evaluations and tracks the progress of implementation of these recommendations at the program level. This database serves as a tool to promote and build accountability for the use of evaluation results.

Lesson Learned: From time to time, it is necessary to reflect in a systematic way on what the impact of evaluation/performance measurement actually is – what have we collectively learned from our evaluation/performance measurement efforts and how has that influenced how we work and what we are able to achieve? Having a systematic process and standardized tools to facilitate this reflection helps hold evaluators and program teams accountable for incorporating evaluation findings and recommendations into program planning and implementation.

Rad Resource: Better Evaluation recently published Evaluations That Make a Difference – a collection of stories from eight countries about how evaluation results (and processes) have been used to influence change within organizations and the lives of people. These stories are great illustrative examples of the meaningful difference that evaluation can bring about when there is intentional and strategic reflection on evaluation results.

While the resources presented are from a global context, I have found that their intent and use continue to inspire and influence domestic evaluations.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Jonathan Pearson, a senior manager in Deloitte Consulting LLP’s government performance practice. I support government health programs overseas and right here at home to develop and implement program performance systems.

I’m here to tell you that government programs should be more “selfish.”  Yes, “selfish.”  At least when it comes to the design of their performance frameworks. They put in the work to create frameworks and systems to gather and report data, but rarely do those performance frameworks give back equally. Let’s just say it’s not a symbiotic relationship.

After all, what’s the purpose of government performance systems? To check the box? To be compliant with GPRAMA? Compliance is obviously important, but it’s also a pretty low bar given the exciting things performance systems can achieve for government. So why not dream big? Why shouldn’t government expect more of its performance frameworks? Well, it should!

Hot Tip #1: Think about who determines your funding and what they are expecting from your program. What’s the lifeblood of government programs?  Sustainable funding is! Those that hold the purse strings want to know about the impact their investments are making.  So why not incorporate the information needs of funders into government performance systems in the first place? Do some research into congressional inquiries about your programs.  What about senior agency leadership? What do they want to know? Use your performance systems to brag about your programs to specific stakeholders that influence your financing. And give them something to tweet, not only the 200-page evaluation report.

Rad Resource: Search congress.gov or gao.gov to find inquiries and reports about your program.

Hot Tip #2: Empower your programs. What’s just about as important as sustainable funding?  Providing high-quality services! So why not design performance metrics that give program leaders the information they need to measure and improve the performance of their programs? Seems obvious, right? Except sometimes it’s not.  Metrics should identify positive (and negative) outliers and measure achievements across the logic model.  Interview your program managers to see what their critical business questions are and convert their responses into measures.  Empower the program managers!

Hot Tip #3: Avoid confusing implementers on the ground with irrelevant data clutter. You can develop spectacular technical guidance (and you should), but what really sends a clear message about program priorities are performance measures.  Implementers at the state and local level interpret measures as the priority activities for the program. Because programs wouldn’t collect performance measures on something that wasn’t a program priority, right? Right. So let’s use measures to clearly communicate program priorities to implementers and not distract them with irrelevant data clutter.

Voila!

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi! I am Jenica Reed, an evaluation specialist with Deloitte Consulting LLP. While working remotely with team members and clients, I’ve learned that communication, interpersonal skills, and other so-called “soft skills” have each played a role in the success of our work. The importance of these skills is often left out of trainings. If you take the time to incorporate these skills into your work, you might be amazed by how much richer your interactions become and how much the quality of your data and the depth of your understanding improve.

Lesson Learned: Interpersonal skills and building rapport matter. Meeting in person, or taking the time to get to know primary contacts at the start of an engagement, helps build rapport that lasts throughout the project. Building these relationships and learning personalities and mannerisms can aid communication and cooperation throughout the rest of the project.

Building rapport early on can:

  • Set the stage for honest discourse, trust and credibility
  • Provide a more careful picture of the context and players
  • Enable access to people and information
  • Create an open atmosphere for questions or concerns, and
  • Alleviate concerns of being tested or judged

Hot Tip: I have found that simple questions, such as a contact’s preferred method of communication (phone, email, even text) and preferred time of day for meetings, can go a long way toward improving communications.

Lesson Learned: Credibility can be built through skill and rapport. Push-back often comes from somewhere. Is it a lack of trust in your skills? Is the evaluation seen as just a requirement or, worse, an audit? Are there unknown pressures or implications from the findings weighing on their minds?

Some considerations:

  • Identify pressures and stressors they face
  • Discuss “what’s in it for them?” and gather data that is most relevant and able to be fully utilized
  • Identify leading ways to report data that will meet their circumstances and resonate with decision-makers
  • Incorporate contingency planning into the design and data collection—is there other information you may want to know about negative findings that may be uncovered? What could be explored further?

Hot Tip: Realize that valuable information isn’t only provided in formal meetings. Nervous jokes about “I wonder what – will say about this” often have real meaning and may need to be considered in design and data collection decisions. Maybe this is a new stakeholder to include, a potential critic of findings, an unexpected decision-maker. Addressing these issues can improve trust in the process and ultimately the conclusions drawn and recommendations made. There is often increased openness to receiving negative or mixed findings when credibility has been established and rapport built.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Patrick Koeppl, cultural anthropologist, mixed methods scientist, Halloween enthusiast, and Managing Director at Deloitte Consulting LLP. Throughout my career, I have found mixed methods are often the leading way to conduct broad evaluations of complex systems and situations. Qualitative approaches like in-depth interviews, focus groups, participant observation, policy reviews, and many others have a place in developing understanding. Determining the validity and reliability of qualitative data collected via mixed methods poses both challenges and opportunities for authentic understanding of complex systems and phenomena.

Lesson Learned: The science of numbers, statistics, randomized samples and double-blind studies may indeed be described as “hard,” but qualitative approaches are not “soft.” Rather, they are “difficult.”

Practitioners of the “soft sciences” often face criticisms that their endeavors are not scientific. Nay-sayers may claim that qualitative research is somehow illegitimate—and too often anthropologists, sociologists and others hide in the dark, brooding corners of the application of their craft, frustrated that their methods, approaches and findings may not be taken seriously by the “real scientists” who frame the discussion. Qualitative evaluators fall into this trap at their own peril—there is nothing inherently unscientific about qualitative methods and the findings and inferences drawn from qualitative data.

Hot Tip: It is the practitioner, the scientist, who should bring rigor and science to qualitative methods. Set up your approach with rigor by asking yourself:

  • Are the evaluation questions clear?
  • Is the evaluation design congruent with the evaluation questions?
  • How well do findings show meaningful parallelism across data sources?
  • Did coding checks show agreement across interviewers and coders? (One way to quantify such agreement is sketched just after this list.)
  • Do the conclusions ring true, make sense, and seem convincing to the reader?
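
As a modest illustration of the coding-check question above, the sketch below (in Python, with hypothetical codes and names) computes Cohen’s kappa, one common statistic for agreement between two coders; dedicated qualitative analysis tools offer more complete agreement checks, so treat this only as a pocket illustration of the idea.

    # Illustrative sketch (hypothetical codes): Cohen's kappa for two coders
    # applying nominal codes to the same set of excerpts.
    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """kappa = (observed agreement - chance agreement) / (1 - chance agreement)"""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        counts_a, counts_b = Counter(coder_a), Counter(coder_b)
        expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical codes assigned by two coders to ten interview excerpts
    coder_1 = ["barrier", "facilitator", "barrier", "context", "barrier",
               "facilitator", "context", "barrier", "facilitator", "barrier"]
    coder_2 = ["barrier", "facilitator", "context", "context", "barrier",
               "facilitator", "context", "barrier", "barrier", "barrier"]
    print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.68 here

Values near 1 indicate strong agreement; values near 0 suggest the codebook or coder training needs another pass.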

Lesson Learned: Qualitative data are the source of well-grounded, richly descriptive insights and explanations of complex events and occurrences in local contexts. They often lead to serendipitous findings and launch new theoretical integrations. When reached properly, findings from qualitative data have a quality of authenticity and undeniability (what Stephen Colbert calls “truthiness”).

Hot Tip: Establish scientific rigor to determine reliability and validity in the following ways:

  • Use computer-assisted qualitative data analysis tools such as ATLAS.ti or NVivo
  • Develop a codebook and data collection protocols to improve consistency and dependability
  • Engage in triangulation with complementary methods and data sources to draw converging conclusions

Finally, putting qualitative results into the context of a story or narrative that conveys a concrete, vivid, and meaningful finding is convincing and compelling to evaluators, policy makers, and practitioners. Such questions and tools support the scientific use of qualitative data collection and analysis in the quest for “useful” evaluation.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Jenn Ballentine and I am the President of Highland Nonprofit Consulting, LLC, an independent evaluation consulting firm based in Atlanta, GA.

So you’re thinking about becoming an independent consultant in Atlanta but you’re not sure how to get started. Don’t fear, help is near! The Atlanta Evaluators Consultants Network (AECN) is a newly formed group of local evaluation consultants with varying backgrounds and areas of expertise. The network meets regularly to share and discuss evaluation and business-related strategies, tips, and ideas, and to determine how we can best collectively address the major challenges and issues facing the nonprofit and grantmaking communities here in Atlanta.

Lesson Learned: Atlanta is home to many national, regional, and local nonprofits, prominent foundations, and many evaluation firms and independent consultants. Being an independent evaluation consultant in Atlanta can be challenging but rewarding. The key to success is developing strong relationships, working collaboratively to identify shared goals and objectives, continually communicating with clients, and maintaining a flexible and responsive approach.

Hot Tips:

  1. Collaborate with other independent evaluators. While working by yourself is nice, collaborating with other independent consultants allows you to gain new knowledge and skills and can increase your ability to secure additional (and often larger) contracts. Attend local AEA meetings, join the AECN at https://atlantaevaluators.squarespace.com/ and connect with other consultants in your area. Be selective about who you work with – remember your name is on the line!
  2. Take advantage of in-person trainings and professional learning opportunities. While it is much easier in today’s digital age to participate in trainings and seminars via webinar, attending in person can yield networking opportunities and new connections that would otherwise not be realized. So get out of your pajamas and out of the house!
  3. Always think about how you can add value. Identify the challenges facing your potential clients and determine how you can help them address and overcome these issues. Share relevant research and information about funding opportunities you think might be a good fit with your clients. Your clients will appreciate it and may even ask you to serve as the evaluator if funded – woohoo!
  4. Don’t reinvent the wheel. Utilize resources such as those at http://www.georgiaerc.org. Develop and/or find existing templates and tools that you can adapt. This not only saves time but allows for greater continuity in your practice. Work smarter, not harder!

Starting and maintaining your own business takes time, patience and perseverance. The AECN and the Atlanta-area Evaluation Association are great resources for learning, networking and collaborating. Independent consulting is not a solitary practice – connect with others and be prepared to reap the benefits – both personally and professionally!

We’re looking forward to October and the Evaluation 2016 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

 

My name is Charmagne Campbell-Patton and I am an independent evaluation consultant. About a year ago, I made the transition from my role as program manager and internal evaluator at an education nonprofit, to an external evaluation consultant. I continued working with my former employer as a client and, in my naiveté, I thought the transition would be relatively straightforward. I figured that since I knew the inner workings of the organization and had strong relationships with most staff members, it would be easy to continue to conduct useful evaluation.

Lesson Learned: My first mistake was failing to recognize and address that, as the program manager, I had been the primary intended user of the evaluation results. When I made the transition to an external consultant, I needed to be much more intentional about designing evaluations that met the needs of the new intended users.

Hot Tip: Be aware of how your position affects use. The personal factor is different in different relationships – internal and external.

Lesson Learned: Process use is different internally and externally. As a staff member, I used to be able to identify opportunities for process use in an ongoing and informal way. As an external consultant, however, I again had to be much more intentional about identifying opportunities and planning for process use.

Hot Tip: External evaluators need to be intentional about seeking opportunities to support evaluative thinking across the organization through more formalized process use.

Cool Trick: One way to engage staff is a reflective practice exercise. Bring staff together to reflect on the question: “What are things you know you should be doing but aren’t?” This question gets people thinking about potential personal barriers to using information. That sets the stage for discussing barriers to evaluation use organizationally. Next identify enabling factors that support and enhance use, and ways to overcome barriers to use.

It’s also worth noting that despite some of the challenges noted above, the transition from internal to external also gave me a new perspective on evaluation use. Once I recognized some of the barriers to use as an external consultant, I was actually able to use my position to promote use more effectively than I did while internal. The added distance gave me some leverage that I lacked as a staff member to call attention to opportunities and challenges to evaluation use across the organization.

Rad Resources: Essentials of Utilization-Focused Evaluation, Michael Quinn Patton, Sage (2012).

Consulting Start-Up and Management, Gail Barrington, Sage (2012).

Using Reflective Practice for Developmental Evaluation, Charmagne Campbell-Patton, AEA365 March 2015.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi, my name is Cheryl Keeton. Throughout my career, I’ve been responsible for program evaluation, review, and success. Most recently, I transitioned to independent consulting to focus my energy and passion on the field of evaluation. I want to share my experience as one way to make the transition.

Lessons Learned: Three years before I decided to become an independent evaluator, I began exploring evaluation from the 50,000-foot view. I attended my first AEA conference to learn about the many ways evaluation is used outside of my field. I wanted to know who was doing evaluation, how the various approaches differed from the way I do things, and how I could use the sessions to self-evaluate my strengths and weaknesses. The sessions were fascinating and the community of AEA members was very friendly and helpful. I made new friends and began to establish a network of support.

Next I attended an AEA Summer Institute for in-depth learning and practice. I knew I had a firm foundation but the summer study program allowed me to build and grow, extending my understanding, and learning techniques that were new to me.

Since those initial steps, I reached out to resources around me to help establish my independent consulting practice. Gail Barrington gave me the best advice on how to begin when I met her at an AEA conference: “do it now while you are still working.” Before making the transition, I read Dr. Barrington’s book, Consulting Start-Up and Management: A Guide for Evaluators and Applied Researchers. I got advice from the career center at the local community college and created a web presence. Dr. Barrington’s book has been the best investment and reference for me as the process unfolds.

I reached out to the evaluation community through AEA and my regional organization, volunteering at the local and national levels and taking advantage of training such as Ann K. Emery’s Data Visualization workshop. Her blog and resources are amazing. I also follow Sheila Robinson, the aea365 Tip-a-Day by and for Evaluators blog, and AEA’s Potent Presentations Initiative (p2i).

I found that knowing what you are good at helps provide direction as you begin. My fields of experience help me narrow the scope so I know which projects to consider and where to place my energy for marketing. Gail Barrington outlines this very well in her book.

My experience transitioning from in-house evaluation to independent evaluation and consulting has confirmed for me that membership in AEA is essential to provide the big picture and grounding in principles, training is imperative to stay current, and connecting with others in the field is invaluable.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I am Matt Feldmann, the principal researcher and owner of Goshen Education Consulting, Inc. and the chair-elect for the Independent Consulting TIG. My company focuses on educational evaluation and data support for clients in Southern Illinois. As an independent consultant, it is imperative to maintain a strong network of clients, colleagues, and advocates with whom you can grow and develop your business. The following are some of the “soft skills” for working with clients that are frequently referenced in the business literature.

Lessons Learned:

Form a business strategy and stick to it. A strong strategy is a marriage of skill, enjoyment, and resources. Jim Collins refers to this disciplined consistency as the “flywheel concept” in his book Good to Great: after you have developed a well-considered business plan, there is an additive effect to unyielding discipline in following that plan. Learn more about the flywheel concept from Jim Collins here.

Recognize you are in a service industry and focus on your client relationships. Harry Beckwith refocuses attention on the service industry in Selling the Invisible. The key point is that evaluation is not a commodity. From our clients’ perspectives, our expertise with complicated evaluation approaches is secondary to our ability to communicate and relate the importance of our work. In his Little Red Book of Selling, Jeffrey Gitomer says that you are more likely to receive your next consulting contract from an existing client than from anyone else.

Practice networking karma. Recognize that your success is connected to the success of your network of clients, colleagues, and advocates. Your selfless work for others will return to you in unexpected ways. In Never Eat Alone, Keith Ferrazzi says you should not keep score in your networking relationships; business development is not a zero-sum game. Because evaluation continues to be in a growth mode, evaluators should reach out to their competitors to learn from one another and to seek ways to develop cooperatively.

Rad Resources:

The following are the author websites for the four books referenced above; they provide an excellent understanding of these and several more “soft skills.” You can probably find these books on audio CD, as mp3s, or as library downloads and listen to them as you travel among your clients.

The American Evaluation Association is celebrating IC TIG Week with our colleagues in the Independent Consulting Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

No tags
