AEA365 | A Tip-a-Day by and for Evaluators


Hello loyal readers! I'm Sheila B. Robinson, aea365's Lead Curator and sometimes Saturday contributor, with a few tips on creating handouts for your next presentation (#Eval17 perhaps?).

Repeat after me: Slides are not handouts! Slides are NOT handouts! I know, I know…it's just so easy to print out your slides and give them to workshop participants, team members, or meeting attendees. The trouble is that when a presenter does this, one of two things tends to happen:

  1. The slides are loaded with text (because the presenter wants participants to go home with some key points to review later, a noble intent) and that compromises the effectiveness and success of the presentation. The thing is, according to Nancy Duarte, "An audience can't listen to your presentation and read detailed, text-heavy slides at the same time (not without missing key parts of your message, anyway)."
  2. The slides are well designed with very little text and instead feature relevant graphics and images such that the slides themselves make little sense when separated from the presenter and presentation.

Condition #1 leaves participants with a set of key points that could have been distributed as a handout with no need for the presentation, while condition #2 leaves participants with a potentially great presentation experience but no easy way to review or remember key points (unless they were taking their own notes).

Hot Tip: Creating a separate presentation handout mitigates both of the above conditions. Here's one caveat before we continue: Not all presentations require a handout. In fact, not all presentations even require slides! And, it's certainly feasible to have a "slideless" presentation that does include a handout. The point is to be intentional about whatever resources accompany a presentation. Our Potent Presentations Initiative (p2i) Messaging tools can help with that aspect of presentation planning.

Rad Resource: So, without further ado…The newest tool in the p2i toolbox is our Guidelines for Handouts, now available on our Presentations Tools and Guidelines page. Use this tool to gain insight and perspective into WHY we use handouts, HOW to create effective handouts, WHAT should be included in a handout, and WHEN to distribute handouts – before, during, or after a presentation. Guidelines for Handouts includes an example of what a presentation handout could look like, and also features loads of Insider Tips and links to additional content.

So, let’s make a deal. I promise to deliver an idea-packed handouts tool, and you agree to stop printing your slides, OK?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Rodney Hopson, former AEA President and current Program Director (with Brandi Gilbert) of the AEA Graduate Education Diversity Internship (GEDI) Program, which is currently housed in the College of Education and Human Development at George Mason University, where I am a faculty member in the education policy program.

I am excited to welcome colleagues this fall to Evaluation 2017 in Washington, DC, for at least two reasons:

1) The conference theme, From Learning to Action, could not come at a more propitious time in our nation and in our world. The four subthemes (learning to enhance evaluation practices, learning what works and why, learning from others, and learning about evaluation users and uses) imply that we evaluators ought to make good use of the lessons we learn in our practice, discipline, and profession. We have plenty of examples in our global and local communities that reveal how intolerance, hate, and bitterness continue to rip at the fibers of our democratic possibilities of equity and social cohesion. If anything, the events of Charlottesville in early August point to how far we have to go. The conference is a call to action in the complex ecologies of our practice where relationships matter; we have a responsibility to act and to find relevance in solving the wicked problems in our practice.

Hot Tip: Find a way to move from learning to action while attending Evaluation 2017. For instance, our local affiliate has ways to become active through Evaluation without Borders, where you can lend a hand to local community-based agencies. Or, find a way to visit your local representative through EvalAction.

2) Washington, DC is a great city to see, rich with ethnically and linguistically diverse neighborhoods and communities with yummy food to eat, places to visit, and people to see!

Just last week, my wife Deborah and I strolled east of the River in the Anacostia Historic District where we visited the Anacostia Community Museum and Cedar Hill, home of the famous abolitionist Frederick Douglass.  African-Americans have an inspiring and proud history in the city that dates back as early as 1800, when they made up 25% of the population according to documents found in publications about the African American Heritage Trail.

Hot Tip: See how many locations you can find on the heritage trail and make a half day of it by visiting several before you leave the city:

  • Take in a show at the Howard Theater,
  • Visit the African American Civil War Memorial and Museum,
  • Check out the city’s first independent black Episcopal church, St. Luke’s, under the leadership of Alexander Crummell, noted missionary, intellectual, and clergyman, and
  • Check out the Phyllis Wheatley YWCA, or even sites in Georgetown, the city’s oldest neighborhood.

Come to Evaluation 2017 ready to learn! Get nourished on what the city has to offer and get ready to act as you leave!

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

·

Hello everyone! Yvonne M. Watson here. I'm a long-time member (almost 15 years) of AEA and a doctoral student at The George Washington University's Trachtenberg School of Public Policy and Public Administration. I'd like to share a few brief lessons learned on the topic of Evaluation Users and Evaluation Use, one of four focus areas for the Evaluation 2017 conference theme, From Learning to Action.

Perhaps the greatest thrill of victory and agony of defeat for any evaluator is the use of the evaluation report and findings. Many of the evaluation field's pioneers, thought leaders, and emerging practitioners have written extensively on this topic. Understanding the many facets of use, including evaluation users, uses, barriers, and the facilitation of greater use, can help evaluators strategically invest their time and resources to ensure the evaluation is designed with the intended use and user in mind. Here are a few things to consider.

Lessons Learned:

Know Your Audience. Understanding the intended user is critical. Evaluation users can include managers and staff responsible for managing and administering programs in federal, state, and local government as well as in nonprofit and for-profit organizations. Funders, academic researchers, Congressional members and staff, policy makers, citizens groups, and other evaluators are also intended users of evaluations.

Understand How the Evaluation Will Be Used. Carol Weiss offered the field four categories of use for evaluation findings. Instrumental use involves using evaluation findings in decision making to influence a specific program or a policy more broadly. Evaluation findings that generate new ideas and concepts and promote and foster learning about the program reflect conceptual/enlightenment use. External influence on other institutions and organizations involves the use of evaluation results by entities outside of the organization that commissioned the evaluation. Evaluation findings used symbolically or politically to "justify preexisting preferences and actions" reflect political use. The use of evaluation findings for accountability, monitoring, and development was introduced by Michael Quinn Patton.

Explore the Potential Barriers to Use. Barriers that might limit the use of an evaluation include timeliness (results not available when needed to inform decision making), insufficient resources (a lack of resources to implement recommendations), and the absence of a learning culture (no culture of continuous learning and program improvement).

Consider Strategies to Facilitate Use. Design your evaluation with the intended use and user in mind. Michael Quinn Patton introduced the field to Utilization-Focused Evaluation, which emphasizes evaluation design that facilitates use by the intended users. Lastly, clearly communicate evaluation results. Recently, data visualization has emerged as a strategy to promote evaluation use by communicating research and findings in a way that helps evaluation users understand them and make decisions.

Rad Resources:

Have We Learned Anything New About the Use of Evaluation?, Carol Weiss

Utilization-Focused Evaluation, Michael Quinn Patton

AEA Data-Visualization and Reporting Topical Interest Group

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

·

Hi, I am Teresa Derrick-Mills, a researcher and evaluator at the Urban Institute in DC. I love learning and researching at the intersections of policy and practice, research and translation to practice, and issues or problems that invite a multi-disciplinary or multi-policy area approach. Today, I am here to spark your interest in the Evaluation 2017 Learning from Others Conference Track.

Given the interdisciplinary nature of evaluation, you might be wondering, who is an "other" that I might learn from? Where can I or should I look to expand my evaluation toolbox to generate appropriate evidence in this complex and dynamic world? In this context, I see the "other" through at least five dimensions:

  1. Other researchers who don’t identify as evaluators but whose work we can learn from (see conference tip below for some examples)
  2. Other individuals who could be both the subjects of and participants in our research
  3. Other evaluators whose methodological expertise differs from ours
  4. Other evaluators whose cultures differ from ours
  5. Other evaluators whose evaluation environments differ from ours

Hot Tip – For the Conference:

The Presidential Strand includes some sessions that have been very intentionally crafted to expand our learning-from-others toolkit. See session 3517 to learn from feminism, session 2105 to learn from game theory, session 3260 to learn from implementation science, and session 1686 to learn from each other the ways that race and class influence our evaluation designs and findings.

Hot Tip – for the local DC area:

One great place to learn from others is the National Geographic Museum, my personal favorite. You can take the Metro Red Line down to Farragut North. It isn't one of the free museums, but the vivid, wall-size pictures offer new perspectives on the world (and how to study it).

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

·

Greetings! We are Estelle Raimondo, an Evaluation Specialist at the World Bank, and Karol Olejniczak, an Associate Professor at the University of Warsaw. Like most of you, we are evaluation nerds, and we can't wait to join thousands of you in DC in November to learn about "what works and why." We had the opportunity to work with Prof. Newcomer on conceptualizing this year's conference, so let us tell you how this particular strand came about and give you three "hot tips" for how to join the conversation.

Lessons learned: The theme of "learning what works and why" is primarily a call for collective reflection on what we may call the "learning paradox" that Aristotle eloquently articulated in his time: "the more you know, the more you know you don't know." For decades, the evaluation community in its wide diversity has gathered evidence about the effectiveness of a vast array of interventions across sectors and contexts. The conference is the perfect arena to deliberate on (1) what we know now that we didn't know, let's say, 10 years ago; (2) missed opportunities for cumulative knowledge; and (3) how we can convey this evidence to policy makers and practitioners.

Hot Tip #1: Even if you are not a methods geek like us, you may want to attend a session on the latest thinking on causal inference. Whether it is through advancement in systems thinking, experiments, or qualitative methods of causal inference, many of us are pushing methodological boundaries to crack the causal nut. For instance, Estelle has used process tracing to assess the impact of engaging citizens on the quality of public services in developing countries. If you are interested, you can join us in November for a demonstration session on the topic.

Rad Resource: A detailed guide on using QCA in evaluations

Hot tip #2: Attend a session that is not strictly in your field. If you are an education expert, why not join a session on what we have learned about effective service delivery in transportation or peace-building?  That way we can test the generalizability of each other’s work by simply talking to one another. We bet you that given the common underlying behavioral and social mechanisms that affect interventions’ successes and failures, we have a lot to learn from each other.

Rad Resource: a professional network working on this

Hot tip #3: Learning what works and why is not useful if it doesn't reach the ears of practitioners and decision-makers from different communities. Try to participate in a session that ponders this issue, or learn from other fields, for instance on how to use games to test proposals for new regulations in a safe environment.

Rad Resource: an insightful article on the topic

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

·

This is Susan Tucker, Treasurer of AEA and an independent evaluator (Evaluation & Development Associates LLC). As a member of AEA’s Competencies Task Force, I welcome you all to learn more about the task force’s progress at Evaluation 2017.

Our 16-member task force, led by Jean King, was charged by AEA's Board in 2015 with exploring and refining a unified set of evaluator professional competencies as a next step in AEA's continuing commitment to our Ends Goals. Since then, we have been actively soliciting AEA member dialogue about appropriate next steps in the professionalization of evaluators. The ultimate goal is to submit this set of professional competencies for our members to ratify.

What the Competency Task Force has been doing:  Since 2015 we have:

  • completed an international crosswalk of competencies across 20 countries and presented at Evaluation 2015
  • created draft competencies and posted five domains on AEA’s website in February 2016 based on Evaluation 2015 listening post results
  • held another listening post at Evaluation 2016 and the 2016 Summer Institute, and maintained a standing invitation for anyone who visits AEA’s homepage to email us at competencies@eval.org
  • conducted 15 virtual focus groups in spring 2016 with members from over 30 TIGs and 5 local affiliates, and consulted with leaders from other VOPEs, regarding:
    1. adequacy of the proposed domains
    2. sub-domain item-level feedback
    3. missing domains and items
    4. uses for the competencies
    5. concerns and opportunities
  • shared results of the focus groups at AEA’s 2016 annual conference. Areas suggested for further attention included more attention to use and influence, client capacity building, advocacy, teamwork, defining competency and how we are “different” from related fields, the role of international members, and clarifying next steps such as certification
  • hosted a weeklong aea365 in December 2016 to share the latest five competency domains and solicit additional input
  • made revisions to the competency domains in early spring 2017 based on that feedback, then designed an online survey for the whole AEA membership to determine whether these competencies are the right ones for AEA. The competency survey was piloted in July 2017 in preparation for a September launch to the membership.

What’s next:  Survey results will be analyzed and shared with the board and general membership at Evaluation 2017. Task force members concur that it will be important to continue the work by creating professional development materials to support evaluators, wherever they work.

Hot Tip:  Consistent with the conference theme of “learning to enhance evaluation practices,” our latest learnings will be shared at Evaluation 2017 via three sessions which we hope you will attend.

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

·

We are Kathryn Newcomer, Director of the Trachtenberg School of Public Policy and Public Administration at The George Washington University (GWU) and 2017 AEA President, and David Bernstein, CEO of DJB Evaluation Consulting and 2017 AEA Conference Program Co-chair. We have been exploring the relationship between evaluation and learning for over 35 years. David was part of Kathy's first evaluation cohort at GWU, Kathy was David's doctoral dissertation advisor, and we have been frequent collaborators, co-authors, and AEA co-presenters.

Evaluation depends on learning from each other and on putting theory into action. Each learning opportunity presents unique challenges, and together, as a community, at the 2017 AEA Conference in Washington, DC, from November 6 to 11, we can move beyond these challenges to find solutions that improve our programs and create greater good for society as a whole.

The four conference subthemes are a way to explore the full lifecycle of an evaluation: learning to enhance evaluation practices, learning what works and why, learning from others (other evaluations, other professions), and learning about evaluation users and uses. Over the next four days, evaluators who have assisted Kathy with planning the 2017 AEA conference will reflect on each of the subthemes and provide tips to get the most out of the conference and our host city of Washington, DC. Some blogs will include inside knowledge from members of Washington Evaluators (WE), the local DC area affiliate. We are both enthusiastic Past-Presidents of WE.

Rad Resources: The AEA Conference Program is online. You can see a color-coded conference overview at the bottom of the page. The top of the page has a very useful search feature. You can search the conference program by session title, track (Topical Interest Group themes and cross-cutting topics, including Presidential Strand sessions), time slot, presenter, and session type. Be sure to look for the keynote sessions and keynote discussions featuring terrific speakers reflecting on different aspects of the conference theme.

Hot Tip: There are some great places to visit in DC before and after the conference. Two of our favorites provide an opportunity to "learn from the animals" and to reflect on what you've learned in a beautiful environment. David's daughters are from China, and when they were younger they enjoyed the Panda statue right outside of the Marriott Wardman Park, the 2017 Conference Headquarters, before a visit to see real pandas at the Smithsonian National Zoo. The Zoo is a short uphill half-mile walk from the Marriott Wardman Park. Want a chance to quietly reflect on what you learned at the AEA Conference? Check out the Hillwood Estate, Museum, and Gardens, a short ride or two-mile walk from the Marriott. Reflect on what you learned at the conference, and put your evaluation learning into action by sharing what you learned with others.

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

·

Greetings, everyone at AEA!

I am Natalie DeHart, the Programs Coordinator for AEA. Since joining the team in May, I have been in learning mode, soaking up as much as I can about the profession of evaluation and about our members. I'm passionate about bringing thoughtful programming to the organization that adds value to our members' professional lives, and I am thrilled that I get to do that every day at AEA. Today, I'm here to talk about the biggest thing on all of our minds these days: Evaluation 2017, and I could not be more excited!

One of my favorite aspects of my role is teamwork, and there is no bigger team project than an annual conference. One of my responsibilities is working on the Presidential Strand track for Evaluation 2017. I am grateful for the opportunity to help put this part of the program together, and I am excited to see it in action in two short months. Our President, Kathy Newcomer, and her group of volunteers with whom I have worked have been instrumental in bringing our theme "From Learning to Action" to life. Coordinating their efforts with my staff team has been a thrilling way to dive right in, and I already have a few ideas in the works for next year.

Another wonderful thing about Evaluation 2017 is that I will finally be able to meet with members face-to-face. I've been getting to know a few of you over the past few months, but there is nothing quite like making an in-person connection.

Feel free to contact me anytime at info@eval.org, and please stop by the information desk at Evaluation 2017 so we can chat! I look forward to seeing you there.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Hello! I'm Martha A. Brown, President of RJAE Consulting. Lately, an endless stream of conference speakers, blog writers, Indigenous evaluators, and authors has confronted and challenged my "programming" as an evaluator. Traditional evaluation methods place tremendous emphasis on research methods and evaluation theory, but not necessarily on the people we work with and for. At the 2017 Canadian Evaluation Society conference, Nora Roberts told me that the very tools of our profession continue to oppress and silence others. Her statement sent me reeling. Gail Barrington spoke about the value of reflecting upon our work and our methods so we can improve our craft and learn more about ourselves. Indigenous speakers at multiple conferences reminded me that we are all interconnected and that our relationships with ourselves and each other are the most important things in life. All of this can be summed up in one word: love.

Additionally, I research, practice and teach restorative justice, which is grounded in Indigenous values such as interconnectedness, openness, honesty, vulnerability, and respect. I bring these values and restorative practices to my work. However, too many times I have felt like I am “breaking all the rules” that I learned in graduate school as I infuse love into my work and the people I work with.

When I read the invitation to submit a blog on evaluation and labor, the first thing that came to mind was to write about putting love and relationships at the center of our work. What would our work look like if each of us took time at the outset and throughout every evaluation to build trusting relationships with our “stakeholders” and “participants”? Do those of us who are products of Western culture even know how to do this? In a society that values goals, outcomes, and return-on-investment above all else, how can we return to the teachings and the ways of our ancestors and put our relationships at the center of everything we do? We knew this once, but have forgotten.

In AEA, many evaluators are truly committed to changing the world, to improving people's lives, and to creating more just and equitable ways of doing what we do. But we don't always know how to live out our goals. That requires us to critically reflect upon what we were taught and how we do our work, and to ask who is being inadvertently silenced, harmed, or oppressed during an evaluation – or in an evaluation classroom. It requires us to love.

Love requires us to engage our whole selves – mind, body, heart and spirit – in our work. We can learn how to do this by studying Indigenous values, practices, and ways of being. I am so grateful to those who helped me wake up, including our own Nicky Bowman.

Rad Resources:

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring the WORK of evaluation. The contributions this week are tributes to the behind-the-scenes and often underappreciated work evaluators do. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Hello community of evaluators. I'm Salima Bhimani, Founder and Director of Diversity, Equity and Inclusion at Relational. Relational offers research, evaluation, consulting, and education services to organizations, companies, and institutions. My focus as an evaluator and researcher over the last two decades has been on addressing bias, discrimination, and barriers faced by marginalized people in communities and institutions. Here I share the centrality of getting underneath the language of diversity so that evaluations can reveal how social inequities are designed into institutions and operate within them.

Case for consideration: Recently I conducted an evaluation for a higher education institution. They wanted to understand how to make their curriculum and pedagogies more accessible to the linguistically, racially, ethnically, economically, and gender-diverse constituencies they serve in more than 10 countries. These constituencies all fall under the same religious community. The institution already had a conceptualization of accessibility. Their understanding foregrounded that everyone should be able to obtain their resources and relate to them. It was clear to me that their approach to accessibility was intimately connected to how they thought about what diversity means. In this circumstance, their benign conception of diversity was obscuring the connection between the social subjectivities of their constituencies and their relative power, voice, and positioning in relation to their institution and the broader community. That is, there was no analysis of the historical and contemporary dynamics of unequal relations between their constituencies that were implicitly and explicitly defining the curricular content and pedagogical approaches. What was required was an awareness of how their approaches and content were already shaped for those unquestionably thought to be the norm.

Hot Tip: Break open taken-for-granted notions of diversity

  • A benign concept of diversity flattens difference. It undermines and diminishes the histories and cultural forces that design inequities within institutions and that relationally shape individual and group identities, positions, interests, and needs
  • A more critical conception of diversity understands how people and their experiences are socially and politically constituted in relation to each other, even within a community with a shared identity
  • Such analysis is foundational to a more nuanced conceptualization of what the curriculum and pedagogies need to be and for whom
  • Accessibility then is directly entangled with social realities and the biases, barriers, and inequities experienced differently within social minority groups
  • Accessibility must be framed with a clear view of how social markers of difference intersect to inform experiences of access

Rad Resources:

As I have written before, diversity is often used as a 'safer' concept within institutions. Yet those researchers who have examined the limits of diversity as an institutional marker make an incredibly strong case for why we should understand the function of its uses. We need to be cautious and, as evaluators, ask whether the use of diversity in fact undermines goals toward equity and social justice.

 

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring the WORK of evaluation. The contributions this week are tributes to the behind-the-scenes and often underappreciated work evaluators do. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
