AEA365 | A Tip-a-Day by and for Evaluators

Hello loyal readers! I’m Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor with a few tips on creating handouts for your next presentation (#Eval17 perhaps?).

Repeat after me: Slides are not handouts! Slides are NOT handouts! I know, I know…it’s just so easy to print out your slides and give them to workshop participants, team members, or meeting attendees. The trouble is that when a presenter does this, one of two things tends to happen:

  1. The slides are loaded with text (because the presenter wants participants to go home with some key points to review later, a noble intent) and that compromises the effectiveness and success of the presentation. The thing is, according to Nancy Duarte, “An audience can’t listen to your presentation and read detailed, text-heavy slides at the same time (not without missing key parts of your message, anyway).”
  2. The slides are well designed with very little text and instead feature relevant graphics and images such that the slides themselves make little sense when separated from the presenter and presentation.

Condition #1 leaves participants with a set of key points that could have been distributed as a handout with no need for the presentation, while condition #2 leaves participants with a potentially great presentation experience but no easy way to review or remember key points (unless they were taking their own notes).

Hot Tip: Creating a separate presentation handout mitigates both of the above conditions. Here’s one caveat before we continue: Not all presentations require a handout. In fact, not all presentations even require slides! And, it’s certainly feasible to have a “slideless” presentation that does include a handout. The point is to be intentional about whatever resources accompany a presentation. Our Potent Presentations Initiative (p2i) Messaging tools can help with that aspect of presentation planning.

Rad Resource: So, without further ado… The newest tool in the p2i toolbox is our Guidelines for Handouts, now available on our Presentations Tools and Guidelines page. Use this tool to gain insight and perspective into WHY we use handouts, HOW to create effective handouts, WHAT should be included in a handout, and WHEN to distribute handouts – before, during, or after a presentation. Guidelines for Handouts includes an example of what a presentation handout could look like, and also features loads of Insider Tips and links to additional content.

So, let’s make a deal. I promise to deliver an idea-packed handouts tool, and you agree to stop printing your slides, OK?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Carrie Petrucci, MSW, Ph.D. I came to evaluation by doing it before I knew I was doing it. Truth be told, my mixed methods dissertation was indeed an “evaluation”. However, I discovered early on that I didn’t dare call it that, because evaluation research was “pooh-poohed” at most research universities (at that time, anyway). As an example, in my ethnography class, I’ll never understand why another student’s work doing ethnographic observation in a “laundree-mat”, as he called it, was somehow more valued by the instructor than my court observations, but there it was.

I had earned my master’s in Social Welfare prior to going back to school for my Ph.D., and had worked as a child protective services worker and as a program director in community corrections, so there was no escaping the practical leanings of my work. My Ph.D. was in Social Welfare with a self-imposed minor in criminal justice. For both my MSW (1991-1993) and my Ph.D. (1998-2002), very few people understood why I was combining these two disciplines, but to me the answer was simple: that’s where our clients were (in jail or prison). Sadly, the statistics bear this out, then and now. Early on, I found common ground for combining social welfare and criminal justice in scholars such as Michael Tonry, Norval Morris, David Wexler, Bruce Winick, Richard Enos, Joan Petersilia, and Al Roberts. Later there would be many more. So what’s the point of this story?

Hot Tips:

  • First, be passionate about your work, and don’t be dissuaded by others who may not share your point-of-view. My interdisciplinary approach was not “in vogue” at the time that I initially pursued it, but has since become highly valued.
  • Second, find mentors who share your passion, or at least parts of it. I was incredibly fortunate to have an MSW research advisor and a dissertation committee that stood by my interdisciplinary approach. And it remains very much a part of my work almost 20 years later.
  • Third, trust your instincts, but also come to understand why you do what you do and the evidence that supports it, and explore the reasons against it. What other scholars and experts in the field share your view? What evidence do they provide? What about the “naysayers” on how you do what you do? Learn from all of them.
  • Finally, as a contracted evaluator, work to get to a place in which you’re only taking on projects that matter to you; it may take a few years. The level of detail in this work is overwhelming, and in my opinion, the best way to maintain high standards is to care about what we do.

One last point – caring about what we do doesn’t mean we lack objectivity – but that’s another blog post.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings! My name is Jennifer Lyons, MSW, from Lyons Visualization, LLC. I am a social worker, data designer, and speaker. In my independent consulting business, I bring creative energy to making data intriguing and impactful, while helping clients transform the way they communicate their story. Today I want to talk about a method I like to use that both engages clients in the interpretation of data and sets the stage for an impactful visual summary of findings.

In this post, I am going to focus on a process to use after data is collected and analyzed. After analysis, it is time to dive in and highlight the story within the data. Part of storytelling with data is making meaning of the information in context. Our clients are the experts on the delivery of their programs, the people they work with, and the reporting context. It is important to include our clients in thoughtful interpretation of their data, so I will focus on using a worksheet to guide a data interpretation meeting and transform findings into a visual summary.

Hot Tip: Start by designing the data interpretation worksheet. This worksheet is the backbone of a visual executive summary of your findings. Below is an example of a simple data interpretation worksheet made for an evaluation of an after-school reading program. Included are graphic displays of the data with blank boxes that give clients space to add their interpretation. During the data interpretation meeting, you can use this worksheet to partner with clients to highlight and frame central findings in the data.

Hot Tip: Paste each graph from the worksheet on an empty slide and ask your clients to examine each data point. Prompt them with questions about what they see as positive, negative, and surprising about the findings. It is also important to ask your clients to think of relevant context. As a group, process everyone’s recommendations and thoughts. There are often a lot of important things being shown in one graph, but together, you can decide on what is most important. Then, write the most important takeaway(s) from the graph in the graph title. Repeat this process for each graph. By the end, you will have something like this:

Hot Tip: This completed worksheet can easily be transformed into a visual summary of your findings. To make the transition from worksheet to visual executive summary, a few key elements are still needed: carry the effective titles over from the worksheet, use color to showcase your story, and add an engaging visual.
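If you happen to build your graphs in code rather than a slide tool, the takeaway-as-title step is easy to apply there too. Below is a minimal sketch in Python with matplotlib; the scores, labels, and takeaway wording are invented for illustration and are not from the original worksheet.

```python
# Minimal sketch: turning a worksheet graph into a summary-ready visual.
# All data below is hypothetical, for illustration only.
import matplotlib.pyplot as plt

periods = ["Fall (pre)", "Spring (post)"]
avg_scores = [62, 81]  # hypothetical average reading scores

fig, ax = plt.subplots(figsize=(6, 4))
# One accent color carries the story; the baseline stays muted gray.
ax.bar(periods, avg_scores, color=["#b0b0b0", "#2a7fba"])

# The takeaway agreed on in the interpretation meeting becomes the title,
# replacing a generic label like "Average Reading Scores".
ax.set_title("Reading scores rose 19 points over the program year",
             loc="left", fontweight="bold")
ax.set_ylabel("Average reading score")

# Strip chart clutter so the takeaway stays front and center.
for side in ("top", "right"):
    ax.spines[side].set_visible(False)

fig.tight_layout()
fig.savefig("visual_summary_panel.png", dpi=200)
```

The same pattern repeats for each graph in the worksheet: one agreed-upon takeaway, one accent color, minimal clutter.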

Ta-da!  You have a nice visual report based on thoughtful data interpretation using your client’s feedback and expertise.  My hope is that by reading this post, you are more inspired to think of new ways to engage your client in the data and visually display findings.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi all! My name is Rachel Vinciguerra and I am a master’s student of international development and social work at the University of Pittsburgh. This summer I worked on two program evaluations in Haiti: one a mid-point evaluation for a girls’ empowerment program, the other a preliminary design of M&E protocols for an emerging foster care agency. Coming from a social work background, and as an American evaluator working in Haiti, I felt it was especially important that the studies be culturally responsive and take marginalized groups into consideration as major stakeholders.

Ultimately, it came down to sharing power with these groups throughout the evaluation process. I found that, when we put them at the center of design, implementation, and presentation, results were richer.

Hot Tip #1: Identify marginalized groups.

  • There are two pieces to this. First, you have to begin with considerable knowledge of the culture and community in which you are working in order to understand specific, and often complex, hierarchies of power. Second, you have to allow that structural knowledge to contextualize your early conversations with stakeholders in order to identify those groups in the program whose voices are not often heard.

Hot Tip #2: Engage marginalized groups on the same level as your organizational client.

  • Consider how you engage your organizational client as you plan for evaluation. Are they telling you what questions they want answered? Are you working with them to develop a theory of change model? Are you collaborating on the timeline of the evaluation? Now consider the marginalized groups in your evaluation and share power in the same way with them. They may be beneficiaries of the program, but they may also be groups within the organization that hired you.

Hot Tip #3: Ensure evaluation results can be understood by all involved.

  • It is Research 101: human subjects deserve access to the knowledge and research they help generate, and you can make sure they get it. In the evaluations I worked on, this meant translating all reporting into Haitian Creole and communicating the results in the same diverse modalities I had used for my client.

Lessons Learned:

  • Be patient. Be flexible. Be humble. Make and maintain space in your design to be responsive to marginalized groups and be ready to adapt quickly and with humility as needed.

Rad Resources:

 

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! My name is Jennifer Obinna and I am Research and Evaluation Director at The Improve Group, a consulting firm in Minnesota. Our evaluation practice strongly supports and encourages evaluation capacity development with our clients. One of the ways we do so is to facilitate logic model clinics that help clients and their stakeholders articulate the desired state of their program implementation. As I often serve as an Empowerment Evaluator for sexual and domestic violence prevention efforts, I will highlight resources from that work that can be applied to any sector’s evaluation.

Social work’s person-in-environment perspective ecologically considers an individual and that individual’s behavior within the environmental context in which the person lives and acts. In our logic model clinics, we borrow the social ecological model from public health to think through the person-in-environment as a program planning tool.

(Source: Centers for Disease Control and Prevention. Sexual violence prevention: beginning the dialogue. Atlanta, GA: Centers for Disease Control and Prevention; 2004.)

We ask stakeholders a series of questions that inform the logical framework for their program in its fully implemented and desired state.

  • What assumptions—beliefs about the program, the people involved, the context, and the way the program works—do we hold?
  • What resources do we have to support this?
  • What activity components do we carry out that lead to outcomes?
  • Who do we reach/influence?
  • What do we produce/deliver?

For individual and relationship-level outcomes, we ask:

  • What changes in learning, knowledge, attitude, skills, or understanding do we see?
  • What changes in behavior, practices, or decisions do we see?

For community and societal level-outcomes, we ask:

  • What changes to institutions do we see?
  • What changes in conditions in the community do we see?
  • What changes in social norms do we see?

To dive deeper into outcomes, we use the ABCDE method of writing outcomes (spelling out the Audience, Behavior, Condition, Degree, and Evidence). Example: “By the end of the program, 80 percent of program participants will be able to list two or more positive ways to communicate with peers, as evidenced by the results of a pre-test/post-test survey activity.”

Rad Resource: The Ohio Domestic Violence Network Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit.

Hot Tip: Stakeholders are not always enthusiastic about using paper or electronic pre-/post-tests.  Therefore, we encourage “activity-based assessments,” a method that integrates evaluation into the program experience or educational curricula.

The person-in-environment perspective calls on us to make sure our logic models include “external influences”: factors outside of the implementer’s control (positive or negative) that may influence the outcomes and impact of the program/project.
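Taken together, the clinic questions and the external influences sketch out the skeleton of a logic model. As a purely illustrative aid (the field names and example entries below are my own labels, not a prescribed schema from the clinics), the structure might be captured like this:

```python
# Illustrative logic model skeleton assembled from the clinic questions.
# Field names and example entries are invented, not a prescribed schema.
logic_model = {
    "assumptions": ["Beliefs about the program, the people, and the context"],
    "resources": ["What we have to support the program"],
    "activities": ["Activity components that lead to outcomes"],
    "reach": ["Who we reach or influence"],
    "outputs": ["What we produce or deliver"],
    "outcomes": {
        "individual_relationship": [
            "Changes in learning, knowledge, attitudes, skills, understanding",
            "Changes in behavior, practices, or decisions",
        ],
        "community_societal": [
            "Changes to institutions",
            "Changes in community conditions",
            "Changes in social norms",
        ],
    },
    "external_influences": ["Factors outside the implementer's control"],
}
```

Laying the elements out this way mirrors the linear sticky-wall exercise described below, before staff reshape the model into a diagram that resonates with their stakeholders.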

Cool Trick: The clinic needs to be 2 to 3 hours long per program. We convene two or three program designers/implementers in a room with a 12-by-6-foot portable sticky wall (nylon sprayed with spray mount) to facilitate laying out the logic model in a linear way together. Once documented, we encourage program staff to put the logic model elements into a diagram, shape, or symbol that resonates with them and their stakeholders. See page 15 of the Discovery Dating DELTA Evaluation Report for an example.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Some thoughts on teaching evaluation to social work students… I’m Brandon W. Youker, social worker, evaluator, and professor at Grand Valley State University in Grand Rapids, Michigan. After a dozen-plus years in higher ed, I’ve developed a commitment to community-based learning (CBL) as my primary pedagogy for teaching future social workers about program evaluation. Across numerous program evaluation courses, I’ve divided students into small evaluation teams and asked them to design, conduct, and report on evaluations for local non-profit organizations and programs.

The benefits to students include experiential learning (learning by doing) and honing their tools for thought, as evaluation is one of the highest-order thinking skills according to Bloom’s Taxonomy. Students also report enjoying the realism of the course and evaluation projects, as they work in real environments with real programs that make a real impact on real people. Lastly, students not only learn about evaluation; they also learn through serving some of the community’s most vulnerable and disenfranchised populations.

The organizations and programs benefit by receiving high-quality, independent, pro bono evaluation and evaluation consulting. The evaluation projects have enhanced organizations’ evaluation capacity by prompting deeper, more intentional thinking about evaluation and about program and consumer outcomes, and the organizations receive the student-created data collection instruments, which they can use or adapt.

It’s important to collaborate with the organizations to develop multi-semester, multi-course evaluation strategies, as well as to create relevant lectures and meaningful assignments. In terms of scholarship, partnerships have led to presentations at academic conferences and to journal publications. These evaluation projects allow me to serve my community, which in turn serves the university and the social work profession while building relationships with the local community.

Yes, there are obstacles to overcome. Nevertheless, the potential benefits clearly outweigh the effort for the students, community partners, and instructors. Besides, there are numerous CBL resources for course instructors.

I believe that evaluation is a social work tool for social justice. Thus, it is incumbent upon educators to encourage and support realistic and practical CBL experiences, which will ultimately lead to competent social workers who support sound evaluation and evidence-based practices and programs.

Hot Tips:

Most colleges and universities have CBL resources, guidelines, and policies to assist instructors (see the Association of American Colleges & Universities, which lists CBL as one of ten high-impact educational practices: https://www.aacu.org/leap/hips).

Rad Resources:

There is a robust literature on CBL and service learning—the benefits and obstacles as well as suggestions for implementation—and a few articles discuss CBL in program evaluation courses specifically. Newcomer (1985) provides a call to action for CBL pedagogy in program evaluation courses, while Oliver, Casiraghi, Henderson, Brooks, and Mulsow (2008) describe various evaluation pedagogies. Shannon, Kim, and Robinson (2012) discuss CBL for teaching evaluation and offer practical suggestions for doing so; and Campbell (2012) provides a guide for implementing CBL in social work courses.

Thanks for your interest and please contact me to discuss CBL further.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Nicole Clark, licensed social worker and owner of Nicole Clark Consulting, where I partner with community-based groups, local/national organizations, schools and more to design, implement, and evaluate programs and services geared toward women and girls of color.

In 2016, I shared on AEA365 why it matters that evaluators know the difference between the types of social workers we engage with. Today, I’m discussing ethics, its role in social work, and how it all aligns with AEA’s Guiding Principles for Evaluators.

Rad Resource: The first edition of the National Association of Social Workers’ Code of Ethics was approved on October 13, 1960. Since its last revision in 2008, the Code of Ethics has become the standard for social workers throughout the field, in many organizations, and for state social work licensing laws.

Hot Tip: Section 5 of the NASW Code of Ethics focuses on social workers’ ethical responsibilities to the profession. Under section 5.02 (titled “Evaluation and Research”), social workers engaged in evaluation should:

  • Monitor and evaluate policies, implementation of programs, and practice interventions
  • Promote and facilitate evaluation and research to contribute to the development of knowledge
  • Critically examine and keep current with emerging knowledge relevant to social work and fully use evaluation and research evidence in their professional practice
  • Follow guidelines developed for the protection of evaluation and research participants
  • Obtain written informed consent or assent from participants/guardians, disclosing the nature, extent, and possible benefits and risks associated with evaluation and research (and inform participants of their right to withdraw from an evaluation at any time without penalty)
  • Take appropriate steps to ensure that participants have access to appropriate supportive services, and take appropriate measures to protect participants from mental distress or unwarranted physical harm during an evaluation or study
  • Discuss information related to an evaluation or study only with appropriate individuals professionally related to the evaluation
  • Accurately report findings, take steps to correct errors found in published data using standard procedures, and ensure the confidentiality of program and study participants when reporting findings and research results
  • Educate themselves, students, and colleagues about responsible evaluation and research practices.

Lesson Learned: The NASW Code of Ethics aligns with AEA’s Guiding Principles for Evaluators; both serve as cornerstones of sound, responsible, ethical behavior for social work evaluators. Both focus heavily on the client-professional relationship by highlighting the dignity of our clients and the overall societal contribution of evaluation. AEA’s Guiding Principles, however, take up where the Code of Ethics leaves off by placing greater emphasis on stakeholder engagement in the promotion of collaborative inquiry, equity, and cultural responsiveness related to race, gender, religion, age, and more.

What better profession for social workers to be aligned with than evaluation?

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Rodney Hopson, former AEA President and current Program Director (with Brandi Gilbert) of the AEA Graduate Education Diversity Internship (GEDI) Program, which is currently housed in the College of Education and Human Development at George Mason University where I am faculty in the education policy program.

I am excited to welcome colleagues this fall to Evaluation 2017 in Washington, DC, for at least two reasons:

1) The conference theme, From Learning to Action, could not come at a more propitious time in our nation and in our world. The four subthemes (learning to enhance evaluation practices, learning what works and why, learning from others, and learning about evaluation users and uses) imply that we evaluators ought to make good use of the lessons we learn in our practice, discipline, and profession. We have plenty of examples in our global and local communities that reveal how intolerance, hate, and bitterness continue to rip at the fibers of our democratic possibilities of equity and social cohesion. If anything, the events of Charlottesville in early August point to how far we have to go. The conference is a call to action in the complex ecologies of our practice, where relationships matter; we have a responsibility to act and to find relevance in solving the wicked problems of our practice.

Hot Tip: Find a way to move from learning to action while attending Evaluation 2017. For instance, our local affiliate has ways to become active through Evaluation without Borders, where you can lend a hand to local community-based agencies. Or, find a way to visit your local representative through EvalAction.

2) Washington, DC is a great city to explore, rich with ethnically and linguistically diverse neighborhoods and communities, with yummy food to eat, places to visit, and people to see!

Just last week, my wife Deborah and I strolled east of the River in the Anacostia Historic District, where we visited the Anacostia Community Museum and Cedar Hill, home of the famous abolitionist Frederick Douglass. African Americans have an inspiring and proud history in the city that dates back as early as 1800, when they made up 25% of the population, according to documents found in publications about the African American Heritage Trail.

Hot Tip: See how many locations you can find on the heritage trail and make a half day of it by visiting several before you leave the city:

  • Take in a show at the Howard Theater,
  • Visit the African American Civil War Memorial and Museum,
  • Check out the city’s first independent black Episcopal church, St. Luke’s, under the leadership of Alexander Crummell, noted missionary, intellectual, and clergyman, and
  • Stop by the Phyllis Wheatley YWCA, or even sites in Georgetown, the city’s oldest neighborhood.

Come to Evaluation 2017 ready to learn! Get nourished on what the city has to offer and get ready to act as you leave!

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


Hello everyone!  Yvonne M. Watson here.  I’m a long-time member (almost 15 years) of AEA and a doctoral student at The George Washington University’s Trachtenberg School of Public Policy and Public Administration.  I’d like to share a few brief lessons learned on the topic of Evaluation Users and Evaluation Use, one of four focus areas for the 2017 Conference theme Evaluation: From Learning to Action.

Perhaps the greatest thrill of victory and agony of defeat for any evaluator is the use of the evaluation report and findings. Many of the evaluation field’s pioneers, thought leaders, and emerging practitioners have written extensively on this topic. Understanding the many facets of use, including evaluation users, uses, barriers, and the facilitation of greater use, can help evaluators strategically invest their time and resources to ensure the evaluation is designed with the intended use and user in mind. Here are a few things to consider.

Lessons Learned:

Know Your Audience. Understanding the intended user is critical. Evaluation users can include managers and staff responsible for managing and administering federal, state, and local government programs, as well as non-profit and for-profit organizations. Funders, academic researchers, Congressional members and staff, policy makers, citizens groups, and other evaluators are also intended users of evaluations.

Understand How the Evaluation Will Be Used. Carol Weiss offered the field four categories of use for evaluation findings. Instrumental use involves using evaluation findings in decision making to influence a specific program or a policy more broadly. Evaluation findings that generate new ideas and concepts and promote and foster learning about the program represent conceptual/enlightenment use. External influence on other institutions and organizations involves the use of evaluation results by entities outside of the organization that commissioned the evaluation. Evaluation findings used symbolically or politically to “justify preexisting preferences and actions” represent political use. The use of evaluation findings for accountability, monitoring, and development was introduced by Michael Quinn Patton.

Explore the Potential Barriers to Use. Several barriers might limit the use of the evaluation: timeliness (results not available when needed to inform decision making); insufficient resources (lack of resources to implement recommendations); or the absence of a learning culture (a culture of continuous learning and program improvement).

Consider Strategies to Facilitate Use. Design your evaluation with the intended use and user in mind. Michael Quinn Patton introduced the field to Utilization-Focused Evaluation, which emphasizes evaluation design that facilitates use by the intended users. Lastly, clearly communicate evaluation results. Recently, data visualization has emerged as a strategy to promote evaluation use by communicating research and findings in a way that helps evaluation users understand the results and make decisions.

Rad Resources:

Have We Learned Anything New About the Use of Evaluation?, Carol Weiss

Utilization-Focused Evaluation, Michael Quinn Patton

AEA Data-Visualization and Reporting Topical Interest Group

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


Hi, I am Teresa Derrick-Mills, a researcher and evaluator at the Urban Institute in DC. I love learning and researching at the intersections of policy and practice, research and translation to practice, and issues or problems that invite a multi-disciplinary or multi-policy area approach. Today, I am here to spark your interest in the Evaluation 2017 Learning from Others Conference Track.

Given the interdisciplinary nature of evaluation, you might be wondering: who is an “other” that I might learn from? Where can or should I look to expand my evaluation toolbox to generate appropriate evidence in this complex and dynamic world? In this context, I see the “other” through at least five dimensions:

  1. Other researchers who don’t identify as evaluators but whose work we can learn from (see conference tip below for some examples)
  2. Other individuals who could be both the subjects of and participants in our research
  3. Other evaluators whose methodological expertise differs from ours
  4. Other evaluators whose cultures differ from ours
  5. Other evaluators whose evaluation environments differ from ours

Hot Tip – For the Conference:

The President’s Strand includes some sessions that have been very intentionally crafted to expand our learning from others toolkit. See session 3517 to learn from feminism, session 2105 to learn from game theory, session 3260 to learn from implementation science, and session 1686 to learn from each other the ways that race and class influence our evaluation designs and findings.

Hot Tip – for the local DC area:

One great place to learn from others is the National Geographic Museum, my personal favorite. You can take the Metro Red Line down to Farragut North. It isn’t one of the free museums, but the vivid, wall-size pictures provide new perspectives on the world (and how to study it).

We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

