AEA365 | A Tip-a-Day by and for Evaluators


Hello AEA365!  I’m Paul Collier. Over the last two years I worked as the Data and Evaluation Manager at the San Francisco Child Abuse Prevention Center (SFCAPC), a mid-size nonprofit focused on ending child abuse in San Francisco. At SFCAPC, I worked with all our 50+ staff members to help them use our data to serve clients better.

Rad Resource: Early in my time at SFCAPC, I read a book that changed how I approach my work:

I believe great people to be those who know how they got to where they are, and what they need to do to go where they’re going. They go to work on their lives, not just in their lives… They compare what they’ve done with what they intend to do. And when there’s disparity between the two, they don’t wait very long to make up the difference. – Michael E. Gerber, The E-Myth Revisited

Reading Gerber’s book convinced me of the importance of systems and habits in helping people succeed at their jobs. SFCAPC had a database that held client information – Efforts to Outcomes – but had few habits for improving that system and using that data to serve our clients better. So, I set out to create habits for how we do our “data” work.

Rad Resources: I read through books, blogs, and web sites, and talked to mentors and friends to get a sense for what other organizations do. I found many resources that shaped my approach at SFCAPC – some of the best are:

  • The books, whitepapers, and newsletter from the Leap of Reason Institute – The free e-books on this site by Mario Morino and David Hunter, and the institute’s recent whitepaper, The Performance Imperative, are the resources I recommend most often to others.
  • Sheri Cheney Jones’ book Impact and Excellence – Jones’ book contains many useful strategies she uses in her consulting practice to help clients generate insights from their data when resources are scarce.
  • The Root Cause Blog – Root Cause is another consulting firm that supports nonprofits in creating effective data and evaluation habits; they share some of their tricks on their blog.
  • The Data Analysts for Social Good professional association – Members get access to 25+ webinars on topics ranging from justifying the return on investment of data analysis to introductory analytical techniques using Excel, R, and other platforms.

While these resources were useful, I still struggled to find examples of data and evaluation habits. Tomorrow, I’ll share six specific habits we developed at SFCAPC to manage our “data” function and provide consistent value to our staff.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Neha Sharma. I work with the World Bank, which hosts the CLEAR Initiative – Centers for Learning on Evaluation and Results. CLEAR is a global partnership program that aims to build M&E capacity through Regional Centers. CLEAR Regional Centers work with a diverse set of clients (government, civil society, international donor agencies, and the private sector) to increase awareness and knowledge about how to use evidence in decision making.

Lessons Learned:

Through my work over the last few years, I’ve been learning more about the science of delivery, a concept largely concerned with issues of implementation. McKinsey frames it as follows: “Delivery is both an art and a science. We think the art is in the innovation and adaptability of the actors and different delivery models, while the science lies in replicating and scaling those models. The needs are great—but so are the opportunities and the resources that we can mobilize if we all work together.”

Some problems are easy to solve or improve, but many are harder. Of critical importance to our work in evaluation capacity development is capturing complex delivery challenges that are shaped by realities on the ground. High-quality services, sound technical solutions, and good intentions are not sufficient to increase the uptake of evidence. Political context, institutional structures, resource constraints, and entrenched behaviors all shape the outcome of a capacity development intervention. The many capacity development delivery modalities we use – training, advisory services, roundtables, research papers, etc. – have to take these complexities into account.

To capture lessons from our work, we M&E professionals can take inspiration from the field of delivery science. If you are interested in delivery challenges, here are some resources to inspire and inform you.

Rad Resources:

  1. Voices on Society: The art and science of delivery (McKinsey) features a variety of experts who share their perspectives on important social issues and delivery.
  2. Global Delivery Initiative at the World Bank is a collaborative effort across the international development community to work together to capture delivery lessons and support practitioners to use this knowledge.
  3. Escaping Capability Traps through Problem-Driven Iterative Adaptation (PDIA) – a Center for Global Development working paper by Matt Andrews, Lant Pritchett, and Michael Woolcock – suggests an approach to reform initiatives built on four core principles.
  4. Implementation Science is a journal that publishes research on the scientific study of methods that promote the uptake of research findings in healthcare (clinical, organizational or policy contexts).
  5. Nick Milton’s The Lessons Learned Handbook provides practical approaches to learning from experiences.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi. We’re Sherry Campanelli, Program Compliance Manager, and Laura Newhall, Clinical Training Coordinator, from the University of Massachusetts Medical School’s Disability Evaluation Services (DES). As an organization, we are committed to following Lean principles to enhance our services for people with disabilities. Lean principles help organizations maximize value while minimizing waste. These principles can be used internally, as we evaluate our own working processes, and externally, as we work through evaluations with external partners (as demonstrated in some of our previous blogs showcasing QI initiatives).

One Lean tool is the Idea Board, which is used to eliminate waste and enhance efficiency by tapping employee creativity and knowledge and by engaging employees in problem solving. An Idea Board is a simple and efficient strategy for gathering and acting on employee suggestions to improve organizational processes. It provides a visible format for displaying and tracking ideas from inception to disposition.

An Idea Board is typically accompanied by a regular structured opportunity for staff to “huddle” to present, discuss, and plan for implementation of ideas. The Idea Board is a place to document ideas, assign tasks, check on progress, and record outcomes. Staff must provide their ideas in writing by briefly naming the problem and suggesting a possible solution.
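
If your team also wants a lightweight digital record alongside the physical board, the sketch below is one way to capture an idea from inception to disposition: the problem, the suggested solution, the owner assigned at a huddle, and the recorded outcome. This is a hypothetical illustration, not a DES tool; the names, statuses, and example entries are invented.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical statuses mirroring the inception-to-disposition flow described above.
STATUSES = ("new", "in discussion", "assigned", "implemented", "closed")

@dataclass
class Idea:
    problem: str                  # brief statement of the problem
    suggestion: str               # possible solution proposed by the employee
    submitted_by: str
    submitted_on: date = field(default_factory=date.today)
    owner: Optional[str] = None   # assigned at a huddle
    status: str = "new"
    outcome: Optional[str] = None # recorded at disposition

    def advance(self, status: str, note: Optional[str] = None) -> None:
        """Move the idea to a later stage and optionally record an outcome."""
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        if note:
            self.outcome = note

# Example huddle updates (illustrative only)
idea = Idea("Duplicate data entry on intake forms",
            "Pre-fill forms from the referral database",
            submitted_by="A. Staff")
idea.owner = "B. Buddy"
idea.advance("assigned")
idea.advance("implemented", note="Intake time reduced; no work shifted to another department")
```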


Advantages of an Idea Board:

  • Builds synergistic solutions; employees who do the work have the most knowledge of where problems lie and how to do things better.
  • Enhances communication among employees and builds morale.
  • Empowers staff to be problem solvers in addition to problem identifiers.
  • Provides a format for employees at all levels to give input into process improvement.

 

Lessons Learned:

  • Provide Lean orientation for all Idea Board participants prior to implementation.
  • Meet on a regular basis to discuss ideas; keep meetings short and focused (5-30 minutes maximum).
  • Include employees with common functions and responsibilities.
  • Establish ground rules at the first meeting that are agreed upon by all participants.
  • Ask for volunteers to facilitate and to act as a buddy for each huddle, with the goal of rotating these roles among all members. The facilitator leads the huddle, and the buddy supports the facilitator by documenting decisions/progress on the Idea Board.
  • Begin huddles by encouraging members to share successes; provide opportunities for all members to participate.
  • Supportive managers are key to successful implementation of ideas.
  • Ensure that ideas that are implemented result in overall efficiencies and not just passing a problem to another department.

Rad Resources:

“Invest a few moments in thinking. It will pay good interest.” ~Anonymous

“Your lean process should be a lean process.” ~Author Unknown

“There is nothing so useless as doing efficiently that which should not be done at all.” ~ Peter F. Drucker

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I’m Ron Dickson. After 23 years in a global business with more than 100,000 employees, I recently retired and joined a local nonprofit with fewer than 20 employees. While my role in each organization was similar (measurement and evaluation), I’ve already noticed some key differences. My experiences may help my evaluation colleagues in the nonprofit sector understand how to work better with their corporate partners.

Lesson Learned: It’s about the client’s needs, not your brilliance.

You may have gotten your degree exploring the differences among measurement, evaluation, and assessment, but the more you talk about them, the less progress you’ll make. That doesn’t mean you have to use a term in a way that appears wrong, but never, ever correct a client in public. Instead, use the terms properly and consistently while you talk about what matters: the impact of the work and the results you need to achieve. Once they believe that you’re on the same side, those who are curious will ask for your help. (I had a similar experience with how to pronounce Rensis Likert’s surname. You can learn more about that issue here: http://core.ecu.edu/psyc/wuenschk/StatHelp/Likert.htm)

Lesson Learned: Don’t assume nobody cares about results.

Although many corporate employees may have experienced the nonprofit world only from the perspective of labor, don’t presume they aren’t eager to understand the bigger picture. If you join the conversation ready to draw a clear line from the time, money, or equipment being requested to the goals you’re pursuing (and ready to name the evidence that will be used to prove that link), a prospective corporate donor will be far more likely to work with you. They may be interested in sending their employees to help pack food boxes, but knowing what you are doing to reduce food insecurity in your community could lead to a more enduring engagement.

Lesson Learned: Be the expert in your area.

Businesses stress accountability, both for their own employees and for the organizations they work with. You should present yourself and your nonprofit as the business partner you hope to be: know the basic facts about the community you support, how you compare to others doing similar work, how your impact has grown over time, what may lie ahead, and how you will continue to evaluate the work you do. Your corporate colleague is more likely to trust you with their time, money, or equipment once you’ve demonstrated that those resources will be in capable hands.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I am Angie Aguirre from the INDEX program of the Eunice Kennedy Shriver Center at the University of Massachusetts Medical School. INDEX specializes in designing web sites, online courses, learning-management systems, and online databases, all of which are accessible to people with disabilities. Many of you are developing the same kinds of products as part of your evaluation efforts. At INDEX, accessibility comes first! Since my colleagues and I concentrated on accessibility issues in a previous blog (see here), I’m continuing with that theme!

Webinars enable you to present, lecture, or deliver a workshop over the web. They incorporate audio and visual elements and can sometimes include audience interaction. It can’t be asked too often: what makes your webinar accessible? And it can’t be said enough that accessibility promotes a culture of inclusion and supports people with disabilities. The more people we can bring to the table, the better our evaluation efforts and the better we become as a society. Moreover, it’s the law!

Because needs vary with the kind of webinar you’re providing and with your audience, ask participants at registration whether they need accommodations.

Hot Tips: Choosing the Right Platform

Several features are needed for a webinar platform to be accessible. Be sure to look for:

  • integrated captioning;
  • screen reader compatibility; and
  • multiple ways of communicating with and engaging participants.

Providing Accommodations

  • For Auditory
    • Use Remote CART (Communication Access Real-time Translation). It is a service in which a certified CART provider listens to the webinar presenter and participants, and instantaneously translates all the speech to text. Most CART services are familiar with various types of webinar platforms, and can walk you through set-up.
    • If you are showing a video, be sure to provide captions (see the caption-file sketch after this list).
  • For Visual
    • Webinar platform controls should be operable using keyboard commands.
    • All content should be readable by a screen reader, including the text content of a PowerPoint slide.
    • Provide accessible copies of the entire presentation, including handouts, before the webinar. This enables webinar participants to review the information ahead of time so they can focus on listening to the presenters.
  • For Cognitive
    • Provide a way for participants to respond verbally by phone/microphone, or by typing in a chat pod.
    • Participants should have the ability to:
      • use the caption pod and adjust it to their liking;
      • listen to the recorded content at a later time; and
      • control the speed at which content is delivered (presenters/moderators may need to slow down a bit).
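
If you end up producing captions yourself for a recorded webinar video, a caption file is often all a player needs. Below is a minimal sketch, in Python, that writes a WebVTT (.vtt) caption file from a timed transcript. The transcript entries and file name are assumptions for illustration; most HTML5 video players and many hosting platforms can load the resulting file alongside the video.

```python
# Minimal sketch: turn a timed transcript into a WebVTT caption file.
# The (start_seconds, end_seconds, text) tuples below are illustrative only.
transcript = [
    (0.0, 4.0, "Welcome to today's webinar on accessible evaluation."),
    (4.0, 9.5, "All slides were shared ahead of time in an accessible format."),
]

def to_timestamp(seconds: float) -> str:
    """Format seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

with open("webinar_captions.vtt", "w", encoding="utf-8") as vtt:
    vtt.write("WEBVTT\n\n")                     # required file header
    for start, end, text in transcript:
        vtt.write(f"{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")
```

For live webinars, a CART provider or the platform’s integrated captioning still does this work for you; the sketch is only for recordings you caption after the fact.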

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Happy Saturday everyone!  I’m Liz Zadnik, aea365’s Outreach Coordinator.  I have a bit of a confession to make: I may have set too high a bar for myself – I expected to share reflections from each of the books on my reading list last year.  Definitely missed the mark.  I’m disappointed in myself, but I’m ready to jump back in and start fresh!  So away we go…

I love design and creating accessible and relevant tools, so it made sense that I was drawn to Stephanie Evergreen’s newest book, Effective Data Visualization: The Right Chart for the Right Data (I even preordered!).  I have long been a fan of her blog and contributions to the field, and I was very excited to dive into the book as my summer beach read.  It presents research on visualization and on how folks absorb and best interpret graphs and charts.

Lesson Learned: As a trainer, I’ve been conditioned to share what’s going to happen.  I offer background and some tips for practice (it’s not that simple, but you know…) and I try to be as straightforward as possible.  I’m not a great storyteller, and this book helped bridge that gap for me.  It helped me over that last block.  I highlighted and underlined, “We visualize to communicate a point.  We also visualize to add legitimacy and credibility.”  Skills and new information alone don’t make something meaningful; what matters is helping people synthesize the information and make connections.
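
To make that idea concrete, here is a minimal sketch of the “state the point” principle, in Python with matplotlib. It is not an example from the book, and the numbers are invented for illustration: the chart’s title communicates the takeaway instead of merely labeling the data.

```python
import matplotlib.pyplot as plt

# Illustrative data only: share of staff reporting they use evaluation findings.
years = [2013, 2014, 2015, 2016]
share = [0.35, 0.42, 0.55, 0.68]

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(years, share, marker="o", color="#1f77b4")

# State the point in the title instead of a generic "Survey results" label.
ax.set_title("Use of evaluation findings has nearly doubled since 2013", loc="left")
ax.set_ylabel("Staff reporting use of findings")
ax.set_ylim(0, 1)
ax.set_yticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yticklabels(["0%", "25%", "50%", "75%", "100%"])
for side in ("top", "right"):            # declutter, per common dataviz advice
    ax.spines[side].set_visible(False)

fig.tight_layout()
fig.savefig("findings_use.png", dpi=150)
```

The same move works in Excel or any other charting tool: lead with the finding, then let the chart carry the evidence.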

Rad Resources: Exploring the opportunities of data visualization has been a very exciting process!  This book has really inspired me to discover new ways to tell stories and generate excitement for evaluation and research.  I’m positive I’ll be referring back to highlighted and dog-eared pages of the book, and I’m also interested in pursuing some online courses and other learning opportunities.

I have a few more books on my shelf and even more on my wish list.  I’m hoping I’ll be able to share more thoughts with you in the future.  Feel free to share your favorites with me – I’m always looking for recommendations!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Kirk Knestis, CEO of Hezel Associates. The US Office for Human Research Protections defines “research” as any “systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” (Emphasis mine.) We often get wrapped up in narrow distinctions (populations, for example), but I’m increasingly of the opinion that the clearest test of whether a study is “evaluation” or “research” is whether it’s supposed to contribute to generalizable knowledge; in this context, that means knowledge about teaching and learning in science, technology, engineering, and math (STEM).

The National Science Foundation (NSF) frames this as “intellectual merit,” one of the two merit review criteria against which proposals are judged: a project’s “potential to advance knowledge” in a program’s area of focus. The Common Guidelines for Education Research and Development expand on this, elaborating how each of their six types of R&D might contribute, in terms of theoretical understandings about the innovation being studied and its intended outcomes for stakeholders.

For impact research (Efficacy, Effectiveness, and Scale-up studies), dissemination must include “reliable estimates of the intervention’s average impact” (p. 14 of the Guidelines), that is, findings from inferential tests of quantitative data. Dissemination might, however, be about theories of action (relationships among variables, whether preliminary, evolving, or well specified) or about an innovation’s “promise” to be effective later in development. This is, I argue, the most powerful aspect of the Common Guidelines typology; it elevates Foundational, Early Stage/Exploratory, and Design and Development studies to the status of legitimate “research.”

So, that guidance defines what might be disseminated. Questions will remain about who will be responsible for dissemination, when it will happen, and by what channels it will reach desired audiences.

Lessons Learned:

It will likely be necessary for the evaluation research partner to work with client institutions to help them with dissemination. Many grant proposals require dissemination plans, but they are typically the purview of the grantee, PI, or project manager, rather than the “evaluator.” These individuals may well need help describing study designs, methods, and findings in materials to be shared with external audiences, so think about how deliverables can contribute to that purpose (e.g., tailoring reports for researchers, practitioners, and/or policy-makers in addition to project managers and funders).

Don’t wait until a project is ending to worry about disseminating what has been learned. Project wrap-ups are busy enough, and interim findings or information about methods, instruments, and emerging theories can make substantive contributions to broader understandings related to the project.

Rad Resource:

My talented colleague-competitor Tania Jarosewich (Censeo Group) put together an excellent set of recommendations for high-quality dissemination of evaluation research findings for a panel I shared with her at Evaluation 2014. I can’t do it justice here, so go check out her slides from that presentation in the AEA eLibrary.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Dana Gonzales and Lonnie Wederski, institutional review board (IRB) members at Solutions IRB, specialists in the review of evaluation research.

Why talk about IRB review for evaluations of science, technology, engineering, and math (STEM) education projects? Most simply, federally funded projects may require it. You may also ask, “Why aren’t all of these evaluations exempt?” IRB reviewers apply the Code of Federal Regulations (CFR) in their decisions, and many STEM evaluations include children. Under CFR rules, only a narrow range of research involving children is exempt from review, such as research applying educational tests or observations of public behavior in which the investigator does not participate. Interviews and focus groups with minors are not likely to qualify for exempt review, as they are seldom part of the normal educational curriculum. Randomization to a control group would not meet exempt category requirements for the same reason. Both would, however, qualify for expedited review if they pose no more than minimal risk to participants.

So, do you need to use an IRB? Ask these questions:

  • Is IRB review required by the grant or foundation funding the project?
  • Does the school district require IRB review?
  • Do you intend to disseminate findings in a publication requiring IRB review?

If the answer to any of those questions is “yes,” you need an IRB, and at that point uncertainty can strike! Maybe this is the first time you’ll use an IRB (you are not alone), or maybe you remember unpleasant experiences with an academic IRB. Fear not, evaluators! Many IRB reviewers understand the differences between clinical studies and evaluations. Some IRBs specialize in evaluations, employing reviewers with expertise in the methods evaluators use who recognize that phenomenology, grounded theory, ethnography, and autoethnography are valid study approaches. After all, who wants to educate an IRB when you are paying them?

Rad Resources:

Hot Tips:

  • Have questions regarding the ethics of recruitment or consent? Some independent IRBs will brainstorm with you and answer “what if” questions. Ask for a complimentary consultation with a reviewer.
  • Ready to submit your evaluation for review? Ask the IRB if free pre-review of study documents is provided, to save time prior to formal review. Ask for a list of the documents required by the IRB.
  • Most important, know the review timeframe in advance! If the IRB requires two weeks for review, you need to plan accordingly. Some IRBs routinely review exempt and expedited studies in 24-48 hours, so timeframes can vary widely.

We hope you found the information provided helpful.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Lori Wingate and I am Director of Research at The Evaluation Center at Western Michigan University. I also lead EvaluATE, the evaluation resource center for the National Science Foundation’s Advanced Technological Education (ATE) program.

NSF established the ATE program in response to the Scientific and Advanced-Technology Act of 1992, which called for “a national advanced technician training program, utilizing the resources of the nation’s 2-year associate-degree-granting colleges.” ATE’s Congressional origin, characterization as a training (not research) program, and focus on 2-year colleges set it apart from other NSF programs. Research is not the driving force of the program; it existed for 10 years before inviting proposals for research.

Since 2003, Targeted Research on Technician Education has been one of several ATE program tracks. Anecdotally, I know the program has found it challenging to get competitive research proposals. Common problems include university-based researchers treating the 2-year colleges as “guinea pigs” on which to try out their ideas, and 2-year faculty being short on research expertise.

While few of ATE’s ~250 projects are targeted research, all must be evaluated. NSF underscored the importance of evaluation when it began supporting the Evaluation Resource Center in 2008. Since 2010, the program has required that proposal budgets include funds for independent evaluators.

At the 2014 ATE PI conference, I moderated a session on ATE research and evaluation in which the Common Guidelines for Education Research and Development figured prominently. These guidelines were developed by NSF and the Institute of Education Sciences as a step toward “improving the quality, coherence, and pace of knowledge development in [STEM] education,” but some participants questioned their relevance to the ATE program. Recent evidence suggests more education is needed. While just 7 of 202 respondents to the 2016 survey of ATE PIs identified their projects as “targeted research,” 58 spent some of their budgets on research activities. Of those 58, almost half had either never heard of the Common Guidelines (21%) or had heard of them but had not read them (28%). I sense that PIs based at 2-year colleges may see the growing emphasis on research as a threat to the program’s historic focus on training technicians. They seem to have embraced evaluation, but may not be sold on research.

Lessons Learned:

  • The time is ripe for evaluators with strong research skills to collaborate with ATE PIs on research.
  • Evaluation results (project-specific knowledge) may serve as a foundation for future research (generalizable knowledge), thus connecting evaluation to research.

Rad Resources:

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I am Laurene Johnson from Metiri Group, a research, evaluation, and professional development firm focusing on educational innovations and digital learning. I often work with school district staff to provide guidance and research/evaluation contributions to grant proposals, including those for submission to the National Science Foundation (NSF).

Programs like Discovery Research PreK-12 (DRK-12) present some interesting challenges for researchers and evaluators. Since I work at an independent research and evaluation firm, I don’t implement programs; I study them. This means that in order to pursue such funding, and to research things I think are cool, I need to partner with school or district staff who do implement programs. They likely implement them quite well, and may even have some experience obtaining grant funding to support them. This is both a real advantage in writing an NSF proposal and a real challenge. A successful research partnership (and proposal) will involve helping the practitioners understand where their program fits into the entire proposed project. It can be difficult for these partners to understand that NSF is funding the research, and is funding their program or innovation only because I’m going to research it. This can be a huge shift for people who have previously received funding to implement programs. Depending on the origin of the program, the individual I’m partnering with might also have a real attachment to it, which can make it even more difficult to explain that it’s going to “play second fiddle” to the research in a proposal.

This is not an easy conversation to have, but if researchers have it successfully, we can open up many more doors to partnership opportunities in schools.

Hot Tip: Be prepared to have the research-versus-implementation conversation multiple times. In particular, a partner who has written many successful program proposals will tend to revert to what s/he knows and is comfortable with as the writing progresses.

Lesson Learned: Even if prior evaluations have indicated that a program might be effective, the client must clearly explain the research base behind the program’s design and components. My experience is that many programs in schools are designed around staff experience of what works, rather than having a foundation in what research says works (treating instruction as an art rather than a science). This may be fine for implementing the program, but it falls short of funders’ expectations for designing an innovation in a research context.

Hot Tip: Try to get detailed information about the program in very early conversations, so you can write the research description as completely as possible. Deliver this to the client as essentially a proposal template, with the components they need to fill in clearly marked.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

