AEA365 | A Tip-a-Day by and for Evaluators

Category: Research, Technology and Development Evaluation

Hello! I am Yaw Agyeman, Program Manager at the Lawrence Berkeley National Laboratory. I am joined by my writing partner Kezia Dinelt, Presidential Management Fellow at the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE), to share how EERE developed and institutionalized a rigorous evaluation practice to quantify the impacts of EERE programs and investments.

Here’s the premise: Imagine you are brought into a federal agency with multiple energy programs, each of them with multiple portfolios encompassing investments in research, development, demonstration, and deployment (RDD&D) projects. Now you’re tasked with developing a rigorous evaluation process. What would you do?

We developed a holistic framework for program evaluation—a systemic approach that borrows from organizational psychology, institutional change, and principles of persuasion. Elements of the framework include

  1. Developing resources—guidance and tools for conducting and reviewing evaluation studies, including a guide on program evaluation management, a peer review method guide, a uniform method for evaluating realized impacts of EERE R&D programs, a non-RD&D evaluation method guide, and a quality assurance protocol to guide evaluation practice.
  2. Providing program evaluation training for organizational staff.
  3. Developing institutional links with the organization’s technology offices, budget office, communications team, stakeholder engagement team, project management office, and others.
  4. Developing data collection protocols for ongoing tracking of routine evaluation data.
  5. Developing an impact results repository and reporting tool for use across the organization.
  6. Partnering with the technology offices to plan and conduct evaluations involving third party experts, feed the results back into program improvement, and communicate findings to target stakeholders.

Lessons Learned: Seeding these pillars of evaluation practice within the federal organization has involved varying applications of the principles of organizational change, which scientists at the Lawrence Berkeley National Laboratory have distilled into a dynamic interaction among the “roles, rules, and tools” for behavioral change within an institution. Implementation has been nonlinear, advancing in fits and starts over more than eight years. But EERE’s evaluation team successfully built evaluation capacity within EERE by tapping into the vast pool of evaluation expertise across the nation to help frame and mold this institutional change.

Over time, the victories have piled up: (1) nearly one-third of all R&D portfolio investments across EERE have been evaluated, revealing spectacular returns on investment; (2) program staff are increasingly conversant in the language of evaluation, and there is an active and abiding interest in commissioning evaluations and using results; (3) the organization has established a set of core evaluation metrics and measures that are adaptable for use by most program investments; (4) the guides and tools developed for evaluation are being used; and (5) a growing culture of evaluation, supported by those guides and tools, is leading to innovations in evaluation practice, such as the “Framework for Evaluating R&D Impacts and Supply Chain Dynamics Early in a Product Life Cycle,” the first of its kind in the federal government. It can be done.

The American Evaluation Association is celebrating Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. All contributions to aea365 this week come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Julie Bronder Mason

My name is Julie Bronder Mason, Ph.D., and I am the Deputy Director of the Office of Science Policy, Planning, and Communications at the National Institute of Mental Health.  I have spent a fair number of years conducting, overseeing, advising, and presenting on program evaluations, and the tips I will share stem from a corpus of professional presentation coaching; AEA’s Potent Presentations Initiative (p2i); communications and leadership development courses; and practice, experience, and observation.

Lesson Learned: Give thought to your (often neglected) transitions!

Often, presenters place primary emphasis on slide content and design, and give little (or no) thought to transitions within and between those striking slides!  So how can you polish your evaluation presentation and provide a seamless flow?

Hot Tip # 1: Use the logic model as a unifying thread

Just as your logic model is the guiding light for your evaluation, consider using it as the cornerstone for your presentation.  Reveal the elements as you discuss them, fading out components you have already covered or have not yet reached.  Return to the logic model as a reminder throughout the talk.  For instance, “we just highlighted the input variables, and before diving into the specifics (fade to gray), let’s discuss program activities (emphasize) and how we will collect our data.”

Hot Tip # 2: Reflect on within-slide transitions

If you must use a bulleted list in your slides, think about the relationship between those list items. Why did you group them together in the first place?  Imagine a slide where you will be describing data you are collecting on biomedical research training program outcomes.  Your slide may have the following three bullets: early-stage investigators, co-authorship networks, and subsequent publications. You could tick those outcomes off in list fashion (e.g., “bread, butter, cheese”), or you could appeal to your audience with the linkage between those items (“aha, we’re making grilled cheese”)!  Rather than discuss each bullet separately, define how they interlock. “We’re collecting data on outcomes from our training program that include examining how many new early-stage investigators have emerged, because expansion of this population will be an indicator of workforce sustainability.  How well this workforce collaborates, as estimated by development of co-authorship networks, is key to understanding information dissemination…” and so forth!

Hot Tip # 3: Plan (and practice) between-slide transitions

Even more crucial than the within-slide transition is the between-slide transition.  Here again, a little planning can reap large gains.  In the notes section of your slides, jot down a sentence or two to connect your evaluation thoughts from one slide to the next.  Your goal is to facilitate an introduction to the next slide and speak to it before advancing.  Resist the urge to click ahead and pause dazed, wondering how you landed on that next slide.  And practice those transitions!  Be familiar enough with the transition material so you can convey it in a variety of ways without appearing rehearsed.

Tips like the above three are simple to implement and can showcase you as a seasoned presenter!


 


Hello!  We are Clara Pelfrey, Translational Research Evaluation TIG Chair and evaluator for the Clinical and Translation Science Collaborative (CTSC) at Case Western Reserve University, and Johnine Byrne, a graphic recorder and owner of See Your Words.  We’d like to introduce you to graphic recording (GR), a valuable tool for evaluators that we’ve used in research, technology and collaboration settings.
What if you could capture a meeting’s ideas and energy and use them to evaluate a program, to generate qualitative data, or to motivate future change? You can, because a picture is worth 60,000 words. According to Business 2 Community author Rita Pant, 90 percent of the information sent to the brain is visual, and 93 percent of all human communication is visual. Why not harness all that power for evaluation?

Graphic recording (GR) is the visual capture of people’s ideas and expressions – in words and drawings – and can be a catalyst for generating new ideas, aiding comprehension, or helping people see emerging patterns in group interactions. At a CTSC retreat, we asked attendees “What are we going to be known for?” and used the GR to develop evaluation questions and as a future vision of our research collaborative.

Hot Tip:

As an evaluator, how can graphic recording (GR) help you?

  • Assessing stakeholder program readiness for evaluation, as recommended by Michael Quinn Patton in the Essentials of Utilization-Focused Evaluation.
  • Brainstorming. People see their ideas take shape, increasing participation in the meeting and reducing distractions. The GR reminds them of what transpired and motivates them to take action.
  • Creating timelines. Demonstrates where the group came from and where they are heading. It reminds them of future goals and how they fit into making that vision a reality.
  • Capturing a dynamic talk. Can’t remember what a talk was about? You will if it’s captured in drawings!
  • Promoting your organization. The GR image is used in social media, advertising and newsletters.
  • World Café. Large group meetings use GR to engage everyone in a dialog and all are encouraged to draw.

Lessons Learned:

Examples of how graphic recording (GR) can be useful in evaluating research, technology and collaboration:

  • A medical device manufacturer used GR at an all-hands meeting to work through a major glitch in their manufacturing process. They brainstormed solutions and ways to get past roadblocks.
  • A world-renowned medical research center used GR as a tool to promote communication between research groups working in the same institution. Once attendees viewed the GR they could see the possibilities, promoting the creation of new collaborations.
  • A researcher used GR in her focus groups. Participants saw what others had said and they wanted to be heard too, increasing participation and promoting emergence of different viewpoints.

Rad Resources:


 

Christina Freyman

Hi, I am Christina Freyman, a Director in the Center for Innovation Strategy and Policy at SRI International. Today I am writing about non-survey sources of data for evaluation – specifically related to the evaluation of research programs. Evaluations of programs inevitably suffer from biases related to missing or unreliable data, particularly when the goal is to measure a program’s impact on human capital. Assessing human capital often relies on survey-based research, which is prone to a number of biases. In addition, it is extremely costly to identify and survey individuals who did not participate in the program being evaluated (non-participants) to obtain a comparison group. You can overcome these challenges through the novel application of a text-analytic, non-survey-based approach.

Hot Tip: Apply text analytics to measure skills from participant and non-participant resumes.

Resumes are freely posted on resume sites and allow researchers to create robust comparison groups. Structured resumes can be parsed and ingested into a MongoDB database; skills can then be extracted from the resumes against a curated skill list and manually coded as related to the topic of the evaluation. For example, we have algorithmically tagged resumes with energy efficiency skills. Using the listed skills, you can identify energy efficiency jobs at scale. A minimal sketch of this kind of pipeline appears below.
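The sketch that follows is illustrative only, not SRI’s actual code: the skill list, field names, and file layout are hypothetical, and it assumes plain-text resumes and a local MongoDB instance reachable through pymongo.

```python
# Illustrative resume-tagging pipeline; skill list, field names, and paths are hypothetical.
import re
from pathlib import Path

from pymongo import MongoClient

SKILL_LIST = ["hvac", "energy auditing", "led retrofit", "building automation"]
ENERGY_EFFICIENCY_SKILLS = {"hvac", "energy auditing", "led retrofit"}  # manually coded subset


def extract_skills(resume_text: str) -> list:
    """Return skills from the curated list that appear in the resume text."""
    text = resume_text.lower()
    return [s for s in SKILL_LIST if re.search(r"\b" + re.escape(s) + r"\b", text)]


def ingest_resumes(resume_dir: str, mongo_uri: str = "mongodb://localhost:27017") -> None:
    """Parse plain-text resumes and store them, with tagged skills, in MongoDB."""
    collection = MongoClient(mongo_uri)["evaluation"]["resumes"]
    for path in Path(resume_dir).glob("*.txt"):
        skills = extract_skills(path.read_text(errors="ignore"))
        collection.insert_one({
            "source_file": path.name,
            "skills": skills,
            # Flag resumes whose extracted skills were coded as energy-efficiency related.
            "has_energy_efficiency_skills": any(s in ENERGY_EFFICIENCY_SKILLS for s in skills),
        })


if __name__ == "__main__":
    ingest_resumes("resumes/")
```

In practice the skill list would come from a structured taxonomy and the matching would be more forgiving than exact phrase search, but the shape of the pipeline – parse, extract, code, store – is the same.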

For the comparison groups, you can use an automated webcrawler to obtain resumes (with the permission of the website). For example, we have investigated three analysis groups in a recent evaluation: a random sample of participant resumes (N=109), a comparison cohort of non-participant resumes drawn from individuals attending the same school programs during the same time period (N=301), and a comparison group of energy efficiency job holders (N=867).  Using algorithms, we were then able to quickly and efficiently determine the following:

  • Participants had more energy efficiency skills than comparison groups.
  • Participants entered an energy efficiency job faster than the energy comparison group.
  • Participants spent a larger portion of their subsequent careers in energy efficiency jobs than the energy comparison group did.

Lessons Learned:

Evaluations of education and skill-building programs are frequently conducted using surveys, which have significant shortcomings when attempting to understand skills and career trajectories. Using a readily available data source that provides machine-readable resumes, we developed and employed a methodology that produced a more reliable and detailed understanding of the impact of a program on the skills and careers of energy-focused professionals.


 


My name is Di Cross from Clarivate Analytics. We conduct evaluations of scientific research funded by government agencies, non-profits, academic institutions or industry.

I cringe when I hear mention of ‘unbiased analysis’. What an oversimplification to state that an analysis (or evaluation) is unbiased! Everyone carries their own biases. Some exist as part of our brain’s internal wiring to enable us to go about our day without being paralyzed by the tremendous amount of information that our sensory systems constantly receive.

But what specifically do I mean by bias?

In statistics, bias in an estimator is the difference between the expected value of the estimator and the population parameter it is intended to estimate. For example, the arithmetic average of random samples taken from a normal distribution is an unbiased estimator of the population average. As even Wikipedia points out, ‘bias’ in statistics does not carry the same negative connotation it has in common English. However, this holds only in the absence of systematic errors.
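In symbols (a standard textbook definition, not specific to any one source), for an estimator θ̂ of a parameter θ:

```latex
\operatorname{Bias}(\hat{\theta}) = \operatorname{E}[\hat{\theta}] - \theta ,
\qquad \text{and } \hat{\theta} \text{ is unbiased when } \operatorname{E}[\hat{\theta}] = \theta .
```

The sample mean is the classic example: E[X̄] = μ, so its bias is zero.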

Systematic errors are more akin to the common English definition of bias: ‘a bent or tendency’, ‘an inclination of temperament or outlook; especially…a personal and sometimes unreasoned judgment, prejudice; an instance of such prejudice.’

So what do we do?

Hot Tip #1: Don’t panic!

Do not fool yourself into thinking that you can design and conduct evaluations which are 100% free of bias. Accept that there will be bias in some element of your evaluation. But of course, do your best to minimize bias where you can.

Hot Tip #2: Develop a vocabulary about bias

There are many sources of bias. Students in epidemiology, the discipline from which I approach evaluation, study selection bias, measurement error including differential and non-differential misclassification, confounding, and generalizability. There are also discussions of bias specific to evaluation.

Hot Tip #3: Adjust your design where possible

After identifying potential sources of bias in your study design, address them as early in your evaluation as possible – preferably during the design phase. Alternatively, addressing bias might also mean performing analysis differently, or skipping to Hot Tip #4.

(Note: There is something to be said for accepting a biased estimator – or, dare I say, a biased study design – over one that is unbiased. This might be because the unbiased estimator is vastly more expensive than a biased estimator that isn’t too far off the mark. Or it might be for reasons of risk: Wouldn’t you rather consistently underestimate the time it takes to bake a batch of cookies, rather than be right on average, but risk having to throw away a charred batch half of the time?)
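To make the cookie example concrete, here is a toy simulation (my illustration, not the author’s analysis). An unbiased but noisy estimate of baking time overshoots about half the time; a slightly low but precise estimate almost never does.

```python
# Toy comparison of an unbiased, noisy estimator vs. a slightly biased, precise one.
import random

random.seed(0)
TRUE_BAKE_MINUTES = 12.0
TRIALS = 100_000

# Cookies char whenever the estimated bake time exceeds the true bake time.
unbiased_burns = sum(random.gauss(TRUE_BAKE_MINUTES, 3.0) > TRUE_BAKE_MINUTES for _ in range(TRIALS))
biased_burns = sum(random.gauss(TRUE_BAKE_MINUTES - 1.0, 0.5) > TRUE_BAKE_MINUTES for _ in range(TRIALS))

print(f"Unbiased, noisy estimate: {unbiased_burns / TRIALS:.0%} of batches charred")    # roughly 50%
print(f"Biased low, precise estimate: {biased_burns / TRIALS:.0%} of batches charred")  # roughly 2%
```

The biased estimator is wrong on average, but given the asymmetric cost of a charred batch, it is often the better practical choice.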

Hot Tip #4:  Be transparent

Where it is not possible to address bias, describe it and acknowledge that it exists. Take it into consideration in your interpretation. As a prior AEA blog writer put it, ‘out’ yourself. Be forthcoming about sources of bias and communicate their effect on your evaluation to your audience.


My name is Sara Dodson, and I work in the Office of Science Policy at the National Institutes of Health (NIH), where I have led the development of a series of case studies on biomedical innovations. Today I’ll kick off Research, Technology and Development TIG week by discussing the challenging work of tracing public research investments to societal impact, framed by the question most germane to NIH: “How do we – a federal agency that supports scientific research on the basic side of the R&D spectrum – track progress towards our mission of improving health?” The complexities of this question are rooted in several factors: the long, variable timelines of translating research into practice, the unpredictable and nonlinear nature of scientific advancement, and the intricate ecosystem of health science and practice (the multitude of research funders and policymakers, academic and industry scientists, product developers, regulators, health practitioners, and a receptive public), just to name a few.

My team started out with two core objectives: 1) develop a systematic approach to tracing the contribution of NIH and other key actors to health interventions, and 2) identify a rich tapestry of data sources that provide a picture of biomedical innovation and its attendant impacts.

Hot Tip:

To make this doable, we broke off bite-sized pieces, conducting case studies on specific medical interventions.  We chose an existing intervention in health practice and performed both a backward trace and forward trace.  Moving backward from the intervention, we searched for and selected pivotal research milestones, reaching back into basic research findings that set the stage for progress and documenting the role of NIH and others along the way.  Moving forward, we looked for evidence of the intervention’s influence on health, knowledge, and other societal impacts.

[Table image: types of evidence searched for and data sources used, by information category]

Rad Resources:

Dozens of data sources proved useful.  For each of the information categories we examined, the table above illustrates the types of evidence that we searched for and some of the data sources that we utilized.[1]

Lessons Learned:

Data needs – More comprehensive and structured datasets (e.g., data related to FDA-approved drugs, biologics, and devices) with powerful search and export capabilities are needed.  Even further, wide-scale efforts to mine and structure citations in various sources – like FDA approval packages, patents, and clinical guidelines – would be very useful.

Tool needs – These studies are data- and time-intensive, requiring a couple of months of full-time effort to conduct.  Sophisticated data aggregators could help semi-automate the process of identifying “milestones” and linking medical interventions to changes in population-level health outcomes and other societal impacts.

Uses – There are many potential uses for these studies, including science communication and revealing patterns of successful research-to-practice pathways and the influence of federal funding and policies.  We have published a handful of case studies as Our Stories on the Impact of NIH Research website – I invite you to take a look!

[1] Note that only open access data sources are included here. We also made use of proprietary data sources and databases available to NIH staff.



We are Shannon L. Griswold, Ph.D., a scientific research evaluator and member of AEA’s Research Technology and Development TIG, Alexandra Medina-Borja, Ph.D., Associate Professor of Industrial Engineering at University of Puerto Rico-Mayaguez, and Kostas Triantis, Ph.D., Professor of Systems Engineering at Virginia Tech. We are thinking about new ways to envision and evaluate impacts from discovery-based scientific research. Tracing dollars spent on funding research in universities to societal impacts is very difficult due to the long time lag between experimentation and commercialization, and the serendipitous nature of discovery.

Lesson Learned: Even though we can’t predict every outcome of scientific research, we can apply a general framework that allows us to envision the complex system of scientific discovery and identify areas of inquiry that could lead to major breakthroughs.

Hot Tip: Gather your research community and ask them to think backwards from societal needs (e.g., in transportation research this might be a solution for traffic congestion). This can be HARD for fundamental researchers; they are accustomed to letting curiosity drive their research questions. From societal needs, ask them to map several enabling technologies that could meet that need. Enabling technologies should be things that could solve that need but that don’t exist yet (e.g., teleportation). Finally, from enabling technologies, ask your research community to map out knowledge gaps. These are the things that we don’t know yet, which prevent us from developing enabling technologies (e.g., how do you convert all the mass in a human body into energy without blowing things up? How do you reassemble that energy at the destination into a human body?). It can be helpful to frame knowledge gaps as questions.

Hot Tip: Use societal needs, enabling technologies, and knowledge gaps to perform a content analysis of your research portfolio. How many of the topics are already funded? How many topics are not yet represented in the portfolio? This analysis should be performed in the context of a portfolio framework, which may help you envision the scope of your funding program’s discipline and relation to other funding streams.

Rad Resource: When mapping societal needs, enabling technologies, and knowledge gaps, it can be helpful to place them in a hierarchical framework to track their relationships. In this diagram, dotted lines show the direction in which the logic framework is generated, working backwards from societal needs. Solid arrows show the flow of scientific knowledge, from discoveries (knowledge gaps) to technologies that meet societal needs.

[Figure: generic logic tree diagram]
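For readers who like to see such frameworks in code, here is a minimal sketch of one way to record the hierarchy and check portfolio coverage. It is an illustration under stated assumptions, not the authors’ tool, and the example entries are hypothetical.

```python
# Minimal, hypothetical representation of the societal need -> enabling technology ->
# knowledge gap hierarchy, with a simple portfolio-coverage tally.
from dataclasses import dataclass, field


@dataclass
class KnowledgeGap:
    question: str          # framed as a question, per the tip above
    funded: bool = False   # is this topic already represented in the portfolio?


@dataclass
class EnablingTechnology:
    name: str
    gaps: list = field(default_factory=list)


@dataclass
class SocietalNeed:
    name: str
    technologies: list = field(default_factory=list)


need = SocietalNeed("Relieve traffic congestion", [
    EnablingTechnology("Teleportation", [
        KnowledgeGap("How do we convert the mass of a body into energy safely?", funded=True),
        KnowledgeGap("How do we reassemble that energy into a body at the destination?"),
    ]),
])

gaps = [gap for tech in need.technologies for gap in tech.gaps]
print(f"{sum(gap.funded for gap in gaps)} of {len(gaps)} knowledge gaps are already funded")
```

The same structure supports the content analysis in the previous tip: tally which knowledge gaps and enabling technologies already appear in the funded portfolio and which remain unrepresented.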

Rad Resource: The flow of knowledge and information in the scientific process is rarely linear. It is probably more accurately represented as a “ripple effect”. We can predict some discoveries and technologies (darker polygons), but others are emergent, and knowledge flows in all directions.


Greetings from Catholic Relief Services (CRS)! We, Suzanne Andrews and Shaun Ferris, from the Baltimore-based Agriculture and Livelihoods Program, presented at the American Evaluation Association’s Annual Conference in Washington DC on Farmbook, a suite of online/offline tools that helps us better build capacity, gather data, and develop business plans with smallholder farmers.


Photo by Suzanne Andrews (Catholic Relief Services)

Lesson Learned: A Challenge: One of the key problems we face in working with smallholder farmers is understanding who our clients are, where they live, their cropping systems, their costs of production and the market opportunities near their communities. There are very few tools to help field agents gather these types of monitoring and evaluation data in a systematic way and few means of aggregating and sharing this information.

A Product: CRS has been working to develop tools that help field agents develop farmer group business plans, gather data on production and profitability levels, and share this information with farmers and local project managers, and globally through a digital data platform.

Rad Resources: We manage, analyze and share our data through cloud-based data management systems that allow global users from CRS and other organizations to view our data and create customized reports.  We are also working with NetHope’s cloud services, creating webinars to share ideas, get feedback, and link with potential users. We have held several webinars about our e-learning platforms and the business planner/profitability tool. We also share this information through the ICT4D conferences that we hold every year in Africa.

Lessons Learned: Field agents who tested the Farmbook business planner and profitability calculator performed much better when they had first enrolled in the e-learning course in marketing and gross margin analysis. We have developed comprehensive training curricula for smallholder capacity building to support the farm business plan development and data gathering process.

Developing the Farmbook suite required a team of people with diverse expertise – agriculture advisors, software architects, programmers, instructional designers, subject matter specialists, editors, artists, and innovative field managers and field agents – to design, develop and test the beta versions of Farmbook.  Holding that team together through the build, test and deploy phases has been critical to getting to the starting point. We are still working on the business models!

Get Involved: If you would like to test drive our learning tools, the Farmbook business planner, or the Map and Track service delivery audit, let us know!  Contact Suzanne.Andrews@crs.org to request a training version of the software, allowing you to assess the profitability of your farm and your farmers!

The American Evaluation Association is celebrating Information and Communication Technology for Development (ICT4D) for Monitoring, Evaluation, Accountability and Learning (MEAL) week. All contributions to aea365 this week come from members who work in ICT4D for MEAL. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, my name is Marianna Hensley, Program Quality Manager for Health with Catholic Relief Services (CRS) in India. I currently support the Reducing Maternal and Newborn Deaths (ReMiND) project that CRS implements in partnership with Dimagi, Inc. and Vatsalya.

The ReMiND project works with government community health workers (CHW) to improve the frequency and quality of their home visits to women and children. CHWs use basic mobile phones operating Dimagi’s open-source CommCare software, which equips them with job aids to support client assessment, counseling, and early identification, treatment and/or rapid referral of complications. With the project’s use of CommCare as a case management tool and job aid for CHWs, leveraging information and communication technologies (ICT) for project monitoring and evaluation (M&E) with the same software platform was an obvious choice for ReMiND. All routine project monitoring is done through CommCare operated on basic mobile phones while data collection for the project’s baseline household survey was done using CommCare on tablets.

Lessons Learned: For all data nerds out there, imagine the excitement of realizing that ICT-enabled M&E means you get all those numbers now! Beware the lure of real-time data with ICT for M&E.


Photo by Marianna Hensley (Catholic Relief Services)

With the use of ICT for data collection in either routine monitoring or evaluation comes the strong temptation to ask every question you can think of—just because it’s so easy to capture responses with fewer worries about the delays or errors typically associated with manual data entry following paper-based collection. The risks are multiple: 1) you find yourself left with more data than you can or feasibly will analyze and use; and 2) you hazard user (data collector) and respondent fatigue from a questionnaire that delves too deeply into non-essential information.

Faced with the lure of real-time data from ICT, M&E practitioners must remember more than ever to focus on the need-to-know information that supports project or evaluation decision-making and objectives.

 

Hot Tips:

  • Make sure to choose an ICT device that fits your needs in terms of screen size and resolution. Long questions or lists of select options are easier to deal with on a larger screen than on a smaller-screened device that requires scrolling.
  • Don’t forget to assess the battery life of your device as part of field testing an ICT tool. And have a plan that includes resources such as solar or car chargers to ensure devices are adequately charged throughout data collection or monitoring.

Rad Resources: The ReMiND project’s monitoring tool application and baseline survey application are available for free download on CommCare Exchange.

ReMiND is featured as a case study and an example of M&E in mobile health programming in the Global Health e-Learning Center’s new mHealth Basics course.



Hi, my name is Shenkut Ayele, Early Warning Assessment and Response Manager with Catholic Relief Services (CRS) for the Joint Emergency Operation (JEOP) in Ethiopia. JEOP is a USAID-funded emergency food assistance program that, over two years, is providing food aid to almost 1 million people. The program operates across Ethiopia and is a partnership among many agencies and the government.

Until last August, I faced serious challenges: data were slow to arrive and often of poor quality. As a result, reports were delayed and decision-making was hampered, with serious consequences for JEOP’s ability to respond effectively. However, since August 2012, JEOP has been using an innovative solution that is strengthening our ‘Participatory Early Warning and Response System’. We are using DataWinners, an SMS-based solution implemented in partnership with Human Network International. Registered individuals across 79 districts collect and upload data via SMS each week onto a web-based database. I am able to use these data in real time to inform decision makers. The two graphics below show how the system works and how data collection and information flow.

[Graphics: how the system works; data collection and information flow]

Lessons Learned: After implementing our system for one year, we have learned that:

  • Vulnerable communities should be viewed as both sources and recipients of early warning information.
  • Adoption of our new SMS-based system has empowered local officials who are now using the reports to undertake better estimates of the number of individuals who might be affected by a disaster.
  • Local officials are better able to represent the needs of vulnerable communities in discussions at higher levels of government.
  • Local officials and others in JEOP have found that the better-quality data have improved their ability to target the most vulnerable communities.
  • The system has the potential to accommodate other innovative uses, and government officials have expressed interest in adopting the SMS system more widely.

Hot Tip: An effective SMS-based system provides a strong basis for a participatory early warning and response system because it enhances the likelihood that any data generated will be used to support better decision-making among different users.


 

