
Search Results for: data analysis

Staff Making Meaning from Evaluation Data by Lenka Berkowitz and Elena “Noon” Kuo

Greetings! I’m Paula Richardson from Salanga, an organization dedicated to reimagining monitoring and evaluation practices for community ownership, gender equality, and transformative change. Through our work with 50+ communities worldwide, we have gained insights and identified key takeaways from implementing a Community-Led Monitoring, Evaluation, Accountability and Learning (CoLMEAL) approach.

Walking Our Talk: New Insights – Emerging A Collaborative Diagnostic Tool and Data Visualization by Sharon Twitty, Natalie Lenhart, and Paul St Roseman

In the spring of 2018, Sharon, Natalie, and I (the ARCHES Evaluation Team) began to examine the insights gained from juxtaposing the Self-Assessment Survey with the Outcome Indicators of the ARCHES Logic Model. The Self-Assessment Survey was initially created as a reflective instrument for clients to independently gauge their progress. The Outcome Indicators identified through the comparative analysis, however, were diagnostic in nature: they were intended to inform how ARCHES could tailor its support for a collaborative’s development via its Just in Time Service Model.

Walking Our Talk: Using the Data – Emerging a Data Informed Evaluation Design through Peer Editing by Sharon Twitty, Natalie Lenhart, and Paul St Roseman

The Evaluation Team’s work in 2017 also intersected with the development of an evaluation design. In a previous evaluation effort, ARCHES had developed a self-assessment survey tool that was used to document the development of intersegmental collaboratives. From this tool, a list of indicators was developed and compared against the outcome indicators listed in the logic model. This process resulted in a refined set of outcome indicators that served as the foundation for developing an evaluation design for ARCHES. As the lead evaluator, I developed an initial draft of the evaluation design and presented it to Sharon and Natalie in January of 2018. They were tasked with peer editing the document, which would be finalized and approved by March 2018.

ENRI Week: Fostering learning and capacity building in data and evaluation through campus-community partnerships by Dan Turner

I’m Dan Turner, Ph.D., and I serve as the Assistant Director of the Community-Engaged Data and Evaluation Collaborative (CEDEC) at the Swearer Center for Public Service at Brown University in Providence, Rhode Island. A recently launched initiative, CEDEC connects campus, nonprofit, and public agency partners, leveraging Brown’s resources to advance data and evaluation capacity in …


No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first blog post in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations. Today, we’ll be following up with some tips on how to analyze your surveys (which, of course, you’ve made sure are not crappy!). Stay tuned for our final post of this series, on how to report your findings to different audiences.
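
As a small illustration of the kind of tabulation this sort of survey analysis involves, here is a minimal sketch in Python with pandas. The file name (survey_responses.csv) and column names (q1_program_helped_me, site) are placeholders, not details from the Emory team’s post, and their actual examples may differ.

import pandas as pd

# Hypothetical survey export: one row per respondent, one column per question.
df = pd.read_csv("survey_responses.csv")

# Keep the Likert scale in its natural order rather than sorting by frequency.
likert_order = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

counts = (
    df["q1_program_helped_me"]      # hypothetical item name
    .value_counts()
    .reindex(likert_order, fill_value=0)
)
percentages = (counts / counts.sum() * 100).round(1)

summary = pd.DataFrame({"n": counts, "percent": percentages})
print(summary)

# Cross-tabulate the item against a respondent characteristic (e.g., site),
# expressed as row proportions.
print(pd.crosstab(df["site"], df["q1_program_helped_me"], normalize="index").round(2))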

Global Evaluation Initiative (GEI) Week: Lessons Learned in Using GEI’s “Monitoring and Evaluation Systems Analysis” (MESA) Tool by Heather Bryant

Hi, I am Heather Bryant, a member of the Global Evaluation Initiative (GEI) Global Team. The GEI supports developing countries in strengthening their monitoring and evaluation (M&E) systems to help governments gather and use evidence that improves the lives of their citizens. The GEI believes that to effectively support countries in this process, and to provide tailored, context-specific advisory services, it is first necessary to understand the existing systems that affect M&E in each country.

To assist in this effort, the GEI Global Team, in collaboration with the global network of Centers for Learning on Evaluation and Results (CLEAR), developed the Monitoring and Evaluation Systems Analysis (MESA) diagnostic tool. MESA guides stakeholders (e.g., government entities, evaluation professionals, civil society) in gathering, structuring, and analyzing information on the current capacity of their country’s M&E ecosystem. Since the tool’s launch in early 2022, the GEI Global Team and colleagues in the CLEAR network have been using it to help identify what is working well and what needs to be improved, and to inform capacity-development strategies meant to strengthen the systems that enable M&E to flourish. We have learned a few things along the way, both in the development and in the use of the tool.

Tech TIG Week: Supporting Data Analytics with Google BigQuery by Wai Lam Wong

Hi everyone, my name is Wai Lam Wong and I’m an information systems analyst at a community health organization in California. My academic background is in computational social sciences and I’m always looking for ways to leverage technology for social impact. This post is about using Google BigQuery to enable a small nonprofit’s project team …
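
As a rough sketch of what querying BigQuery from a small project team can look like, the snippet below uses the google-cloud-bigquery Python client to run a simple aggregate query. The project, dataset, and table names (my-nonprofit-project, programs.client_visits) and the column names are placeholders, not details from Wai Lam’s post.

from google.cloud import bigquery

# Authenticates via Application Default Credentials; the project ID is a placeholder.
client = bigquery.Client(project="my-nonprofit-project")

query = """
    SELECT service_site, COUNT(*) AS visits
    FROM `my-nonprofit-project.programs.client_visits`
    GROUP BY service_site
    ORDER BY visits DESC
"""

# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row.service_site, row.visits)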
