Walking Our Talk: Charting the Course of Data Use – Collaborative Data Analysis by Sharon Twitty, Natalie Lenhart, and Paul St Roseman

In 2019, Sharon, Natalie, and I (the ARCHES Evaluation Team) held a series of meetings focused on developing and implementing the ARCHES Collaborative Diagnostic Tool (ACDT). We decided to use the tool in a multi-year pilot with the Tulare-Kings College and Career Collaborative (TKCCC), with the goals of testing and refining its language, validating and calibrating outcome indicators, clarifying roles for collaboration, developing data visualizations, and establishing protocols to support the collaborative analysis of data.

Walking Our Talk: New Insights – Emerging A Collaborative Diagnostic Tool and Data Visualization by Sharon Twitty, Natalie Lenhart, and Paul St Roseman

In the spring of 2018, Sharon, Natalie, and I (the ARCHES Evaluation Team) began to examine the insights gained from juxtaposing the Self-Assessment Survey with the Outcome Indicators of the ARCHES Logic Model. The Self-Assessment Survey was initially created as a reflective instrument for clients to independently gauge their progress. The Outcome Indicators identified through the comparative analysis, however, were diagnostic in nature: they were intended to inform how ARCHES could tailor its support for a collaborative’s development through its Just in Time Service Model.

Walking Our Talk: Using the Data – Emerging a Data Informed Evaluation Design through Peer Editing by Sharon Twitty, Natalie Lenhart, and Paul St Roseman

The Evaluation Team’s work in 2017 also intersected with the development of an evaluation design. In a previous evaluation effort, ARCHES had developed a self-assessment survey tool to document the development of intersegmental collaboratives. From this tool, a list of indicators was developed and compared against the outcome indicators listed in the logic model. This process resulted in a refined set of outcome indicators that served as the foundation for an evaluation design for ARCHES. As the lead evaluator, I developed an initial draft of the evaluation design and presented it to Sharon and Natalie in January 2018. They were tasked with peer editing the document, which would be finalized and approved by March 2018.

Walking Our Talk: Laying the Foundation – The Rise of the Logic Model by Sharon Twitty, Natalie Lenhart, and Paul St Roseman

In 2005, the Alliance for Regional Collaboration to Heighten Educational Success (ARCHES) was established in California as a statewide voluntary confederation of regional collaboratives. Over the years, it provided technical assistance to alliance members. In 2016, ARCHES decided to refocus its services. Sharon Twitty (Executive Director, ARCHES) and I have worked to respond to this new charge by helping to develop evaluation products and systems that support the work of the Regional Intersegmental Collaboratives.

Every Child Everywhere: Assessing USAID’s Multi-Sectoral Nutrition Strategy by Caitlin Showalter

Hi folks, my name is Caitlin Showalter—I am a Technical Advisor in Monitoring, Evaluation, and Learning with the Palladium Group and based in eastern North Carolina. This was my first year attending the AEA conference, and what an impressive gathering of minds, stories, and hopes for the future of evaluation! I presented as part of a group of panelists to discuss evaluation in global health programs, representing the USAID-funded Data for Impact (D4I) project and our work on the Second Periodic Assessment of USAID’s Multi-Sectoral Nutrition Strategy (MSNS).

Evaluating Research for Development by Svetlana Negroustoueva and John Gargani

My name is Svetlana Negroustoueva, and I lead the Evaluation Function at CGIAR, an agricultural research-for-development organization.

And my name is John Gargani. I’m a former AEA President and a member of CGIAR’s Evaluation Reference Group.

What is Research for Development?

Research for development (R4D) is scientific research undertaken to improve people’s lives and the environment. It often produces innovations with the potential to create transformational impacts. Evaluating R4D must address two domains: the quality of science and development impacts. We describe how this is done at CGIAR, which recently published a new framework for evaluating R4D.

Stories of Change from Refugees and People Living with Disabilities that Changed Programming by Soledad Muniz

Hi, my name is Soledad Muniz. I am the Director of Programmes at InsightShare, a not-for-profit based in the UK championing the use of Participatory Video combined with the Most Significant Change technique (PV MSC) for participatory monitoring and evaluation. In 23 years, we have worked in over 70 countries on more than 500 projects worldwide.

EvalSDGs Week: Institutionalization of Evaluation in Small Island Developing States: Case Study Mauritius by Rooba Moorghen

I am Dr (Mrs) Rooba Moorghen, a former Hubert Humphrey Fellow and former Permanent Secretary. As a government practitioner in Mauritius, I have acquired more than forty-five years of experience in public sector management, public administration, and public policy. In the context of the 2023 American Evaluation Association (AEA) Conference valuing stories, I find it befitting …

EvalSDGs Week: Charting the Course: Nigeria’s Journey in Establishing a National Monitoring and Evaluation Policy – The Way Forward by Zakariyau Lawal and Denis Jobin

We are Dr. Zakariyau Lawal, former Director of the M&E department at the National Planning Commission, and Denis Jobin, current Senior Evaluation Specialist at the Evaluation Office of UNICEF in New York. We are policy experts and development partners with experience in developing and implementing monitoring and evaluation frameworks in Nigeria and Canada. In this …

EvalSDGs Week: The Power of Networking and Professional Growth in Evaluation by Eddah Kanini

Hello, reader. My name is Eddah Kanini, and I am a monitoring, evaluation, and gender specialist and trainer. I want to share how joining a professional network like the EvalSDGs guidance group of EvalPartners has contributed to my transformative personal and professional growth. I joined the network as a young and fairly shy person but …
