Hello! I’m Dawn Pauls, a program evaluator living in Saint Paul, MN. This blog post is adapted from my final paper in a recent course in International Development Program Evaluation at the University of Minnesota. Designing innovative evaluation methods to determine the extent to which the world’s most vulnerable children are assured their human rights could improve children’s quality of life and help ensure a healthier, more capable upcoming generation. The Child Rights in Practice Accountability Model (CRIPA) gives evaluation practitioners a useful framework, alongside recent trends in international development evaluation methods, for determining the extent to which children’s rights are upheld.
Indicators that treat the child’s lived experience as the central focal point, together with the interventions and mechanisms that affect the child directly and the mandates that govern those interventions, provide a meaningful evaluation framework for stakeholders who affect children’s lives. A child rights-based approach to M&E (such as the CRIPA model) differs from a needs-based approach in its foundational understanding of society’s ‘legal and moral accountability as duty-bearers towards children as rights-holders’. A child rights approach therefore focuses on putting children at the center, understanding the root structural causes of the ‘nonrealization’ of rights, identifying gaps, and analyzing who bears the responsibility to uphold those rights.
One significant trend in the international development evaluation field is to engage in complexity thinking and complex adaptive systems (CAS). The CRIPA model promotes using the Social Analysis System approach to collaborative inquiry, a complex adaptive systems approach to evaluation. This approach provides methods for both duty bearers and children to “define, design and implement strategies that can support strengthening interconnections between the three domains” (Blanchet-Cohen et al., 2009, p. 16). Child rights-based evaluation has shifted from linear models focused on child survival to a more nuanced understanding of the complex array of factors that influence child well-being, and that interplay is more accurately captured using complex adaptive systems thinking.
Considering complex adaptive systems concepts (Williams, 2007) in relation to the CRIPA model, one can see how mapping the interrelationships between the three domains (the child, the intervention, and the mandate) would provide a deep, highly nuanced understanding of the situation. Exploring the differing perspectives of the various actors involved in an intervention at all three levels, starting with the centrality of the child’s lived experience and their own perspective on their well-being, would lead to more meaningful indicators. Finally, being aware of where boundaries are drawn and whose perspectives are given more weight, especially in contexts with vast differences in understanding of child rights principles, is key to evaluating programs on the basis of child rights accountability.
In evaluating child-focused programming, a developmental evaluation (DE) approach and the CRIPA model are also compatible. The DE approach is complexity-sensitive and answers the call in international development evaluation for disaggregated, more context-specific data. The ‘levels’ of developmental evaluation inquiry (Patton, 2011, p. 120) closely align with the CRIPA model: DE begins with a focus on individuals at the center of the evaluation inquiry circle, places the organizational system in the middle circle, and puts societal values in the outer circle. These spheres mirror the CRIPA model, which places the child’s lived experience at the center of the evaluand, the intervention level in the middle ‘circle’, and the mandate (the legal, societal, or moral obligations that govern and guide those interventions) in the outer circle.
For those of you who work in child-focused organizations, I encourage you to actively consider how your organization, as a duty-bearer, can evaluate how well it upholds children’s rights.
I referred to Patton’s DE approach above. To read more about it, please see: Patton, Michael Q. (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. The Guilford Press.
The American Evaluation Association is hosting International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to AEA365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.