Hi! My name is Natalia Kosheleva. I’m a consultant at Process Consulting Company, Moscow, Russia. Here I would like to share my ideas about measuring the extent of program intervention into targeted systems.
I base my work on two systems thinking principles. The Hierarchy principle says that any system is an element of some larger system, and any element is a system itself. The Multiple Description principle says that no single model can give an exhaustive description of a system, so several models should be developed.
I start by describing a program as a system that includes one or more sources of intervention and one or more target objects. For example, an NGO (source) works with schools (target objects) to promote student community service.
Then I go through several levels of hierarchy to get a detailed description of targeted system(s), always getting to the level of individual “functions”. For example, a school has administrators, teachers and students, and students are organized into classes. At this stage it is helpful to do a lot of sketches.
Hot Tip: The effect of an intervention depends upon what type of elements it affects and what share of elements of a certain type gets affected within a target system, so it is useful to track both. For example, a program might have trained 20 8th-grade students from two schools, 10 from each. But one school has just one class of 18 students, so the intervention affected 100% of target classes and 56% of students. The other school has three classes of 30 students each, and of its 10 trained students, 2 were from one class and 8 from another. So the extent of program intervention there is 67% of classes, and 7%, 27%, and 0% of the students in those three classes.
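The arithmetic in this tip can be sketched in a few lines of code. This is a minimal illustration using the example numbers above; the function name and structure are my own, not part of any evaluation toolkit.

```python
def coverage(class_sizes, trained_per_class):
    """Share of classes reached, and the per-class share of trained students."""
    classes_reached = sum(1 for t in trained_per_class if t > 0)
    class_share = classes_reached / len(class_sizes)
    student_shares = [t / n for t, n in zip(trained_per_class, class_sizes)]
    return class_share, student_shares

# School 1: a single class of 18 students, 10 of them trained
s1_class, s1_students = coverage([18], [10])
# School 2: three classes of 30 students; 2, 8, and 0 trained
s2_class, s2_students = coverage([30, 30, 30], [2, 8, 0])

print(f"School 1: {s1_class:.0%} of classes; "
      + ", ".join(f"{s:.0%}" for s in s1_students) + " of students")
print(f"School 2: {s2_class:.0%} of classes; "
      + ", ".join(f"{s:.0%}" for s in s2_students) + " of students")
# School 1: 100% of classes; 56% of students
# School 2: 67% of classes; 7%, 27%, 0% of students
```

The same two-level breakdown (classes reached, then students within each class) extends naturally to more levels of the hierarchy, such as schools within a district.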
Hot Tip: The effect of intervention also depends upon how deeply a program influences individuals. Many social programs try to create change through training. For them it may be useful to use a qualitative scale, like the one I used for a program training students on community service (CS): “Received training in CS”, “Received training and practiced CS”, etc. The scale, of course, should be program-specific.
A qualitative scale can be combined with the quantitative indicators described above. For example:
| System / Type of element | Total number of elements | Received training (Number / Share) | Received training and practiced CS (Number / Share) |
|---|---|---|---|
| School 1 / Class 1 / Students | 18 | 10 / 56% | 1 / 6% |
| School 2 / Class 1 / Students | 30 | 2 / 7% | 2 / 7% |
| School 2 / Class 2 / Students | 30 | 8 / 27% | 4 / 13% |
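A table like this is easy to generate once the raw counts are collected. The sketch below uses the example numbers from the Hot Tips above (School 1's single class of 18 students, School 2's classes of 30); the row layout and labels are hypothetical.

```python
# (school, class, total students, received training, trained and practiced)
rows = [
    ("School 1", "Class 1", 18, 10, 1),
    ("School 2", "Class 1", 30, 2, 2),
    ("School 2", "Class 2", 30, 8, 4),
]

print(f"{'System':<20}{'Total':>6}  {'Trained':>10}  {'Practiced':>10}")
for school, cls, total, trained, practiced in rows:
    label = f"{school} / {cls}"
    print(f"{label:<20}{total:>6}  "
          f"{trained:>3} / {trained/total:>3.0%}  "
          f"{practiced:>3} / {practiced/total:>3.0%}")
```

Each cell pairs a raw count with its share of the class, so depth of influence (the qualitative scale) and extent of coverage (the quantitative shares) are visible in one view.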
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.