Hi, we’re Eva Bazant, evaluation staff at Jhpiego, and Vandana Tripathi, consultant to global public health programs. Jhpiego is an affiliate of Johns Hopkins University in Baltimore, Maryland, working to improve maternal, newborn and child health globally.
In many sectors, such as education, observation is used for professional evaluation. We are sharing lessons learned from using structured observation to evaluate the quality of health care provided on the day of birth in low-resource settings; our experience comes from Madagascar’s hospitals, under USAID’s Maternal and Child Health Integrated Program.
Lessons Learned
- Build the trust of the individuals being observed and of the professionals in charge so the observation can take place. Rely on a respected senior colleague to negotiate entry for observers. Communicate clearly how the data will be used and kept secure, and to whom findings will be disseminated.
- Build in enough time to train observers and standardize their observation competencies; this can help identify potential challenges with the observation process and tools. Train observers to be a “fly on the wall” and to stay long enough for staff to feel at ease and act normally, thereby reducing the Hawthorne effect.
- Use the shortest checklist or tool needed to cover the important topics, to reduce error and fatigue. Validate the tool with topical experts prior to use, and pretest it in the field.
- Create clear response categories to minimize ambiguity and the need for interpretation by observers. Clarify for observers the distinctions among “not observed,” “not done,” and “not applicable” (see the sketch after this list for one way these categories can be handled in analysis).
- Communicate frequently with observers during the process and interview them at the end to document how the observation tools were used. Review cases for completeness and discuss missing data.
- Use technology when possible (e.g., smartphone data entry) to make data entry more efficient. Ensure observers are comfortable using and maintaining the technology.
- Triangulate data from multiple sources to corroborate and contextualize observation findings; for example, observation findings can be compared with interview or inventory data.
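To make the response-category distinction above concrete, here is a minimal Python sketch, using hypothetical item codes rather than our actual tool, of one way an analyst might handle the three categories: “not applicable” cases drop out of the denominator entirely, while “not observed” is tallied separately from “not done” so that missing observations are not mistaken for missed care.

```python
# Hypothetical category codes; real tools may label these differently.
DONE, NOT_DONE, NOT_OBSERVED, NOT_APPLICABLE = (
    "done", "not_done", "not_observed", "n/a",
)

# Hypothetical codes recorded for one checklist item across six deliveries.
records = [DONE, NOT_DONE, NOT_OBSERVED, DONE, NOT_APPLICABLE, DONE]

def performance_rate(codes):
    """Proportion of applicable, observed cases in which the step was done.

    "n/a" cases leave the denominator entirely; "not_observed" is counted
    separately so missing data is not reported as missed care.
    """
    applicable = [c for c in codes if c != NOT_APPLICABLE]
    observed = [c for c in applicable if c != NOT_OBSERVED]
    rate = sum(c == DONE for c in observed) / len(observed) if observed else None
    not_observed = len(applicable) - len(observed)
    return rate, not_observed

rate, not_observed = performance_rate(records)
# Prints: step performed in 75% of observed cases; 1 case(s) not observed
print(f"step performed in {rate:.0%} of observed cases; "
      f"{not_observed} case(s) not observed")
```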
Lesson Learned Highlight – Improve validity and inter-rater reliability: During observer training, carry out one or more exercises to promote data consistency. Have a trainer perform a complex service, omitting key steps or making deliberate errors, and have the observers record what they see. Compare the results to the “answer key” provided by the trainer, look for common errors, and remediate them with additional observer training. A sketch of one way to score such an exercise follows.
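As one illustration (the checklist items and codes below are hypothetical, not our program’s tool), this Python sketch scores a trainee observer against the trainer’s answer key using percent agreement and Cohen’s kappa, a standard chance-corrected agreement statistic often used for inter-rater reliability:

```python
from collections import Counter

DONE, NOT_DONE = "done", "not_done"

# Hypothetical answer key for a simulated delivery (item -> correct code);
# the trainer deliberately omitted the oxytocin step.
answer_key = {
    "washed_hands": DONE,
    "administered_oxytocin": NOT_DONE,
    "dried_newborn": DONE,
    "checked_bleeding": NOT_DONE,
}

def percent_agreement(observer, key):
    """Share of checklist items the observer coded the same as the key."""
    return sum(observer[item] == code for item, code in key.items()) / len(key)

def cohens_kappa(observer, key):
    """Chance-corrected agreement (Cohen's kappa) against the answer key."""
    n = len(key)
    p_o = percent_agreement(observer, key)
    obs_marg = Counter(observer[item] for item in key)
    key_marg = Counter(key.values())
    # Expected chance agreement from the two sets of marginal counts.
    p_e = sum(obs_marg[c] * key_marg[c] for c in obs_marg | key_marg) / n**2
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# One trainee's codes for the same simulated case: missed the omitted step.
observer_a = {
    "washed_hands": DONE,
    "administered_oxytocin": DONE,
    "dried_newborn": DONE,
    "checked_bleeding": NOT_DONE,
}

print(f"agreement: {percent_agreement(observer_a, answer_key):.0%}")  # 75%
print(f"kappa:     {cohens_kappa(observer_a, answer_key):.2f}")       # 0.50
```

Items that many observers miscode in the same direction point to ambiguous checklist wording or gaps in training, which is where remediation should focus.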
Many sectors and disciplines use observation in evaluation. We are interested in hearing about your experiences, including the challenges you have faced and the solutions you have found.
Rad Resource – Handouts from Evaluation 2012: Our Evaluation 2012 roundtable handout expands on this topic.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.