MVE TIG Week: Military Evaluation Methods: Maintaining Rigor when Context Limits Design Choices by Stephen Axelrad and Lara Hilton

Hello! It’s Stephen Axelrad and Lara Hilton, the Co-Chairs of the Military and Veteran Evaluation TIG. We will be sharing experiences evaluating military programs where context can sometimes constrain design. This information should interest a broad group of evaluators who assess programs in settings that may dictate or limit the kinds of designs you can use. In military evaluation, we work within highly controlled and structured environments. This can work in our favor when a commander requests metrics on a program and gives us carte blanche to use the most appropriate, rigorous, and feasible design and methods. The commander has the authority to green-light our work (along with the requisite military institutional review board approvals).

Hot Tips: Build Military Missions and Tempo of Military Operations into Evaluation Designs

While we enjoy this supportive environment some of the time, more often than not key issues crop up that limit our evaluation approach, such as operational, clinical, and policy challenges. For instance, Operational Orders may transfer key leadership, leaving interest in the evaluation in flux. As in most of our work, key stakeholders who are invested in the findings make evaluations successful, so ask the outgoing leader to give the new leadership a warm handoff of the evaluation. Because of the operational tempo, program participants may also be deployed in the middle of your evaluation, so we are always prepared to use statistical methods such as data imputation or intent-to-treat analysis.
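As a minimal sketch of what that preparation can look like, the pure-Python example below applies simple mean imputation to follow-up scores lost when participants deploy mid-evaluation, so every enrolled participant stays in the analysis in the intent-to-treat spirit. All scores are fabricated for illustration; a real analysis would use a principled method such as multiple imputation in a statistical package.

```python
from statistics import mean

# Hypothetical follow-up scores; None marks a participant who deployed
# before follow-up data could be collected (fabricated data).
post_scores = [12, 9, None, 14, None, 11, 8]

# Simple mean imputation: fill each missing follow-up with the mean of
# the observed scores, keeping all enrolled participants in the analysis.
observed = [s for s in post_scores if s is not None]
imputed = [s if s is not None else mean(observed) for s in post_scores]

print(f"n = {len(imputed)}, follow-up mean = {mean(imputed):.2f}")
```

Mean imputation understates variance, so in practice you would pair it with sensitivity analyses, but it illustrates how an analysis plan can absorb deployment-driven attrition.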

In the military behavioral health domain, the effectiveness of mental health programs, such as treatment of PTSD, is a major concern for the Department of Defense and the Veterans Health Administration. Evaluating treatment programs where timing is an important component of a participant’s success rules out some designs, such as randomized controlled trials or waitlist controls. When a servicemember is willing to enter treatment, they ethically cannot be randomized to a control group or placed on a waitlist. They can, however, serve as their own controls in a pre-/post-test design.
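The participants-as-their-own-controls idea can be sketched in a few lines: analyze each person’s paired pre/post difference rather than comparing against a separate control group. The symptom scores below are fabricated for illustration, and the paired t statistic is computed by hand from the standard formula rather than with a statistics package.

```python
from math import sqrt
from statistics import mean, stdev

# Fabricated pre- and post-treatment symptom scores for eight
# servicemembers (illustrative only; lower scores = fewer symptoms).
pre  = [52, 61, 48, 55, 60, 47, 58, 50]
post = [40, 50, 45, 41, 49, 42, 44, 39]

# Each participant serves as their own control: work with the paired
# differences, not a between-group comparison.
diffs = [b - a for a, b in zip(pre, post)]  # negative = symptom reduction
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic

print(f"mean change: {mean(diffs):.2f}, t = {t_stat:.2f}")
```

The design cannot rule out time-related confounds (e.g., natural recovery), which is part of the rigor trade-off the post describes.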

In summary, while context matters in all evaluation, those of us who evaluate military programs remain cognizant that the client’s mission trumps the evaluation.

Lessons Learned: Leverage Existing Data

  • When working with military or federal clients, leveraging their existing data opens up options for your evaluation approach. You might use population data to understand and compare baseline and follow-up data, create control groups through propensity score matching, or analyze trends in existing data to contextualize your evaluation data.
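As a hedged illustration of the propensity score matching idea, the sketch below fits a one-covariate logistic propensity model by gradient ascent and then performs greedy 1:1 nearest-neighbor matching on the estimated scores. All records are fabricated, the single covariate is a stand-in for the richer baseline data a real model would use, and a real evaluation would rely on a dedicated statistical package.

```python
from math import exp

# Fabricated records: (baseline covariate x, treated flag). In practice x
# would be several covariates drawn from existing administrative data.
data = [(1.0, 1), (2.0, 1), (3.0, 1), (1.5, 0), (2.5, 0),
        (0.5, 0), (3.5, 0), (2.2, 0)]

# Fit P(treated | x) with a one-covariate logistic model via gradient
# ascent on the log-likelihood (a minimal stand-in for a full model).
b0, b1 = 0.0, 0.0
for _ in range(5000):
    g0 = g1 = 0.0
    for x, t in data:
        p = 1.0 / (1.0 + exp(-(b0 + b1 * x)))
        g0 += t - p          # gradient w.r.t. intercept
        g1 += (t - p) * x    # gradient w.r.t. slope
    b0 += 0.05 * g0
    b1 += 0.05 * g1

def score(x):
    """Estimated propensity score for covariate value x."""
    return 1.0 / (1.0 + exp(-(b0 + b1 * x)))

# Greedy 1:1 nearest-neighbor matching on the propensity score,
# removing each control from the pool once it is matched.
treated = [(x, score(x)) for x, t in data if t == 1]
controls = [(x, score(x)) for x, t in data if t == 0]
matches = []
for x_t, s_t in treated:
    x_c, s_c = min(controls, key=lambda c: abs(c[1] - s_t))
    matches.append((x_t, x_c))
    controls.remove((x_c, s_c))

print(matches)
```

The matched pairs give a comparison group whose covariate profile resembles the treated group, which is what lets existing population data stand in for a randomized control arm.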

These recommendations have been successful in our work, and we hope they help you in yours. If you’re interested in learning more about issues in evaluation of military and veteran programs, please join our TIG via the AEA web site or contact us at lhilton@deloitte.com.

The American Evaluation Association is celebrating MVE TIG Week with our colleagues in the Military and Veteran’s Issues Topical Interest Group. The contributions all this week to aea365 come from our MVE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
