I am Len Bickman, AEA Past President, Professor Emeritus at Vanderbilt University, and currently Research Professor at the Center for Children and Families, Florida International University, and President of the Feedback Research Institute. Today, I am offering lessons from a quarter-century-old evaluation widely known as the Fort Bragg Evaluation.
I examined the effectiveness of mental health services delivered in a coordinated system of care for youth. Why Fort Bragg? Dr. Lenore Behar, who spearheaded the effort to both implement and evaluate the system of care, was then head of children’s services for the State of North Carolina, where Fort Bragg is located. Funding was not easily obtained, but a Congressman who chaired an important committee overseeing some military operations was convinced by Dr. Behar to order the Army to fund the demonstration and its evaluation, in part because youth mental health services were the fastest-growing expense in the Army’s insurance program for civilian military dependents at the time. Without that political influence, the study would never have happened.
The Fort Bragg program was compared to a civilian program in Stark County, Ohio. The evaluations took place in both military and civilian settings, using a quasi-experimental design at Fort Bragg and a randomized controlled trial in Stark County. The programs differed in maturity, types of clinicians, and client characteristics, but they tested the same program theory. While the Army did not allow a randomized design within Fort Bragg, we had well-matched comparison sites at two other Army bases that drew from the same population of families, whose job descriptions were the same and who wore similar clothes to work.
What is the relevance of these evaluations to today’s evaluators?
The evaluations were highly visible in spite of null results: close to 80 articles, books, and book chapters were produced, along with special issues of the American Psychologist. Together, these evaluations won the AEA Outstanding Evaluation Award and received widespread recognition as exemplary.
Lessons Learned:
- Be a science advocate. I advocated for the scientific merit of the evaluation, not for specific outcomes of the system of care.
- Establish continuity. I was able to link two major evaluations testing the same program theory in very different contexts, which added to the credibility of the results.
- Evaluate the program theory, not just the program. I was able to differentiate theory failure (the system of care), program failure (implementation), and evaluation failure (poor design, faulty analysis).
- Focus on client outcomes. Previous evaluations had studied system-level variables and costs but not child and family outcomes.
Rad Resources:
- Fort Bragg Behavioral Health in 2020
- The Fort Bragg Evaluation (long-form commentary)
- The Fort Bragg Evaluation: A Snapshot in Time
- More of What? Issues Raised by the Fort Bragg Study
The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week come from members of the Military and Veteran Evaluation TIG, featuring evaluation work with military origins that is relevant to all we do. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.