We are P. Antonio Olmos-Gallo, Kathryn K. DeRoche, and C.J. McKinney. We work at the Mental Health Center of Denver, a non-profit community mental health center that has become the de facto mental health authority for the City and County of Denver. On any given day we provide services to about 4,000 adults and 1,000 youth. In addition to collecting and analyzing outcomes for the center, we provide evaluation services for the many federal and local grants our center receives every year, which fund not only treatment but also prevention services for individuals across Colorado. We make a concerted effort to involve multiple stakeholders in every evaluation we conduct, including youth, parents, adult consumers, clinicians, and managers.
Over the last 5-7 years, a large part of our work has concentrated on developing instruments to measure recovery from mental illness. Although we hold degrees in psychology (MA or Ph.D.), our training is not in clinical psychology, so we rely heavily on the expertise of many colleagues for the clinical interpretation of our data. We also teach graduate and undergraduate statistics and experimental methods at colleges and universities across Colorado. We believe this combination gives us an edge when it comes to doing evaluation.
Hot tip: Do not short-change your evaluation efforts by using techniques or tools that cannot fully answer your questions. In our private practice, we sometimes have to step in to evaluate programs that never managed to answer the key questions because the techniques used were not appropriate. Evaluators are sometimes afraid to use anything more sophisticated than a t-test or a chi-square because “stakeholders do not understand statistics.” That brings us to our next hot tip:
Hot tip: Despite what you and your stakeholders may think, they can understand very sophisticated evaluation concepts if you give them enough background and there is a willingness to learn (and to teach). Over the last five years, our stakeholders have learned about logic models, instrument reliability and validity, item response theory (Rasch models), hierarchical linear models, cost-benefit analysis, and, more recently, quality control charts. They may not believe it, and may not admit it in public, but they can recognize when an instrument is not working (and why that matters), appreciate the power of predictive models, and see the value of using the right tools to improve day-to-day operations. More importantly, they also understand the limitations of some of these tools.
At the 2009 AEA conference, we shared several examples of how we have explained sophisticated evaluation and statistical concepts to our stakeholders in very intuitive ways. Please visit our website (http://www.outcomesmhcd.com/pubs.htm) to see that presentation and many other examples of our work in mental health evaluation.
This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.
Hi Patricia and Bob: Thanks for letting me know the link wasn’t working! I’ve corrected it, and it should work now.
Hi, can you send the correct website link? The one posted does not work.
Thanks!
Thanks for sharing these experiences. The web link needs to be edited to http://www.outcomesmhcd.com/pubs.htm.
Thanks for sharing these examples of engaging with evaluation users so they can understand different analytic techniques. The link to your webpage should be http://www.outcomesmhcd.com/Pubs.htm.
Thanks, all, for the tips. In most management consulting and OD contexts, firms often shy away from anything more sophisticated than graphs and means (or, for the adventurous, t-tests and correlations). Although you’ll typically find a small handful of individuals who can follow along, most are usually puzzled. To make matters worse, I have even heard of clients getting quite offended when I/O psychologists come in and talk “above” everyone else (i.e., with statistics-heavy discussions).
So I do agree that one should not use methods that will not answer the questions at hand. But the success of doing so, as you noted, depends on the clients’ willingness to learn. Unfortunately, certain populations (such as a group of business executives) are typically unwilling to have anything additional put on their plates. But if you have an engaged population, I say go for it!