AEA365 | A Tip-a-Day by and for Evaluators

CAT | Behavioral Health

I am Dr. Trena Anastasia, an independent evaluator and principal at QDG Consulting. In our office, spring means grant writing, and grant writing means pulling together logic models quickly. In my practice, communication is essential, and it is critical to have a quick way to synthesize a client’s ideas, thoughts, and goals into a logic model so I can determine how to help them move forward.

My area of expertise is suicide prevention, and I have been a member of the ADAMH TIG for almost a decade, serving as chair or co-chair for three of those years. During that time I have attended multiple workshops on developing logic models: ideas for thinking through them, presenting them, and making them usable. Many of those methods have informed my process, but nothing has changed the way I build logic models more than assembling them in PowerPoint.

Rad Resource: PowerPoint is much more user-friendly and intuitive than most programs designed strictly for modeling. It may not be the best choice when you have multi-page models to consider, but it works well for synthesizing a project down to the single page required for grant proposals or for giving stakeholders a picture of the process. It also allows for:

  • Reviewing on a variety of platforms, with easy sharing and editing by multiple stakeholders.
  • Drawing arrows, highlighting the section you are presenting, and even matching the color scheme of your client’s logo.
  • Setting up and printing on multiple paper sizes to fit a grant’s required margins, plus quick saving as a PDF or importing into a Word file as an image.

To top it off, it is essentially free: if you have the Office suite, you have PowerPoint.

I hope this tip is helpful as you work on your next logic model or proposal.

The American Evaluation Association is celebrating Alcohol Drug Abuse and Mental Health (ADAMH) TIG Week. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Dr. Kathleen Ferreira, Director of Research and Evaluation at the Center for Social Innovation in Needham, Massachusetts, and one of the chairs of the ADAMH TIG. I’ve spent a number of years providing technical assistance (TA) in theory of change logic model development in systems of care serving youth with behavioral health challenges. A number of TA recipients have one goal in mind: complete and submit a logic model to the funder and check that required task off their list. However, carefully crafted logic models can serve as an important roadmap to meaningful change and goal attainment for organizations. I’d like to share some insights I’ve gleaned from my work.

Lesson Learned: Organizations must “own” the process. It is not uncommon for organizational leaders to assume that I will develop their logic model and submit to them a completed product. However, it is more important that I serve as a facilitator of its development and that organizations work through the process of developing a shared understanding of who they are (vision, mission, values), their intended service recipients (population), where they need to go (goals and outcomes), and how they will get there (strategies). Interestingly, many TA recipients assume that they are in agreement until they begin to articulate these components.

Lesson Learned: Inclusion is critical to success. Create a logic model team that includes participants at all levels of the organization (or system), including service recipients. A group that is too large can make it difficult to move forward. I recommend a group of 5-10 people on the core logic model team. I also chunk out the process instead of trying to complete it in one session. A one-hour meeting every 1-2 weeks gives team members time to work through components and gather feedback from more stakeholders. Also, do not exclude a “Negative Nellie” from the team. Although the work may be challenging at times, their direct participation creates buy-in, neutralizing their ability to create discord when the logic model is implemented.

Lesson Learned: “The best way out is always through.” This Robert Frost quote is a good mantra for the team. Developing a meaningful logic model takes work, and the process can be frustrating, especially when balancing many different personalities and priorities. Offer encouragement. Acknowledge that this is a difficult process, but remind the team that it will pay dividends.

Rad Resources: For helpful logic model resources and examples, see the work of my former colleagues at the University of South Florida and the Kellogg Foundation’s logic model guide.



Greetings everyone! – My name is Roger Boothroyd and I’m from the University of South Florida. I have been conducting evaluations on the delivery of mental health services and programs for nearly 30 years, primarily working with government entities. It is my pleasure to introduce the Alcohol Drug Abuse and Mental Health (ADAMH) TIG’s week of contributions to aea365.

I know it is no news to any of you that evaluators often face a range of challenges in evaluating behavioral health programs: limited funds to support the evaluations, unrealistic timelines, competing political agendas among stakeholders, difficulty understanding the program’s theory of change, and data issues, to name just a few. This week several ADAMH TIG members are pleased to share with you some of the challenges they have encountered during their evaluation efforts and, more importantly, some of the resources, suggestions, and lessons learned they have to offer to help minimize those challenges.

Lesson Learned: I’d like to begin by sharing one of my thoughts on politics and evaluation, and on how the manner in which we present our findings can affect our clients. Evaluators always have the option of portraying findings as signifying the glass is half full or half empty. Take the following two statements, for example:

  • 80% of the respondents reported that the services they received were effective in meeting their needs.
  • 1 of every 5 respondents reported that the services they received were not effective in meeting their needs.

Despite the fact that both statements are functionally equivalent, I think everyone can guess which one would make a better newspaper headline or require some detailed explanation at a legislative hearing.
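The equivalence of the two framings is simple arithmetic. The sketch below uses invented respondent counts purely for illustration; it shows both statements computed from the same underlying data:

```python
# Hypothetical survey data: 400 of 500 respondents found services effective.
respondents = 500
effective = 400

# Glass half full: percentage reporting effective services.
print(f"{effective / respondents:.0%} reported services were effective")

# Glass half empty: the same data restated as "1 in N".
not_effective = respondents - effective
print(f"1 of every {respondents // not_effective} reported services were not effective")
```

Both lines describe the identical result; only the framing changes.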

Hot Tip: Understand that how you phrase your findings can have a negative impact on your clients.

In working with government entities, I tell my clients that during our work group meetings and conference calls we will focus on the glass being half empty, because the individuals who are not benefiting or succeeding are where we need to focus our greatest attention. However, recognizing that our evaluation findings are a potential newspaper headline, I also tell my clients that when the final report is written and the public briefing is held, the results will be presented as if the glass is half full. When they hear or read this, they should recognize the more important message: the glass is also half empty, and more work needs to be done.



My name is Lija Greenseid. I am a Senior Evaluator with Professional Data Analysts, Inc. in Minneapolis, MN. We conduct evaluations of stop-smoking programs. Smokers generally have lower education levels than the general population, so we want to make sure the materials we develop are understandable to smokers.

Rad Resource: Use a “readability calculator” to check the reading-level of your written materials. I have used this with program registration forms, survey instruments, consent statements, and other materials. Not surprisingly, the first drafts of my materials are often written at a level only grad students (and evaluators) can understand. With a critical eye and a few tweaks I can often rewrite my materials so that they are at an eighth-grade reading level, much more accessible to the people with whom I want to communicate.

A good readability calculator can be found here: http://www.editcentral.com/gwt1/EditCentral.html

It provides both a reading-ease score and several measures of the US school grade level of the text.
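If you prefer to script this check, a grade-level measure of the kind such calculators report can be approximated in a few lines. This is a rough sketch of the Flesch-Kincaid grade formula, not the tool's exact algorithm; the syllable counter is a simple vowel-group heuristic:

```python
import re

def count_syllables(word):
    # Heuristic: count runs of consecutive vowels; every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    # Grade = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

Short sentences with short words score at an early grade level; long, polysyllabic academic prose scores much higher.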

This blog posting is rated at a high-school reading level. Do you agree?

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years.


Greetings! I’m Robin Kipke and I work for the Center for Evaluation and Research at the University of California, Davis, which provides evaluation training and technical assistance to the 100+ projects which advocate for tobacco control policies in our state. To help these organizations discern how to effectively harness the power of social media, I’m partnering with the California Youth Advocacy Network to develop a handbook that explains what social media can do and ways to evaluate its use.

There is still a lot of discussion in the field about how to meaningfully evaluate these new media, but here are some ideas I’ve found to be informative.

Hot Tip: Many articles on evaluating social media focus on the various metrics or the latest analytical applications available. However, evaluators know that the best measures in the world aren’t much help without a clearly defined social media plan. In the Internet Advertising Bureau’s Social Media Measurement and Intent Guide, Emily Dent says: “Launching social media activity, but not having any idea what you want to achieve is a little bit like having a map, but not knowing your destination.” Metrics must spring from measurable goals that lay out what you hope to achieve through social media.

Rad Resource: The Nonprofit Social Media Decision Guide has worksheets that can help define your purpose for using social media, identify the target audiences you want to reach, develop SMART objectives, and decide on the best new media channels to use.

Hot Tip: A great piece of advice from The Brandbuilder Blog is that to determine whether your forays into a particular social media (SM) channel are worth the cost and effort, evaluation should be activity-specific rather than medium-specific. For example, what was the return on investment (ROI) of shifting 20% of staff time from traditional educational outreach (developing fact sheets and meeting apartment managers) to generating buzz through Facebook and Twitter? Now you have a benchmark for comparing your SM results against conventional ways of building public sentiment.
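The benchmark logic above can be made concrete with a small calculation. All figures below are invented for illustration; real campaigns would need their own cost and value estimates:

```python
def roi(value_generated, cost):
    """Return on investment as a ratio: (value - cost) / cost."""
    return (value_generated - cost) / cost

# Hypothetical: the same $5,000 of staff time spent two ways; value estimates invented.
traditional_roi = roi(value_generated=6000, cost=5000)  # fact sheets, in-person outreach
social_roi = roi(value_generated=7500, cost=5000)       # Facebook/Twitter engagement

print(f"traditional: {traditional_roi:.0%}, social media: {social_roi:.0%}")
```

Comparing the two ratios, rather than raw follower counts, is what makes the evaluation activity-specific.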

Rad Resources: I also like the following sites for collections of materials on social media: Mashable, Nonprofit Technology Network (NTEN), Occam’s Razor, Community Anti-Drug Coalitions of America (CADCA), National Center for Media Engagement.

Rad Resource:  This year at the 2011 evaluation conference, a number of sessions focused on social media. I gave a demonstration on Social Media’s Evaluation Power. Kurt Wilson and Stephanie Evergreen spoke about Evaluating Website Usage and Social Media engagement. To find these and other resources on this topic, you can also search the AEA Public eLibrary.

Rad Resources: To end with, I’d like to recommend a thought-provoking article by Matt Owen that makes the case that evaluating social media has to be about more than metrics. After all, the very nature of social media is about relating to others.



Hello, I’m Linda Cabral, a Senior Project Director from the Center for Health Policy and Research at the University of Massachusetts Medical School.

In an effort to more fully engage people from different cultural backgrounds and their communities in one of our recently completed qualitative evaluation projects for our State’s Department of Mental Health, we employed cultural brokers as members of our evaluation team. Cultural brokering has been defined as the act of bridging, linking, or mediating between groups or persons of differing cultural backgrounds for the purpose of reducing conflict or producing change (National Center for Cultural Competence, 2004). In our case, we were seeking information from people with mental health conditions in two specific population groups: Latinos and persons who are deaf or hard of hearing. We brought onto our evaluation team cultural brokers who had mental health conditions themselves and who were also members of the cultural groups we were interested in. They helped us develop our data collection instruments, led recruitment efforts, and participated in the data collection and data analysis phases.

Lesson Learned: The cultural brokers were able to establish a rapport and level of trust with study participants that would have been impossible to otherwise achieve. This rapport was important not only during the recruitment phase, but also during the data collection itself, thereby improving the quality of the data collected.

Lesson Learned: A barrier often cited with collecting data from non-English speakers is the need for interpreters. By using cultural brokers, participants were able to communicate as they felt most comfortable. Consider the use of cultural brokers when exploring sensitive topics with people from different cultural groups.

Lesson Learned: Because the cultural brokers had little to no experience with evaluation work, we needed to build in time to educate them on evaluation basics. This helped our cultural brokers feel like fully participating team members.

Rad Resource: The National Center for Cultural Competence (http://nccc.georgetown.edu/) has a host of resources to help programs design, implement, and evaluate culturally- and linguistically-competent service delivery systems.

Rad Resource: For those of you interested in using cultural brokers in the mental health field, the following article might be useful.

Singh NN, McKay JD, and Singh AN. (1999) The need for cultural brokers in mental health services. Journal of Child and Family Studies, 8(1):1-10.



Greetings. We are Bill Shennum and Kate LaVelle, staff members in the Research Department at Five Acres, a nonprofit child and family services agency located in Altadena, CA and serving the greater Los Angeles area. We work as internal evaluators to support outcome measurement and continuous quality improvement within the organization.

In our roles as internal evaluators, we work with agency staff to develop data collection for key processes and outcomes and assist staff in developing program improvement goals and activities. The quantitative and qualitative data included in our internal evaluation reports also support other administrative functions, including grant writing, accreditation, and program development.

Lessons Learned: In the course of this work we find it useful to incorporate data from our two primary funders, the Los Angeles County Departments of Children and Family Services (DCFS) and Mental Health (DMH). We use these data for a variety of purposes, such as to compare our agency’s outcomes to other service providers in LA County, establish benchmarks for child and program outcomes, and provide information on trends in the child welfare field to inform program development. Both DCFS and DMH make extensive statistical information available to the public on their websites.

Rad Resources:

1. Los Angeles County DCFS (http://dcfs.co.la.ca.us/) provides clickable fact sheets on its “About Us” tab, covering everything from demographics and maltreatment statistics to placement trends and foster care resources. The site has many other reports, including Wraparound performance summaries and individual group home compliance reports.

2. Los Angeles County DMH (http://psbqi.dmh.lacounty.gov/) also makes statistical information of interest to evaluators available through its Program Support Bureau. The “Data Reports and Maps” link accesses countywide and area-specific demographic and performance data for child and adult mental health, including geographic information system mapping of mental health resources.

Southern California evaluators who work in child welfare and/or mental health will find much information of interest on the above sites. More outcomes and reports are added every year, so check back often.


Hot Tip: For those of you visiting Anaheim for the 2011 American Evaluation Association conference and interested in going to the beach, check out the surf at the Huntington Beach pier in nearby Huntington Beach, about 10 miles from the headquarters hotel for the conference. It is the centerpiece of Southern California’s original “Surf City” and a perfect place to take a break from the conference and check out the local beach scene.

The American Evaluation Association is celebrating this week with our colleagues at the Southern California Evaluation Association (SCEA), an AEA affiliate. The contributions all this week to aea365 come from SCEA members.


My name is Laura Cody, and I am a Community Health Specialist at the Regional Center for Healthy Communities in Cambridge, MA.  We work with many substance abuse prevention coalitions helping them build on their strengths and reduce underage drinking in their communities.  I will describe a process we used with a group of youth to develop an evaluation plan for their activities.

We started by asking the youth what came to mind when they heard the word “evaluation.” We then talked about the different types of evaluation (from needs assessment to outcome evaluation) and how evaluation can be helpful in their work. We discussed the need to plan for evaluation before the project starts in order to target the needed information.

Hot Tip: To guide this thinking, we developed three easy questions to ask about each project:

  • What would make this project successful?
  • How could we measure this success?
  • When will we collect this information?

Rad Resource: We created a simple chart to record this information for each project.

Finally, we talked about a way to use all the information collected. The group decided on a simple plus/delta chart, where the plus side listed things that went well with the project (including the evaluation process itself) and the delta side listed what we could change to do even better next time.

Hot Tips: A couple of lessons we learned as a result of this planning process:

  • There was a perception among the youth that evaluation is something that is done to you and tells you what’s wrong (like a psychological evaluation). It was important to recognize this and shift this thinking so they realized evaluation can be something you do for yourself as a source of empowerment.
  • Often too many successes (outcomes) were identified, and we needed a way to prioritize the list so that the plan was feasible to implement.
  • There was some discomfort with actually implementing the plan. For example, the youth needed more coaching on conducting interviews and observations.
  • It would also have been helpful to designate an evaluation “asker” in the group. The asker does not necessarily have to do all of the evaluation but asks at the beginning of every project: How are we going to evaluate this? He or she also reminds the group to review the results at the end.
  • While this process was designed for youth, we found it helpful working with groups of adults, too.

Rad Resource: You can see more details and examples of our process, called “Evaluation Planning Process,” on our website.



My name is Stacy Carruth and I am a Community Health Specialist at a Regional Center for Healthy Communities in Massachusetts.  Much of the work we do is supporting community coalitions working on substance abuse prevention.  Two communities we work with are funded to reduce fatal and non-fatal opiate overdose, an issue of concern in the Northeast.  In one of these communities, we used Wordle to look at how people in the community were talking about overdose. I will be describing how we did this.

In their work to reduce opiate overdoses, the coalition conducted a comprehensive community assessment, which included many stakeholder interviews.  The stakeholders were asked:  What does overdose look like in your community? What’s being done and what’s working? What would help or harm the work (overdose prevention)? The coalition was interested in different ways to present this information to a larger, more diverse audience in a visually engaging way.

Rad Resource: I had recently learned about Wordle at the AEA/CDC Summer Evaluation Institute, and so I created a Wordle document with stakeholder interview transcripts as an example for the coalition staff.  Wordle allows you to create word clouds out of text. The more frequently a word is used in the text, the more prominent it is in the word cloud.  This creates a visual representation of the information that is easy to share with others.

Some of the words that were prominent in the Wordle document that we created were:  Need, Women, Public, Person, Detox, Overdose, Drugs.  It was a powerful way to visualize the thoughts of community members.

While using Wordle, I wondered whether it could make data more easily accessible for those with low literacy levels.  Wordle has the potential to engage members of the community that might otherwise not be engaged.

Hot Tip: Wordle is very user-friendly. You simply copy and paste your text into a box, and you can change the orientation of the text and the color scheme quickly and easily. If a term appears in the Wordle document that you want to delete (for example, in our document the word “mentioned” was prominent but did not add to understanding the issue), you just right-click the word and delete it. Wordle is an accessible way to share data and engage the community in your work.
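The frequency counting that drives a word cloud is easy to reproduce yourself. Here is a minimal sketch, using an invented snippet of interview text and a hand-picked stopword list (including a filler word like “mentioned”), not Wordle's actual implementation:

```python
from collections import Counter
import re

# Words to drop: common function words plus fillers such as "mentioned".
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are", "were", "mentioned"}

def word_frequencies(text, drop=STOPWORDS):
    """Count word occurrences; a word cloud draws the most frequent words largest."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in drop)

# Hypothetical interview snippet, not actual coalition data.
freqs = word_frequencies("Detox beds needed; detox and overdose were mentioned often.")
print(freqs.most_common(3))
```

In a real word cloud, each word's font size would scale with its count in this table.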



We are Michael Schooley, chief of the Applied Research and Evaluation branch in the Division for Heart Disease and Stroke Prevention at CDC, and Monica Oliver, an evaluator in the branch.

As public health evaluators, we often encounter the question of when a particular endeavor is ‘evaluation,’ when it is ‘research,’ and when it might be considered ‘surveillance.’ Evaluation, surveillance, and research are at once independent and complementary. A closer examination of the nuances of each provides food for thought for strategizing about how and when to employ them.

Hot Tip: A three-legged stool is a helpful metaphor for thinking about how evaluation, traditional research, and surveillance interrelate. Though different purposes drive each, the approaches converge to support our evidence or knowledge base.

We think of traditional research as a mechanism for exploring a concept, testing for causal links, and sometimes for predicting what will happen. Linear in approach, it typically involves stating a hypothesis, testing that hypothesis, analyzing any data around that hypothesis, and drawing a conclusion from that analysis.

Evaluation can be about program improvement, determining the impact or outcome(s) of a policy or program, or accountability and oversight. The process of evaluating also can be a journey of change and understanding in and of itself for participants. Circular in nature, evaluation continually loops back into a program, offering information that we might use to assess the merit of a new program, improve an existing program, or affirm a program’s effectiveness or adherence to a plan.

Surveillance identifies health problems, monitors conditions, and tracks outbreaks, equipping us to make decisions about when and how to intervene.

Like the legs on the stool, research, evaluation, and surveillance can stand in tandem, drawing from similar methodological approaches and distinctive principles to support and contribute to our knowledge base.

Rad Resource: A ten-minute audio presentation entitled Program Evaluation, or Evaluation Research? is available at http://www.cdc.gov/dhdsp/state_program/coffee_breaks/. Developed in the Division for Heart Disease and Stroke Prevention here at CDC, the presentation is modeled in the vein of AEA’s “coffee breaks.”

Want to learn more from Michael and Monica? They’ll be presenting several sessions this November at Evaluation 2010!
