AEA365 | A Tip-a-Day by and for Evaluators


Hello! I’m Julie Goldman, a librarian at the University of Massachusetts Medical School’s Lamar Soutter Library. I want to introduce you to a number of high-quality public health resources, most of them freely available, all of which can help you as evaluators understand and support your communities, your research, and your work. Public health information comes in many forms, from education materials, research articles, white papers, and policies to raw data on lifestyle characteristics, disease prevalence, and healthcare utilization – basically, all scientific output that applies to day-to-day life.

Here are a few takeaways when looking for public health information:

Lessons Learned:

  • Public health is multidisciplinary; it is not just doctors and nurses! Other professionals include:
    • Health educators
    • Community planners and policy makers (e.g., local, regional or state health boards)
    • Scientists and researchers
    • Biostatisticians
    • Occupational health and safety professionals
  • Health promotion should happen everywhere – and include everyone!
    • Information should be available to all communities, domestic and international
    • Addressing health disparities should be a key focus. Examples include: healthcare access, infectious diseases, environmental hazards, violence, substance abuse, and injury.

Hot Tips: Evidence-based practices inform decisions:

  • Using the best available scientific evidence leads to informed decisions
  • Pro-active prevention can lead to measurable impact
  • Spending on prevention saves money long-term

Visit the American Public Health Association’s website for public health news, webinars and useful infographics like the one below that visually tell the public health story. Many of these (or similar ones from a variety of sources) can be used to help relay the messages of evaluation data.

Goldman 1

Public Health Infographic, © 2016, American Public Health Association

Rad Resources: The list below provides a brief overview of many of the public health collections that offer freely available resources on populations, agencies, public health news and policy briefs, and much more.

  • Lamar Soutter Library Evidence-Based Public Health Portal.
  • A collaboration of U.S. government agencies, public health organizations, and health sciences libraries highlighting news, public health topics, policies, jobs, and education.
  • The National Library of Medicine, National Institutes of Health, offers many public health resources such as Haz-Mat, Toxnet, and IRIS.
  • A free, digital archive of scientific research and literature.
  • Explore and evaluate projects aimed at reducing racial and ethnic health disparities, from the Robert Wood Johnson Foundation.
  • The world’s most comprehensive collection of population, family planning, and related reproductive health and development literature.
  • An image-based review of world demographics and statistics.

 

Libraries at both public and private institutions, as well as public libraries, can assist with evaluation, research, and access to all levels of public health information. Many librarians are highly motivated to work with a research and/or evaluation team to help navigate public health data and resources. Re-read these two blog postings from previous years to learn more about collaborating with a librarian to explore the vast array of information that can help with the development, conduct, and analysis of evaluation projects: “Library Resources and the Important Role They Play in Evaluation Work” and “Today’s Librarian and Building an Evaluation Team.”

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Leah Christina Neubauer and Suzanne Carlberg-Racich from Chicago. Neubauer is based in DePaul’s MPH Program and is President of the Chicagoland Evaluation Association (CEA).  Carlberg-Racich is a Visiting Assistant Professor in DePaul’s MPH Program.

We are both interested in evaluation-related coursework, curriculum, and culminating experiences in Master of Public Health (MPH) programs.  In the DePaul MPH Program, students are required to: 1) enroll in a 10-week evaluation course, 2) conduct evaluation throughout their 9-month applied experience, and 3) include evaluation in their culminating, capstone thesis project.

But evaluation in the MPH program has been a journey that started WITHOUT an evaluation course.  Over time, evaluation has evolved quite formally into the curriculum. Thus, we were eager to tell our evaluation-evolution story at AEA 2013.  We led a Think-Tank Session: How Much Evaluation Is Enough? Evaluation Theory and Practice in a Master in Public Health (MPH) Program.  The session was attended by a small group of folks affiliated with the public health, public policy, and social work disciplines.

Our post highlights some lessons learned, hot tips and rad resources from our session and ongoing work together in this area.  We look forward to contributing more information in the coming year.

Lesson Learned #1: Evaluation is quite relevant for public health. Evaluation is essential for public health practice, so evaluation skills are expected and in high demand. This topic is applicable to a growing number of undergraduate and graduate public health programs, which are charged with developing and implementing evaluation-related coursework to meet Council on Education for Public Health (CEPH) accreditation requirements.

Lesson Learned #2: Public health and evaluation theory need each other. Public health courses and curricula need evaluation theory and principles. There is room to grow in this area – but how do we balance public health behavior change and evaluation theory in a limited amount of academic preparatory time, or, in the case of the DePaul MPH program, in a 10-week evaluation course?

Lesson Learned #3: A “Live Evaluation Project” Enhances Classroom Learning. Students value the live evaluation project that is part of the 10-week course, which lets them ‘conduct’ an evaluation together as a class.  This learning also enhances their nine-month applied field experience.  By the time students graduate with an MPH degree, they will have completed at least two evaluation-related experiences.

Lesson Learned #4:  Public health and evaluation teaching literature can and should be expanded. Both public health and evaluation literature (particularly of the applied disciplines) can be enhanced with information on pedagogy, course design, culminating experiences and curriculum development. 

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week. The contributions all this week to aea365 come from CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Susan Wolfe and I am the owner of Susan Wolfe and Associates, LLC, a consulting firm that applies Community Psychology principles to strengthening organizations and communities.  Most of the evaluation work I do is with federally funded Healthy Start programs that are working to reduce disparities in infant mortality rates.

According to the Office of Minority Health, African Americans have 2.3 times the infant mortality rate and are 3 times as likely to die as infants due to low birth weight related complications when compared with non-Hispanic whites.

Now, be honest – what is your first reaction to this information?  What would be your first recommendations for intervention? Education? Interventions with the pregnant women?  Would you be surprised to learn that studies have ruled out genetics, behavior, and economics as explanatory factors for these disparities?  Would you be more surprised to learn that there is increasing evidence that the stress of racism may be a contributing factor? Whenever someone shows disparities between groups, the immediate interpretation may be that the group is doing something differently, which automatically leads to an individual-focused solution to what might be a systemic problem.

Hot Tip: When you present statistics showing disparities between two groups, think about how they may be interpreted and used. If additional explanatory information is available, present it with the disparities data. Don’t present the data alone; present accompanying data about possible explanations for the disparities at multiple levels, including both evidence about what does contribute to the disparities and evidence dispelling possible contributors.
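To make the arithmetic behind a statistic like “2.3 times the infant mortality rate” concrete, and to show what pairing the number with explanatory context might look like, here is a minimal Python sketch. The counts and group labels are hypothetical, not the Office of Minority Health figures; the context fields simply echo the explanations discussed above.

```python
# Minimal sketch with hypothetical counts (not the Office of Minority Health figures):
# compute a disparity rate ratio, then bundle it with explanatory context so the
# statistic is never presented on its own.

def rate_per_1000(events: int, population: int) -> float:
    """Rate per 1,000 (e.g., infant deaths per 1,000 live births)."""
    return events / population * 1000

rates = {
    "Group A": rate_per_1000(events=550, population=50_000),  # 11.0 per 1,000
    "Group B": rate_per_1000(events=240, population=50_000),  #  4.8 per 1,000
}
rate_ratio = rates["Group A"] / rates["Group B"]              # ~2.3

finding = {
    "indicator": "infant mortality (per 1,000 live births)",
    "rates": rates,
    "rate_ratio": round(rate_ratio, 1),
    # Context presented alongside the disparity, at multiple levels:
    "ruled_out": ["genetics", "behavior", "economics"],
    "possible_contributors": ["stress of racism", "other systemic factors"],
}
print(f"{finding['rate_ratio']}x disparity; context: {finding['possible_contributors']}")
```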

Hot Tip: Use ecological models to promote thinking about potential contributors to disparities from a systems viewpoint and guide your audience to think about the issue at multiple levels.

Rad Resource: The Social Determinants of Health Model, Life Course Theory, and other ecological models are useful tools to introduce when presenting health disparities data, illustrating the multiple levels at which factors may contribute to the problem and the multiple levels at which interventions are possible.  The Centers for Disease Control and Prevention (CDC) website has wonderful resources on these models.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org.


My name is Clare Nolan and I work for Harder+Company Community Research, a national consulting firm that specializes in evaluation.  For the past 27 years, we have helped foundations, government agencies, and nonprofits plan and evaluate programs and policies.

Accountable Care Organizations (ACOs) mark a new frontier in how healthcare is delivered in the United States.  ACOs create incentives for health care providers from different organizations to work together to treat individual patients across care settings.  Our firm recently conducted an evaluation of California’s first ACO which serves more than 40,000 members of the California Public Employees’ Retirement System (CalPERS), the nation’s second largest purchaser of healthcare services.

Lesson Learned:  The concept of the Triple Aim is a key framework for evaluating the success of health reform.  However, it takes time for ACOs to develop the legal and analytical infrastructure necessary to support analyses of Triple Aim data.  In the near term, evaluation can play a potentially more valuable role by providing formative feedback on the effectiveness of inter-organizational collaboration among ACO partners.  Our evaluation of the CalPERS ACO identified the following core competencies for ACO success:

  1. Leadership and commitment.  Having executive leaders across partner organizations who are invested in the success of the ACO and demonstrate consistent levels of commitment.
  2. Accountability and governance.  Establishing inter-organizational governance systems that enable collaborative decision-making, promote accountability, and support communication.
  3. People and teams.  Staffing the ACO with individuals who are action-oriented, knowledgeable, and strategic, managed by empowered leaders with a strong clinical background.
  4. Data and information technology.  Building IT systems that enable seamless data-sharing and information exchange in support of patient-level care coordination.
  5. Communication.  Maintaining honest, open, and transparent communication that supports learning and problem-solving across organizations.
Triple Aim image clipped from http://www.ihi.org/offerings/Initiatives/TripleAim/Pages/default.aspx

Rad Resource:  The surge of interest in ACOs prompted by health reform has resulted in an explosion of new literature.  We found the following articles helpful because they focus less on ACO development and more on implementation.

Hot Tip:  Evaluators can play a strong role in supporting transformations in the health care system that lead to expanded healthcare access and improved overall health status.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Catherine (Brehm) Rain of Rain and Brehm Consulting Group, Inc., an independent research and evaluation firm in Rockledge, Florida. I specialize in Process Evaluation, which answers the questions Who, What, When, Where, and How in support of the Outcome Evaluation. Field evaluations occur in chaotic environments where change is a constant; documenting and managing change using process methods helps inform and explain outcomes.

Lesson Learned: If you don’t know what or how events influenced a program, chances are you won’t be able to explain the reasons for its success or failure.

Lesson Learned: I’m a technology fan, but I’m also pretty old-school. Like Caine in the legendary TV show Kung Fu, I frequently conjure up the process evaluation ‘masters’ of the 1980s and ‘90s to strengthen the foundation of my practice and to regenerate those early ‘Grasshopper’ moments of my career.

Old-school? Or enticingly relevant? You decide, Grasshopper! I’ll share a few with you.

Hot Tip:  Process evaluation ensures you answer questions of fidelity (to the grant, program and evaluation plan): did you do what you set out to with respect to needs, population, setting, intervention and delivery? When these questions are answered, a feedback loop is established so that necessary modifications to the program or the evaluation can be made along the way.

Rad Resource: Workbook for Designing a Process Evaluation, produced by the State of Georgia, contains hands-on tools and walk-through mechanics for creating a process evaluation. The strategies incorporate the research of several early masters, including three I routinely follow:  Freeman, Hawkins and Lipsey.

Hot Tip: Life is a journey – and so is a long-term evaluation. Stuff happens. However, it is often in the chaos that we find the nugget of truth, the unknown need, or a new direction to better serve constituents. A well-documented process evaluation helps programs ‘turn on a dime,’ adapt to changing environments and issues, and maximize outcome potential.

Rad Resource: Principles and Tools for Evaluating Community-Based Prevention and Health Promotion Programs by Robert Goodman includes content on the FORECAST Model designed by two of my favorites (Goodman & Wandersman), which enables users to plot anticipated activities against resultant deviations or modifications in program and evaluation.
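The core FORECAST idea of plotting anticipated activities against what actually happened needs very little tooling to get started. The sketch below is not the FORECAST Model itself, just a hypothetical, minimal illustration of recording planned versus actual activities and the deviations between them so a process evaluation can explain outcomes later.

```python
# Hypothetical, minimal sketch of planned-vs-actual activity tracking in the spirit of
# FORECAST-style process documentation (not the FORECAST Model itself).
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    planned: str                       # what the program intended to do
    actual: str = ""                   # what actually happened in the field
    deviation_notes: list = field(default_factory=list)

    @property
    def deviated(self) -> bool:
        return bool(self.actual) and self.actual != self.planned

activities = [
    Activity("Recruitment", planned="Enroll 100 participants by March",
             actual="Enrolled 62 participants by March",
             deviation_notes=["Referral partner lost funding mid-cycle"]),
    Activity("Curriculum delivery", planned="8 weekly sessions",
             actual="8 weekly sessions"),
]

for a in activities:
    status = "DEVIATION" if a.deviated else "on plan"
    print(f"{a.name}: {status} -- {'; '.join(a.deviation_notes) or 'no notes'}")
```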

Hot Tip: If you give process evaluation short shrift, you may end up with a Type III error, primarily because the program you evaluated is not the program you thought you evaluated!

Rad Resource: Process Evaluation for Public Health Research and Evaluations: An Overview by Linnan and Steckler discusses avoiding Type III error as a function of process evaluation. The authors also discuss the historical evolution of process evaluation through the work of several masters, including but not limited to Cook, Glanz, and Pirie.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, my name is Susan Sloan. I’m a program evaluator with about 20 years of experience – first as an evaluation team leader for Duerr Evaluation Resources in California and now as an internal program evaluator for the Whatcom County Health Department (WCHD) in the Pacific Northwest.

As a small local health department, we are always looking for ways to increase internal evaluation capacity without adding additional staff. If you’ve been reading the news lately, you’ll know that local public health resources are decreasing at an alarming rate. This makes it even more important that the programs we run are effective and that our remaining staff is trained to understand evaluation and to participate as part of an evaluation team when needed.

In order to improve organizational evaluation capacity, I’ve used the Centers for Disease Control and Prevention’s Framework for Program Evaluation in Public Health as a teaching tool (MMWR 1999; 48 (No. RR-11): http://www.cdc.gov/eval/framework.htm).

When I first discovered CDC’s Framework, I was amazed at how well it mirrored the evaluation process I had used for years to evaluate school intervention programs. The best feature of the framework is that it is an easy-to-understand, easy-to-teach six-step process for evaluation. Here at WCHD, we used our Community Health staff meetings to teach the framework over a six-month period. To make the training come alive, we used examples from an in-process evaluation of our Children with Special Health Care Needs (CSHCN) program along with a staff-created evaluation of a mythical public health trails infrastructure campaign. The culminating activity was a short report written by staff.

Hot Tip: The first AEA Coffee Break focused on DoView®, a modeling software developed by Dr. Paul Duignan: http://www.doview.com/ We have purchased several copies of this software as a wonderful complement to our use of the CDC Six-Step Framework. We are now able to create evaluation models that walk us through the framework. Our DoView® models include: (1) a program overview, including overall goals and major program components, (2) a comprehensive listing of all internal and external stakeholders, (3) a flow chart of each major program component, (4) a logic model, (5) an evaluation design (including the evaluation mission, major questions, methods, assignments, and timelines), and (6) reporting of evaluation findings. All of this can be easily shared with team members or partners through DoView files, PDFs, or HTML documents.
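For teams without DoView®, the same six components can still be kept together in a simple, shareable structure. The sketch below is not DoView; it is only a hypothetical Python outline of the model elements listed above, kept as plain data so it can be exported to whatever format partners need.

```python
# Hypothetical outline (not DoView) of the six model components described above,
# kept as plain data so it can be exported or shared with partners.
import json

evaluation_model = {
    "program_overview": {"overall_goals": [], "major_components": []},
    "stakeholders": {"internal": [], "external": []},
    "component_flow_charts": {},          # one flow chart per major program component
    "logic_model": {"inputs": [], "activities": [], "outputs": [], "outcomes": []},
    "evaluation_design": {
        "mission": "",
        "major_questions": [],
        "methods": [],
        "assignments": [],
        "timelines": [],
    },
    "findings_reporting": {"audiences": [], "products": []},
}

# Share the outline as a file partners can open anywhere.
with open("evaluation_model.json", "w") as f:
    json.dump(evaluation_model, f, indent=2)
```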

The CDC Framework combined with the DoView® software has allowed us to create an evaluation toolkit that meets the ever-challenging needs for local health department evaluation capacity building.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.


I’m Jim Burdine, Assistant Dean for Public Health Practice and Co-PI/Director of a Prevention Research Center at the School of Rural Public Health, Texas A&M. Over the last 30 years I’ve used community health status assessment as both a community organizing tool and intervention planning tool. In more than 200 different communities (including multiple iterations in the same community) I’ve presented/seen data presented from community assessments in a number of different formats with varying degrees of success and failure.

Lessons Learned: What I’ve observed to be most important is presenting the data in a manner that matches the expectations of the audience – audiences, really. In other words, the groundwork you’ve already done in your assessment process (hopefully incorporating community-based participatory research principles) should dictate the format. If community members have been involved in planning and conducting the assessment and analyzing the data, they should play the major role in presenting the results. If they have been more passively involved, they may expect a “report” FROM you. Obviously, the degree of “buy-in” to the findings varies dramatically as a function of the degree of participation.

Given that starting place, the next challenges you face are: (1) the sheer volume of information you have to present, and (2) the variation in data sophistication within a community audience. So first, you have to accept that you can’t present EVERYTHING. Decide what key points you want to make and focus on those. We’ve all sat through a presentation where somebody reads us the demographics of a community or lists every chronic disease ever found in that population, and an hour later we’ve learned nothing new. As a general rule, I don’t bother to report anything unless (1) it is statistically significantly different from some external reference point (e.g., Healthy People 2010, a state or national rate) and (2) there is something that could likely be done locally to impact that problem (it’s actionable).
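Those two reporting criteria amount to a simple filter. Here is a hypothetical Python sketch, with made-up indicators, of keeping only the findings that are both significantly different from an external reference point and locally actionable.

```python
# Hypothetical sketch of the two reporting criteria: keep an indicator only if it is
# (1) significantly different from an external reference point and (2) locally actionable.
indicators = [
    {"name": "Adult diabetes prevalence", "p_vs_reference": 0.01, "actionable": True},
    {"name": "Median age",                "p_vs_reference": 0.40, "actionable": False},
    {"name": "Uninsured rate",            "p_vs_reference": 0.03, "actionable": True},
]

ALPHA = 0.05
to_report = [i for i in indicators if i["p_vs_reference"] < ALPHA and i["actionable"]]

for item in to_report:
    print("Report:", item["name"])   # only significant AND actionable findings make the cut
```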

If the assessment is well planned, you will have representatives from all community sectors in your audience (e.g., health care, business, elected officials, religion, education, the media, consumers, and representatives of special interest/special needs groups). So you have to decide what common denominators (e.g., educational attainment, exposure to health statistics) you are going to assume for your audience. You need to be comfortable knowing that some people aren’t going to understand everything you say and others are going to be bored with your “simplistic” presentation. Don’t make the mistake of trying to explain every point to each group in your audience; it just frustrates them and makes for a very ineffective presentation. Plan to do multiple presentations for different audiences rather than a “one size fits all” presentation.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.



Hazel Atuel on Community Competency

Hi!  My name is Hazel Atuel and I am the Research Program Director at the Comprehensive San Diego State University/UC San Diego Cancer Center Partnership. Today I will be sharing some hot tips and rad resources on community competency.

I am grateful to Dr. Bob Robinson, Associate Director for Health Equity of the Office on Smoking and Health, National Center for Chronic Disease Prevention and Health Promotion (CDC, Ret.), for sharing his Community Model for Eliminating Health Disparities. Dr. Robinson’s theoretical framework is quite comprehensive, and I would like to highlight the various dimensions of community competency, one of the major components of the model, as they will serve us well in the work we do.

To illustrate the need for community competency, let me share one of Dr. Robinson’s stories. When a CDC-funded REACH program in a northeastern state conducted a study almost a decade ago, data collection involved recruiting Cambodian refugees to participate in the project. As the study progressed, reports from the field reflected almost no success in engaging participants from this target population. Only after someone provided a cursory review of Cambodian history did the program leaders understand where the resistance stemmed from: asking people to “participate in a study” was one of the strategies the Khmer Rouge used to entice people to leave the city for the rural areas – a path that led to the killing fields.

Had the researchers done their historical homework prior to collecting the data, recruitment strategies would have been very different. How then do we move forward as community competent evaluators?

Hot Tip: A first step is to differentiate clearly between community competency and cultural competency, as the two constructs are not synonymous. While cultural competency focuses on the individual, the unit of analysis in community competency is the community or group. Second, the primary constructs of community competency are history, culture, context, and geography, and the secondary constructs are language, literacy, positive and salient imagery, multigenerational appeal, and diversity (Robinson, 2005, pp. 339-340). Although some of these constructs are self-explanatory, I refer the reader to the Rad Resource below for in-depth reading. I think it wise to generate a community competence checklist so we can be better equipped as evaluators for programs that serve diverse communities.
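A community competence checklist could begin as simply as the sketch below, which turns the primary and secondary constructs from Robinson (2005) into questions to answer before fieldwork. The questions themselves are hypothetical illustrations, not a validated instrument.

```python
# Hypothetical starting point for a community competence checklist, organized around the
# primary and secondary constructs from Robinson (2005); the questions are illustrative only.
checklist = {
    "primary": {
        "history":   "Have we reviewed the community's relevant history before recruiting?",
        "culture":   "Do our methods and materials reflect the community's culture?",
        "context":   "Do we understand the current social and political context?",
        "geography": "Have we accounted for where the community lives and gathers?",
    },
    "secondary": {
        "language": "Are materials available in the community's languages?",
        "literacy": "Are materials written at an appropriate literacy level?",
        "positive and salient imagery": "Do the images used resonate with the community?",
        "multigenerational appeal": "Does the approach engage multiple generations?",
        "diversity": "Does the plan reflect diversity within the community?",
    },
}

items = [q for level in checklist.values() for q in level.values()]
print(f"{len(items)} questions to answer before data collection begins")
```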

RAD RESOURCE: Robinson, R.G. (2005).  Community Development Model for Public Health Applications: Overview of a Model to Eliminate Population Disparities. Health Promotion Practice, 6, 338-346.

Robinson, R., and Holliday, R. (2009). Tobacco-use and the Black Community: A Community-focused Public Health Model for Eliminating Population Disparities. In R. Braithwaite, S. Taylor, and H. Treadwell (Eds.), Health Issues in the Black Community (pp. 379-416). San Francisco, CA: Jossey-Bass.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.


I’m Tom Chapel, the Chief Evaluation Officer (acting) at the Centers for Disease Control and Prevention. I’ve also served as co-chair of the AEA/CDC Summer Evaluation Institute since its inception.  The Institute turns 10 years old this June and, with that in mind, I wanted to share a lesson learned and a couple of great resources for evaluators.

Lesson learned: Understand the difference between “primary” and “secondary” demand for a product or service.

  • Primary demand – milk is good for you… Got milk?
  • Secondary demand – buy [brand] milk because it is locally farmed/cheap/vitamin-reinforced, etc.

This simple, fundamental marketing principle has import for evaluators.  The primary demand we draw on is the desire of programs to make an impact, understand their program, and report success.  Sometimes that secondary demand plays out as what we recognize as “program evaluation,” but just as often the relevant product/approach is performance measurement, quality assurance, or strategic planning.  Sometimes I’ll be leading a leadership meeting for one of our programs.  I will be creating some simple logic models with them, but deploying those to affirm mission/vision and make some strategic decisions.  The word evaluation may not even come up.  But by being present and involved in that conversation, and using key tools in my evaluator arsenal, I know I’m setting them up for strong evaluation later.  By reframing our thinking as evaluators so that we talk about organized reflection on a program – whether processes or outcomes – we reinforce the idea of continuous program improvement and the integration of planning, performance measurement, and evaluation.

Two useful resources from CDC:

Resource: CDC’s Framework for Program Evaluation, while originating within public health, is broadly applicable in many contexts and reinforces the idea of use of findings for program improvement. The Framework’s website provides a detailed explanation of the framework as well as multiple resources that support its implementation.

Resource: CDC’s National Center for Injury Prevention and Control just released Evaluation for Improvement: A Seven-Step Empowerment Evaluation Approach. This manual is designed to help violence prevention organizations hire an empowerment evaluator who will assist them in building their evaluation capacity through a learn-by-doing process of evaluating their own strategies. But any organization considering empowerment evaluation may also find it valuable.

Hot Tip: If you want to enhance your evaluation-related knowledge and skills, join over 500 of your colleagues at the AEA/CDC Summer Evaluation Institute. This is our 10th year; we welcome attendees from any discipline to Atlanta from June 13-16 for professional learning and networking.
