AEA365 | A Tip-a-Day by and for Evaluators

Category: Human Services Evaluation

My name is Gary Resnick and I am the Director of Research at Harder+Company Community Research, a California-based consulting firm. My background combines program evaluation with child development research, and I have an interest in systems theory and networks.

Harder+Company has been involved in evaluating First 5 programs in a number of California counties. First 5 arose from Proposition 10 (1998), which added a tax on tobacco products, with the funds distributed to counties to support local programs that improve services for children from birth to age 5 and their families. An important goal of First 5 funding is to act as a catalyst for change in each county’s systems of care. To measure system change, we focused on inter-agency coordination and collaboration. Increases in coordination and collaboration would indicate that agencies are better able to share resources and clients, reduce redundancies and service gaps, and increase efficiency.

Rad Resource: The Levels of Collaboration Scale assesses collaboration, has excellent psychometric properties, and can be administered to agency respondents in web-based surveys. To see it in action, check out this article in the American Journal of Evaluation. The scale was originally a 5-point Likert scale; we combined the two highest scale points to create a 4-point scale that is easier for respondents to use.

Hot Tip: Start by defining the network member agencies using objective, clear, and unbiased criteria. Later, you can expand the network by asking respondents to nominate up to three additional agencies with whom they interact.

Hot Tip: Select at least two respondents from each organization (three is better), drawn from different levels of the organization: administrators and managers as well as direct-line staff.

Lesson Learned: It is important to have complete, reciprocal ratings for each agency (even if not from all respondents). If you have too much missing data at the agency level, consider excluding the agency from the network.
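
For readers who script their analyses, here is a minimal sketch (in Python with pandas) of how respondent-level ratings might be rolled up to the agency level and screened for missing data before mapping. The file and column names (collaboration_ratings.csv, rater_agency, rated_agency, rating) are hypothetical, and the 50% coverage cutoff is only an illustration, not a rule from the scale's authors.

```python
# Sketch: roll respondent-level Collaboration Scale ratings up to the agency level
# and flag agencies with too much missing data. File and column names are hypothetical.
import pandas as pd

ratings = pd.read_csv("collaboration_ratings.csv")  # one row per respondent x rated agency

# Collapse the two highest points of the original 5-point scale into one (4-point scale)
ratings["rating"] = ratings["rating"].clip(upper=4)

# Mean rating for each directed agency pair (rater agency -> rated agency)
pair_means = (
    ratings.groupby(["rater_agency", "rated_agency"], as_index=False)["rating"].mean()
)

# Share of potential partner agencies that each agency actually rated
agencies = sorted(set(pair_means["rater_agency"]) | set(pair_means["rated_agency"]))
coverage = (
    pair_means.groupby("rater_agency")["rated_agency"].nunique() / (len(agencies) - 1)
)

# Agencies that rated fewer than half of their potential partners are candidates for exclusion
print("Consider excluding:", coverage[coverage < 0.5].index.tolist())
```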

Hot Tip: Use NetDraw, a free Windows program, to produce two-dimensional network maps from agency-level Collaboration Scale ratings. See our maps here. The maps place the agencies most involved with other agencies at the center of the map (key players) and those least involved at the periphery of the network. Add agency attributes (e.g., geographic region served) to map subgroups of your network.
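
NetDraw is the tool we use; for evaluators who prefer a scriptable route, the sketch below uses Python's networkx and matplotlib to draw a comparable map from agency-level ratings, with node color keyed to an agency attribute. The edge list and the "region" attribute are invented for illustration only.

```python
# Sketch of a NetDraw-style map using networkx/matplotlib (an alternative tool,
# not the one described above). Agency names, ratings, and "region" are hypothetical.
import networkx as nx
import matplotlib.pyplot as plt

# (agency, agency, mean collaboration rating on the 4-point scale)
edges = [
    ("Agency A", "Agency B", 3.5),
    ("Agency A", "Agency C", 2.0),
    ("Agency B", "Agency C", 4.0),
    ("Agency C", "Agency D", 1.5),
]

G = nx.Graph()
for a, b, w in edges:
    G.add_edge(a, b, weight=w)

# Attribute used to color subgroups (e.g., geographic region served)
region = {"Agency A": 0, "Agency B": 0, "Agency C": 1, "Agency D": 1}

# Spring layout pulls highly connected agencies toward the center of the map
pos = nx.spring_layout(G, weight="weight", seed=42)
nx.draw_networkx(
    G,
    pos,
    node_color=[region[n] for n in G.nodes()],
    cmap=plt.cm.Set2,
    width=[G[u][v]["weight"] for u, v in G.edges()],
)
plt.axis("off")
plt.show()
```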

Hot Tip: Produce two sets of maps, one with no agency labels for public reporting, and another with agency labels, for internal discussions with clients and agencies. Convene a meeting with the agency respondents and show them the maps with agency labels, to help them understand where they stand in the network and to foster collaboration.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

Greetings. We are Bill Shennum and Kate LaVelle, staff members in the Research Department at Five Acres, a nonprofit child and family services agency located in Altadena, CA and serving the greater Los Angeles area. We work as internal evaluators to support outcome measurement and continuous quality improvement within the organization.

In our roles as internal evaluators, we work with agency staff to develop data collection for key processes and outcomes and assist staff in developing program improvement goals and activities. The quantitative and qualitative data included in our internal evaluation reports also support other administrative functions, including grant writing, accreditation, and program development.

Lessons Learned: In the course of this work we find it useful to incorporate data from our two primary funders, the Los Angeles County Departments of Children and Family Services (DCFS) and Mental Health (DMH). We use these data for a variety of purposes, such as to compare our agency’s outcomes to other service providers in LA County, establish benchmarks for child and program outcomes, and provide information on trends in the child welfare field to inform program development. Both DCFS and DMH make extensive statistical information available to the public on their websites.

Rad Resources:

1. Los Angeles County DCFS (http://dcfs.co.la.ca.us/) provides clickable fact sheets on its “About Us” tab, covering everything from demographics and maltreatment statistics to placement trends and foster care resources. The site has many other reports, including Wraparound performance summaries and individual group home compliance reports.

2. Los Angeles County DMH (http://psbqi.dmh.lacounty.gov/) also makes statistical information of interest to evaluators available through its Program Support Bureau. The “Data Reports and Maps” link accesses countywide and area-specific demographic and performance data for child and adult mental health, including geographic information system mapping of mental health resources.

Southern California evaluators who work in child welfare and/or mental health will find much information of interest on the above sites. More outcomes and reports are added every year, so check back often.

 

Hot Tip: For those of you visiting Anaheim for the 2011 American Evaluation Association conference and interested in going to the beach, check out the surf at the Huntington Beach Pier, about 10 miles from the conference headquarters hotel. The pier is the centerpiece of Southern California’s original “Surf City” and a perfect place to take a break from the conference and check out the local beach scene.

The American Evaluation Association is celebrating this week with our colleagues at the Southern California Evaluation Association (SCEA), an AEA affiliate. The contributions all this week to aea365 come from SCEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Bonnie Richards, and I am a Research Associate with Vital Research, an evaluation and consulting company located in Los Angeles, California. The Social Services sector is among the industries we serve when designing and implementing evaluation projects.

Lessons Learned: So, what frames a Social Services evaluation project? Consider how values and valuing impact the following.

Perspective

  • Many different stakeholders may be involved at some point, as either sources of data or as informants for conducting the project itself.
    • The agency or service provider itself may be interested in an evaluation.
    • Funders and foundations are increasingly interested in the impact of their funding.
    • Program staff, service recipients and their families, community members, policy makers and others may be involved.
  • The purpose and value of conducting an evaluation may not be clear. Be prepared to explain your work to different audiences.

Budget

  • It may not come as a surprise that budget is hugely influential.
    • Internally, budget conservatively and realistically (or you may find yourself volunteering some pro bono services). What are the critical components that must be included in the project?
    • External budgets (e.g., state or federal) and their fluctuations (e.g., cuts or increases) can have a huge influence on your work.
  • Social Service organizations may be under severe stress during times of funding crisis. This can strain communication, involvement, and cooperation.

Data

  • Do your funders value and understand data? Why are they collecting it? Are they a learning organization that is self-motivated to understand impact, or are they simply meeting a requirement from higher up?
  • Are agencies tracking data? They might need help developing a system for tracking information of interest. Consider what kinds of data you may be able to obtain:
    • Indicators? (e.g. attendance or other numbers)
    • Outcomes? (i.e. an observable difference in attitude or behavior)
  • Consider how findings will be used. What are the potential implications of making a judgment about the overall merit or worth of a program?

Lesson Learned: An evaluator should endeavor to be conscientious, intentional, open-minded, and flexible, particularly in the context of a Social Services evaluation project.

Hot Tip: If you work with stakeholders who are unfamiliar with research and might have difficulty interpreting data, consider the ways you can relay findings in meaningful ways. Consider attending a workshop or session on new and creative visual displays that can be used in presentations and reports for clients.

 

Rad Resource: Orient yourself to any major evaluation theorists before attending a session with them. Evaluation Roots: Tracing Theorists’ Views and Influences, edited by Marv Alkin, is a great resource for understanding different theoretical approaches, including Valuing, this year’s conference theme.

 

The American Evaluation Association is celebrating this week with our colleagues at the Southern California Evaluation Association (SCEA), an AEA affiliate. The contributions all this week to aea365 come from SCEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Ariana Brooks and I am the Director of Evaluation, Research and Planning for HeartShare Human Services.

Lesson Learned: When I started as an internal evaluator, my supervisor, Stan Capela, stressed one main point: evaluation does not solve management problems. My initial reaction was that it made sense, and I remembered similar issues being discussed in graduate school, but I did not fully grasp the meaning until I was performing my job responsibilities. Specifically, each report was producing similar results. At first I was naively shocked at the level of resistance from some managers. We were well versed in Patton’s Utilization-Focused approach, so we focused on providing meaningful reports, but there was resistance even though we would repeatedly tell managers that the “numbers don’t lie.”

Lesson Learned: As a social psychologist, I reflected on various theories that helped explain their behavior. Of course, people interpret stimuli based on their own perspective. They are motivated to preserve a positive sense of self and are more resistant to counterattitudinal messages, especially if they are highly invested in the issue (e.g., their job). So it made sense that when an internal audit shows that a program’s deficiencies have more to do with supervision or program administration, it can be hard for management to swallow.

Although it is frustrating when management’s resistance to change can reduce the utility of evaluation work, it is fascinating to see how the theories I studied play out in an organization. Borrowing from evaluation and social psychology theories, here are some tips that helped me combat and understand resistance:

  • Hot tip: Think about the source of the message, or evaluation results. The source should be respected, seen as having expertise, trusted and viewed as an in-group member (someone also invested in the program or in a similar role).
  • Rad Resource: The appreciative inquiry approach to evaluating programs has been met with great success. Managers are more willing to be involved and use evaluation results when they carry a more positive tone. Focusing on management’s strengths to overcome program challenges has proved to be a more useful approach. A great resource online is: http://appreciativeinquiry.case.edu/
  • Hot tip: Avoid any language that seems targeted towards certain individuals, roles or positions. Make the responsibility of overcoming challenges a group effort, including the evaluator.
  • Hot tip: Take a sign of defensiveness as a positive. Often it is a sign that staff are truly invested in the program and their work. Directing this energy toward more productive ends can be a bit of a struggle but is rewarding in the long run.

 

The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Susan Fojas. As the Associate Commissioner for Performance Measurement, Monitoring, and Improvement at New York City’s Administration for Children’s Services (ACS), my goal is to improve the quality of services and outcomes achieved for children in foster care through evaluation and monitoring.

Hot Tip: A valuable part of the evaluation process is a monthly forum hosted by the advocacy organization to which most foster care providers belong, the Council of Family and Child Caring Agencies. The forum allows valuable discussion of evaluation data in the context of practice issues among provider agencies, ACS, and the New York State Office of Children and Family Services. Discussion focuses on sharing performance improvement strategies and strengthening the evaluation system.

Lesson Learned: Productive discussions occur when we can dig into practice issues that inform what the evaluation data mean. This happens while interpreting exploratory data in the development of a new measure or reviewing results of the past year. For instance, in looking at our shared ability to place siblings together in foster homes, we explored patterns of sibling placements in a way that we hadn’t seen before. Sharing system-wide data that individual providers cannot normally access allowed them an expanded perspective beyond the experience of their agency, showing how placements differed across providers. We were able to have an informed discussion of the practice that impacts the data – the placement process, the experience for siblings, and barriers to placing siblings together – and how to develop a measure of performance that acknowledges where practice is and where we want it to go. Providers could then develop individualized approaches to sibling placements to achieve the goals we set for the system. Often, we also identify areas of system-wide challenges that require the public and private sides to partner in creating changes. This can have beneficial impacts on practice at the provider and system-wide levels.

Lesson Learned: Of course, discussion also centers on common limitations inherent in evaluation. We deal with limits in how legacy data systems, not originally designed to produce performance indicators, can be used for evaluation. We encounter unique situations in complex foster care cases that fall outside the bounds of what standardized evaluations can capture. We deal with how evaluation of a system serving 15,000 children can still run into small sample sizes when measuring specific areas of practice. Addressing these issues, we acknowledge ones we can’t change and evaluate reasonably, but in the best of worlds we move past these issues and focus on discussions that strengthen practice. The communication can sometimes be tough, but we are better as a system for it. The evaluation becomes stronger, which gives us the information we need to improve and move forward as a group.

The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Caroline DeWitt. I am a senior evaluator with Human Resources and Skills Development Canada in Ottawa, Canada, and a member of the AEA Government Evaluation TIG. Our department is responsible for several evaluations, ranging from labour market outcomes to grants and contributions programs delivered through third parties.

One of my responsibilities was undertaking an impact evaluation of an initiative delivered through a third party, the not-for-profit sector. Through the Voluntary Sector Initiative, the federal government and the voluntary sector found new ways to come together to achieve common goals.

Our challenge was how to partner and collaborate with the voluntary sector on a federal government/voluntary sector impact evaluation. Federal government evaluations are usually independent, conducted at arm’s length while working with internal stakeholders.

Here are three activities that led to a successful outcome.

Hot Tip One – Build trust with key sector representatives and keep them informed throughout the process. We formalized the process and established a Joint Evaluation Steering Committee (JESC) to oversee the impact evaluation of the initiative.

Hot Tip Two – Achieve consensus on evaluation methodology by providing a forum where stakeholders participate in discussions on the evaluation design, data, indicators, and expected outcomes of the Initiative. For example, we convened an international conference entitled “Measurements of Partnerships,” where stakeholders were given an opportunity to dialogue and share ideas.

Hot Tip Three – Ensure transparency throughout the process with an effective governance framework. We scheduled ongoing JESC meetings to ensure that all parties received the evaluation information including draft reports. Stakeholder concerns were addressed and their comments were incorporated in the reports.

Buy-in at the start of the process led to insightful comments. Ongoing engagement was also important throughout the process. A final report was signed off by all key stakeholders.

The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Hello! My name is Sudharshan Seshadri and I am currently pursuing my Master’s degree in Professional Studies, specializing in Humanitarian Services Administration.

I have come to realize that data is one of the most promising ways to understand the activity of evaluation. To meet our data needs, I believe that, as evaluators, we should make a conscious effort to explore the resources available in all forms of ubiquitous information.

I would like to share a few resources that are promising for beginners in the conduct of evaluation. For ease of use, I have classified the resources under three headings:

Rad Resources for Program Planning

1. The Ohio State University Extension Evaluation Bulletin – a systematic approach to designing and planning program evaluations. (http://ohioline.osu.edu/b868/)

2. Program Planning – Program Development and Evaluation (PD&E), UWEX. (http://www.uwex.edu/ces/pdande/planning/index.html)

3. Planning a Program Evaluation: Worksheet (Cooperative Extension). (http://learningstore.uwex.edu/assets/pdfs/G3658-1W.PDF)

4. Evaluation Design Checklist, Daniel L. Stufflebeam, The Evaluation Center, Western Michigan University. (http://www.wmich.edu/evalctr/checklists/)

5. Key Evaluation Checklist (KEC), Michael Scriven. (https://communities.usaidallnet.gov/fa/system/files/Key+Evaluation+Checklist.pdf)

Rad Resources for Program Implementation, Monitoring, and Delivery

1. W.K. Kellogg Foundation Evaluation Handbook. (http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx)

2. Program Manager’s Planning, Monitoring and Evaluation Toolkit, Division for Oversight Services, Tool Number 5. (http://www.unfpa.org/monitoring/toolkit.htm)

3. Evaluation Models: Viewpoints on Educational and Human Services Evaluation, Second Edition, edited by Daniel L. Stufflebeam, George F. Madaus, and Thomas Kellaghan. (http://www.unssc.org/web/programmes/LS/unep-unssc-precourse-material/7_eVALUATIONl%20Models.pdf)

Rad Resources for Program Utilization

1. Utilization-Focused Evaluation, Michael Q. Patton, Fourth Edition, Sage Publications.

2. Independent Evaluation Group (IEG), The World Bank Group – improving development results through excellence in evaluation. (http://www.worldbank.org/oed/)

3. My M&E – a platform for sharing knowledge and practice among M&E practitioners worldwide. (www.mymande.org)

4. EvaluATE – an evaluation center operated by Western Michigan University, specializing in National Science Foundation (NSF) evaluations. (www.evalu-ate.org)

5. United Kingdom Evaluation Society (UKES) – Resources/Evaluation Glossary. (http://www.evaluation.org.uk/resources/glossary.aspx)

Lessons Learned: Always take the initiative in searching for the data you need. In the information age, there is a plethora of evaluation work underway all over the world. Data act as a gateway to useful and significant research practices in the evaluation profession. I regard benchmarking as an outcome of consistent resource searching and utilization.

Hot Tip #1: How long can you stare at the Google search engine screen for your data needs? Expand your search through a multitude of web resources.

Hot Tip #2: Use networking to get instant responses to your queries. It adds a new dimension to your learning and practice methods. For example, I created a Facebook page named “The Evaluation Library” for the books, references, and tools I use frequently in my evaluation work.

Hot Tip #3: Easy access to data fuels your interest in digging deeper. Stack or list all of your resources on a platform that you visit frequently.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

My name is Stanley Capela. I am the Vice President for Quality Management at HeartShare Human Services of New York and the current Chair of AEA’s Government Evaluation TIG.

Major Hot Tip: If you attend the AEA Conference this November, attend our business meeting, where we will celebrate 20 years of Government Evaluation. Joseph Wholey, as well as the past and current chairs of the TIG, will discuss the evolution of Government Evaluation. It will not only be a very thought-provoking discussion but, more importantly, will provide a fascinating ride as we look back, share thoughts on the present, and look into the future.

Lessons Learned – GOV + CA / COL = QE*: After 32 years working in the field, one aspect I have enjoyed most is serving on committees that include a government agency and its contract agencies. I have been fascinated watching a government agency convene agencies to foster a dialogue to develop performance measurement systems for those very same agencies. First, there is a problem of trust, since you are asking stakeholders to come up with a system to evaluate their own performance. Second, the government representative may not have the power to implement any of the recommendations made by the group. Finally, some may conclude it is a useless exercise because the issues raised by the group are never resolved, for a variety of reasons.

What I have found recently is that, when the process does work well, it encompasses a number of key ingredients:

  • Understanding among the participants that the ultimate goal will be achieved as a result of the collaboration.
  • Mutual understanding of what will be accomplished, that it is measurable, and that there are clear definable indicators.
  • Identification of roles by the government entity, along with an honest discussion of what is and what is not negotiable.
  • Follow through on the recommendations once consensus is reached.
  • An explanation when a recommendation is not doable, so that no one misunderstands and does the exact opposite of what the group agreed to.
  • A trial period to test recommendations, with recommendations implemented well in advance of the funding period so that agencies have time to put internal evaluation systems in place to monitor program performance.
  • Access to information on best practices that is shared among the various entities.

In the end, when you foster an environment that encourages honest dialogue, it offers the opportunity to create a performance measurement system that not only ensures quality services, but also best meets the needs of the individuals served by these agencies.

*Government + Contracting Agencies / Collaboration = Quality Evaluation

The American Evaluation Association is celebrating Government Evaluation Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOVT TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Government-focused evaluation resources. You can also learn more from the GOVT TIG via their many sessions at Evaluation 2010 this November in San Antonio.


My name is Virginia Dick and I am currently public service evaluation faculty at the Carl Vinson Institute of Government at the University of Georgia. Most of my work focuses on assisting state and local government agencies, and other university divisions, with evaluation of programs, policies and systems.

As part of my role I often find myself working with a wide range of individuals with different backgrounds, perspectives, purposes, and information assessment styles. It has been important to find ways to help different groups examine and understand relevant evaluation data using a wide range of mechanisms.

Most recently, I have begun working with our state child welfare agency to use GIS (Geographic Information Systems) methods to examine child welfare client characteristics and outcomes spatially through mapping. Often key stakeholders (community members, agency leadership, and social work students) have expressed new and interesting perspectives and interpretations of the data when it is portrayed via mapping rather than in traditional charts and tables.

Rad Resource: ESRI (http://www.esri.com/) often provides free training and educational opportunities for working with its mapping software, and these may be available through some universities.

There are also many open-source software options out there, some of which I am currently exploring with the University of Georgia Information Technology Outreach Service for my current project. A list of open-source options is available at: http://gislounge.com/open-source-gis-applications/

Hot Tip: When working with a group reviewing the data and relationships between variables, start with a few layers and options on the map and slowly build and add additional components as the individuals start to become more comfortable talking about the relationships between the different variables.

Hot Tip: Using census tracts as units allows groups to discuss the relationships between variables without having to dig down to the individual street-address level, which can become much more complicated when compiling the maps. Analysis at the census-tract level is often more beneficial to communities and government agencies than analysis at the street-address level.
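
For evaluators who want to try the census-tract approach with open-source tools, here is a hedged sketch using the geopandas library (one of the open-source options referenced above). The shapefile names, the GEOID column, and the case-count indicator are all assumptions for illustration, not files from the project described here.

```python
# Sketch: aggregate case-level points to census tracts and map an indicator.
# File paths, column names, and the indicator are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

tracts = gpd.read_file("census_tracts.shp")   # tract polygons with a GEOID column
cases = gpd.read_file("case_locations.shp")   # geocoded case points

# Assign each case to the tract that contains it, then count cases per tract
cases_in_tracts = gpd.sjoin(cases, tracts[["GEOID", "geometry"]], predicate="within")
counts = cases_in_tracts.groupby("GEOID").size().rename("case_count").reset_index()

tracts = tracts.merge(counts, on="GEOID", how="left").fillna({"case_count": 0})

# Choropleth: darker tracts have more cases
tracts.plot(column="case_count", cmap="Blues", legend=True)
plt.axis("off")
plt.show()
```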

Hot Tip: Let the stakeholders generate the ideas and discussion among themselves to get the richest information about the perceived relationship between variables. This is particularly useful when looking at small units such as counties or smaller (with the mapping done at the census tract or block level).

Want to learn more about Virginia’s work using GIS? Come to the poster exhibition on Wednesday evening in San Antonio this November for AEA’s Annual Conference.

· · · ·
