AEA365 | A Tip-a-Day by and for Evaluators


Hello! We’re Judy Savageau and Laura Sefton from the Center for Health Policy and Research (CHPR) at the University of Massachusetts Medical School. As evaluators, we carry in our minds all the steps required to complete each project, yet we must also consider quality improvement in our work: internally, in our methodologies, and externally, in our reports and work with stakeholders. LEAN thinking, a quality improvement approach, translates quite easily to the research and evaluation arenas. LEAN doesn’t necessarily mean reducing our work to the bare minimum to save on limited resources; it’s a way of thinking guided by REDUCING WASTE. To do this, LEAN promotes the use of Standard Operating Procedures (SOPs). SOPs can result when we consider each day’s tasks, notice where we duplicate efforts and could be more efficient, organize our materials to spend our time well, and use a checklist to help us remember key steps as well as additional considerations.

As part of our own internal quality improvement efforts in the CHPR Research and Evaluation Unit, we’ve begun to develop SOPs for our day-to-day work. These include such topics as purchasing participant incentives, in-person data collection activities, and participant outreach.

Hot Tips:

  • Create Steps – Pull apart a recent project that you completed, jotting down each large ‘step’ from start to finish (e.g., literature review/environmental scan, data collection, IRB application, data analysis, report and presentation generation).
  • Focus In – Focus in on each ‘step,’ listing the various sub-steps involved in that one activity. For instance, within data collection, one might need to identify the sample and sampling methodology, develop recruitment tools to reach those identified, develop confirmation materials for those who agree to be part of the evaluation, identify location(s) for focus groups, obtain signed consent forms, and provide incentives.
  • Review and Comment – Put together a review team of 2-3 colleagues who can comment on and edit these SOPs before wider dissemination to your working groups of colleagues.
  • Allow Iteration – Include a place on each SOP for users to provide comments and suggestions for updates; this makes SOPs working documents that support continuous quality improvement. (A bare-bones template sketch follows this list.)
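
One illustrative way to capture this structure is a small sketch like the one below; the field names, example tasks, and checklist format are hypothetical, not an actual CHPR SOP.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    """One focused activity within the SOP, plus its sub-steps."""
    name: str
    sub_steps: List[str] = field(default_factory=list)


@dataclass
class SOP:
    """A one-page standard operating procedure kept as a living document."""
    title: str
    purpose: str
    steps: List[Step] = field(default_factory=list)
    reviewers: List[str] = field(default_factory=list)        # the 2-3 colleagues who vet the draft
    revision_notes: List[str] = field(default_factory=list)   # user comments that drive updates

    def checklist(self) -> str:
        """Render the SOP as a simple checklist for day-to-day use."""
        lines = [f"SOP: {self.title} ({self.purpose})"]
        for step in self.steps:
            lines.append(f"[ ] {step.name}")
            lines.extend(f"    [ ] {sub}" for sub in step.sub_steps)
        return "\n".join(lines)


# Hypothetical example: a data collection SOP broken into steps and sub-steps.
sop = SOP(
    title="Focus group data collection",
    purpose="standardize how we plan and run focus groups",
    steps=[
        Step("Identify sample and sampling methodology"),
        Step("Recruit participants",
             ["Develop recruitment tools", "Send confirmation materials"]),
        Step("Run the session",
             ["Reserve location", "Obtain signed consent forms", "Provide incentives"]),
    ],
    reviewers=["Reviewer 1", "Reviewer 2"],
)
print(sop.checklist())
```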

Lessons Learned:

  • Creating a template can provide a framework for developing an SOP, ensuring that the salient categories are included and the documents are somewhat standardized across a variety of topics.
  • Using reviewers who are not familiar with your projects can help to ensure your SOPs are clear and comprehensive. They can identify missing information or unclear steps.
  • Each SOP should be about a very focused activity – this keeps most of them to a single page and thus easy to use and implement.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hi, we’re Kristy Moster and Jan Matulis. We’re evaluation specialists in the Education and Organizational Effectiveness Department at Cincinnati Children’s Hospital Medical Center.

Over the past year, our team has been engaged in the analysis of data from a three-year project with the Robert Wood Johnson Foundation focused on quality improvement training in healthcare. The data from the project includes information from surveys, interviews, knowledge assessments, observations of training, document analysis, and peer and instructor ratings of participants’ projects. Our task as a team was to pull all of the information together to create a clear, accurate, coherent story of the successes and challenges of quality improvement training at our institution. This work was also discussed as part of a roundtable at the AEA Conference in November 2011.

Hot Tip:

  • Create a visual framework. Guided by an example found in transdisciplinary science, we created a visual framework to represent the extensive data and data sources from the project, including their interconnections. Starting with the logic model, we identified a set of themes being addressed by the evaluation, and then matched individual survey items, interview questions, etc. to the themes. From there we mapped the connections between the data sources and themes. This framework helped to create a shared understanding of the data for our research team, some of whom were fairly new to the project when the analysis began. It also provided structure to our thinking and our work. For example, the framework helped us to ensure that all themes were addressed by multiple data sources and also to determine which data sources to target first for different phases of our analysis (in our case, those sources that addressed the most themes of highest interest). A small sketch of this kind of mapping appears after the figure below.
Partial Example of the Measurement and Analysis Framework
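
One illustrative way to work with such a framework, assuming invented theme and source names, is to record it as a simple theme-by-data-source mapping and run the two checks described above (every theme covered by multiple sources; the sources touching the most themes analyzed first):

```python
from collections import Counter

# Theme-by-data-source mapping; all theme and source names here are invented.
framework = {
    "QI knowledge gains":         ["survey", "knowledge assessment", "interviews"],
    "Quality of the training":    ["observations of training", "survey"],
    "Participant project impact": ["peer ratings", "instructor ratings", "document analysis"],
}

# Check 1: every theme should be addressed by multiple data sources.
thin_themes = [theme for theme, sources in framework.items() if len(sources) < 2]
print("Themes needing more sources:", thin_themes or "none")

# Check 2: target first the data sources that touch the most themes.
coverage = Counter(source for sources in framework.values() for source in sources)
print("Analyze first:", coverage.most_common(3))
```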

Lesson Learned:

  • Consistency is crucial. By this we mean that the interconnectivity of all instruments and items needs to be well thought out. This is especially difficult in a multi-year evaluation by a research team whose membership changes over time. As new instruments are created, it is important to understand their connections to other instruments and to the relevant themes so the data can be compared and combined later.

Resource:

The American Evaluation Association is celebrating Mixed Method Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

We are Tayo Fabusuyi and Tori Hill, Lead Strategist and Research Scientist, respectively, at Numeritics, a research and consulting firm based in Pittsburgh, PA.

We conducted an evaluation of the Black Male Leadership Development Institute (BMLDI), a year-long program in Western Pennsylvania for high school-aged African American males. The BMLDI is designed to give participants access to Black male role models, provide opportunities for interaction within a supportive peer group, offer a challenging curriculum, and equip the young men with leadership skills, with a view towards positively impacting their perspectives and values.

Our evaluation strategy consisted of a mixed method, multi-phase approach with formative and summative components. In implementing the summative part of our strategy, we sought a framework robust enough to adequately capture how effective program activities were in achieving program goals, and to also provide insights on the structure and efficiency of those activities.

The framework that we employed was a modified form of Grove et al.’s EvaluLead framework. The framework is premised on an open systems environment in which three interrelated forms of behavioral change at the individual level are examined: “episodic,” “developmental,” and “transformative.” These behavioral changes were analyzed using two forms of inquiry: “evidential,” or those measured using quantitative instruments, and “evocative,” those assessed through qualitative tools.

This robust strategy allowed us to probe beyond program outputs and to take into consideration the broader influences that often affect outcomes of programs of this nature. The evaluation strategy also lends itself naturally to data triangulation, an attribute that helped reduce the risk of incorrect interpretations and strengthened the validity of our conclusions and of the recommendations we made regarding program changes going forward.

Lesson Learned:

  • Given the myriad factors that may influence program outcomes, evaluations of programs similar to the BMLDI are best carried out in an open systems environment. This also helps ensure that the evaluation process is flexible enough to allow for exit ramps and to capture unintended outcomes.

Hot Tips:

  • An equally robust data gathering method is required to monitor progress made towards program goals and adequately capture program outcomes. We would recommend a two-dimensional evaluation framework – evaluation type x data type (see the sketch after this list).
  • For a behavioral change evaluation, goals should be focused on contribution, not attribution. The emphasis should be to show that program activities aided in achieving outcomes rather than claiming that program activities caused the outcomes.
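
One illustrative way to lay out such a two-dimensional framework, using the modified EvaluLead dimensions described above but entirely hypothetical instruments, is a grid of change type by inquiry type that can be scanned for empty cells:

```python
# Rows and columns of the grid; the EvaluLead dimensions come from the post,
# but every instrument listed below is hypothetical.
change_types = ["episodic", "developmental", "transformative"]
inquiry_types = ["evidential", "evocative"]  # quantitative vs. qualitative forms of inquiry

# Each cell holds the instruments planned for that combination.
framework = {
    ("episodic", "evidential"):       ["pre/post knowledge quiz"],
    ("episodic", "evocative"):        ["session exit reflections"],
    ("developmental", "evidential"):  ["leadership skills scale"],
    ("developmental", "evocative"):   ["mentor interviews"],
    ("transformative", "evidential"): [],
    ("transformative", "evocative"):  ["participant narratives"],
}

# Print the grid and flag empty cells so no combination goes unmeasured by accident.
for change in change_types:
    for inquiry in inquiry_types:
        instruments = framework[(change, inquiry)] or ["-- none planned --"]
        print(f"{change:>15} x {inquiry:<10}: {', '.join(instruments)}")
```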

RAD Resources:

The American Evaluation Association is celebrating Mixed Method Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

John Branch on Concepts

Greetings from Ann Arbor! My name is John Branch and I am a professor of marketing at the Ross School of Business, and a faculty associate at the Center for Russian, East European, & Eurasian Studies, both at the University of Michigan. I am also in the midst of my second doctoral degree, an Ed.D. in educational leadership, also at the University of Michigan.

For several years I have been interested in concepts… the concepts which we use, how we articulate them, how we link them together. You see, concepts serve critical functions in science. First, they allow us to describe the cosmos. Indeed, concepts are the essence of reality, the basic unit of human knowledge. Second, concepts are the building-blocks of theory. We link them together in order to understand and predict phenomena. Consequently, scientists have an enormous stake vested in concepts.

Lessons Learned:

  • When concepts are undeveloped, therefore, science suffers. That is to say, when a concept is immature, its contribution to science, with respect to both its descriptive powers and its role as a building-block of theory, is limited. It is through concept development that scientists make progress in achieving their intellectual goals.
  • Many scientists, however, do not have a good handle on their concepts. They bandy them about with little concern for their development. Worse still, they often adopt their concepts blindly and uncritically, perpetuating this conceptual immaturity and, in some cases, even allowing the concepts to calcify, thereby unwittingly limiting scientific progress.

Hot Tip:

  • Ask yourself how confident you are with the concepts with which you work. Have you simply adopted a concept naively from others? Is the consensus on a specific concept actually a flashing warning light about complacency in your discipline?

Resources:

  • Both the frameworks and philosophical discussion will serve you well, as you evaluate the concepts with which you work, and subsequently endeavor to raise their level of maturity.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

I’m Tayo Fabusuyi, lead strategist at Numeritics, a Pittsburgh-based research and consulting practice.

While advocacy has been around since humans were first able to give voice to different opinions, the evaluation of advocacy efforts is still very much in its infancy. One of the hallmarks of a nascent field is the absence of consensus on nomenclature and standards that most stakeholders subscribe to. This is even more pronounced in the advocacy evaluation space given the nature of advocacy efforts: they often rely on networks and coalitions, are emergent, pursue multiple objectives from different stakeholders that may be mutually exclusive, are unique and context-specific, and unfold in open systems where cause and effect cannot be cleanly attributed.

Lessons Learned: As a result, advocacy evaluators need to foster a community of practice to aid in exchanging knowledge and in creating a body of work that documents what works, why, and within what context. The learning process thrives best when we promote social interaction that facilitates the exchange of tacit knowledge, and when the body of evidence that comprises explicit knowledge is compiled across time, space, and context. Advocacy efforts are nearly always unique, and insights from one engagement may not be transferable to the next.

This is why it is imperative to have a repository of experiences across different contexts. The compilation may also provide opportunities to convert tacit knowledge into explicit knowledge. This affords the fungibility that allows the insights and experiences gained from one specific advocacy evaluation effort to be transferred to a similar one.

Drawing from documented past experiences allows us to develop a conceptual framework within which advocacy evaluation studies can be analyzed and compared. A modest goal of this framework is a catalog of successes, failures, methodologies used, unintended outcomes, and contexts to guide future advocacy evaluations. This initiative can establish a basis on which we can articulate common ground on advocacy evaluation and provide insights on how best to proceed in the face of remaining uncertainty. Sharing can accelerate learning.
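
One illustrative sketch of what a catalog entry might capture, with an entirely hypothetical schema and example, is shown below:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AdvocacyEvaluationCase:
    """One entry in a shared catalog of advocacy evaluations (hypothetical schema)."""
    engagement: str
    context: str                  # policy area, geography, coalition structure, timeframe
    methodology: str
    successes: List[str] = field(default_factory=list)
    failures: List[str] = field(default_factory=list)
    unintended_outcomes: List[str] = field(default_factory=list)


# A made-up entry, purely to show how experiences might be recorded and compared.
catalog: List[AdvocacyEvaluationCase] = [
    AdvocacyEvaluationCase(
        engagement="Example coalition campaign",
        context="State-level health policy; loose coalition of a dozen organizations",
        methodology="Contribution analysis with stakeholder interviews",
        successes=["Shifted framing used in legislative hearings"],
        unintended_outcomes=["Coalition partners adopted shared metrics"],
    )
]
print(len(catalog), "case(s) on record")
```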

Hot Tip: If you are an American Evaluation Association member, join the Advocacy and Policy Change Topical Interest Group (TIG). You can do so by logging into the AEA website and selecting “Update Member Profile” from the Members Only menu. If you aren’t an AEA member, I invite you to join AEA.

Hot Tip:  AEA members, take the next step and join the Advocacy and Policy Change discussion list (go to Members Only -> Groups/Forums Subscriptions) and contribute to vibrant conversations that can help build our community of practice.

We’re celebrating Advocacy and Policy Change week with our colleagues in the APC Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·
