AEA365 | A Tip-a-Day by and for Evaluators


My name is John LaVelle from Louisiana State University.  The following is an interactive script developed to help my students and stakeholders inductively discover the value of triangulation.  The example is focused on inquiry methods and can be easily adapted to other forms of triangulation.

Script: Let’s start from the premise that the inquiry process is supposed to help you (the stakeholder) learn in-depth information about something important.  A process, a product, an idea: the possibilities are endless.  It could be an educational program, the process of new employee onboarding, clients’ awareness of new services, the aesthetic quality of my shoes, anything.  I selected this image to represent that thing:

[Image lavelle-1: a square representing the full construct]

This image represents the complete truth of a phenomenon, and the borders represent the boundaries of the construct (which can’t really be known).  In this example, we will assume that this construct is orthogonal, so there isn’t any overlap between it and other concepts.  Assume we are not concerned with funding or politics (yet).

Now we study it with systematic inquiry (blue circle).   Let’s call the blue circle a survey, and let’s assume it gets right at the center of the thing we’re studying.  Everything within the circle represents increased understanding.

[Image lavelle-2: the square with one blue circle (the first survey)]

Not too bad, huh?  We learned a lot about the phenomenon from just one study, and it doesn’t look like we picked up much information from other, unrelated concepts.  Every data collection tool will have some error (or a similar concept), and that can lead to challenges later when you try to make sense of things.  Now, this survey did a good job, but it does not represent the entirety of the phenomenon.  Let’s try another study using the same method (a survey) to understand it some more.

[Image lavelle-3: the square with two blue circles (two surveys)]

Well, this looks different.  Not bad, just different.  We learned some new information that helps paint a fuller picture of the phenomenon, and that reinforces some ideas learned from the original study.

Note to facilitator: This can be a place to plant the idea of measurement error by drawing attention to where the circle goes beyond the boundaries of the square.

[Image lavelle-4: the square with three blue circles clustered toward the right side]

Third time is the charm!  It seems like survey methods are painting a robust representation of the construct.  Every time we use a survey, we add to the validation and trustworthiness of our findings.  An observation: our inquiry methods seem to be grouping on the right side of the construct, and a rather large region of the phenomenon is unexplored.  Could another approach provide different information and explanation?

[Image lavelle-5: the square with three blue circles and one orange circle (an interview)]

It looks like a different method, such as interviews (orange circle), provided information that helped explain an unexplored region of the construct and reinforced the first and second studies.  That is interesting.  We should use at least one more approach to really pull things together, especially if we’re trying to learn about something that has direct implications for someone’s health, functioning, economic status, etc.

[Image lavelle-6: the square with five circles from three different methods]

Now things are really starting to pull together.  The central aspects of the construct were reinforced through five examinations using three methods, so it looks like the data are trustworthy and seem to be telling a reasonably consistent story.  Excellent work!
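
Note to facilitator: for groups that like to see the idea in numbers as well as shapes, a rough numerical analogue of the exercise is sketched below in Python. It treats the construct as a unit square and each study as a circle; the centers, radii, and method labels are invented purely for illustration, not taken from any real study. It estimates how much of the construct has been examined at all, and how much has been examined by more than one study, as studies accumulate.

    # Illustrative sketch only: the construct is the unit square; each study is a
    # circle with an invented center and radius, loosely mimicking the figures above
    # (three surveys clustered to the right, then an interview, then a third method).
    import random

    random.seed(1)

    studies = [
        (0.65, 0.55, 0.25, "survey 1"),
        (0.75, 0.45, 0.25, "survey 2"),
        (0.70, 0.65, 0.25, "survey 3"),
        (0.35, 0.60, 0.25, "interview"),
        (0.35, 0.30, 0.25, "third method"),
    ]

    def coverage(studies, n=50_000):
        """Estimate the share of the construct examined at all, and by 2+ studies."""
        covered = reinforced = 0
        for _ in range(n):
            x, y = random.random(), random.random()   # a random point inside the construct
            hits = sum((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for cx, cy, r, _ in studies)
            covered += hits >= 1
            reinforced += hits >= 2
        return covered / n, reinforced / n

    for k in range(1, len(studies) + 1):
        cov, rein = coverage(studies[:k])
        print(f"after {k} studies: {cov:.0%} examined, {rein:.0%} examined by 2+ studies")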

This was a lot of information, and there are some important implications here.  What could happen if:

  • You are only familiar with one approach to answering questions?
  • Your stakeholder(s) value one kind of information (quantitative or qualitative) over another?
  • Money for the study or evaluation is a concern? How will you prioritize?
  • One kind of data is more expensive to collect and analyze than another?
  • The inquiry methods aren’t well-refined?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is John LaVelle from Louisiana State University.  It is my pleasure to lead a number of evaluation and applied methodology courses for graduate and undergraduate students.  All of my courses include service-learning and experiential learning components to help reinforce the learning objectives and provide operational/conceptual support for community partners.

A concept that stakeholders and students alike seem to struggle with is triangulation.  My sense is that conversations on threats to construct validity and the advantages of triangulation for establishing trustworthiness tend not to be common in most organizations and households, and I struggled to find a way to communicate this to students and stakeholders.  I imagined a responsive process to help my students explore triangulation using the upper limits of my art skills: squares, circles, and letters.  The following is the iterative script I used in a graduate course on qualitative and mixed methods.

Hot Tip: This narrative seems to work best when the example is from your stakeholders’ experience, project, or something they find engaging.  The example I used in class was inspired by a student comment about selfies at football games the previous weekend.  Let your stakeholders take the example and run with it.  My experience is that their ownership of the example makes it “real” and can help stakeholders apply the concept to multiple areas of their work.

Hot Tip: Adapt this exploratory script to help illustrate any sort of triangulation.  Examples include professional discipline (e.g., education, policy, evaluation, social work, psychology, etc.), social science theoretical framework, inquiry methodology, data analysis, and reporting strategy.

Hot Tip: Have as much fun as you can with this example.  Trust your stakeholders or students to have the content expertise.  You, as the evaluator, bring the discovery process, grounding, and sense of humor.

In the spirit of humor and lightness in discussing something very important, this image will be used to illustrate the example tomorrow.

[Image lavelle-1: a square representing the full construct]

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Tayo Fabusuyi and Tori Hill, Lead Strategist and Research Scientist, respectively, of Numeritics, a research and consulting firm based in Pittsburgh, PA.

We conducted an evaluation of the Black Male Leadership Development Institute (BMLDI), a year-long program in Western Pennsylvania for high-school aged African American males. The BMLDI is designed to give participants access to Black male role models, provide opportunities for interaction within a supportive peer group, offer a challenging curriculum and equip the young men with leadership skills with a view towards positively impacting their perspectives and values.

Our evaluation strategy consisted of a mixed-method, multi-phase approach with formative and summative components. In implementing the summative part of our strategy, we sought a framework robust enough to adequately capture how effective program activities were in achieving program goals and to provide insights on the structure and efficiency of those activities.

The framework that we employed was a modified form of Grove et al.’s EvaluLead framework. The framework is premised on an open-systems environment in which three interrelated forms of behavioral change at the individual level are examined: “episodic,” “developmental,” and “transformative.” These behavioral changes were analyzed using two forms of inquiry: “evidential,” measured using quantitative instruments, and “evocative,” assessed through qualitative tools.

This robust strategy allowed us to probe beyond program outputs to a more comprehensive framework that takes into consideration the broader influences that often affect the outcomes of programs of this nature. The evaluation strategy also naturally lends itself to data triangulation, an attribute that helped reduce the risk of incorrect interpretations and strengthened the validity of our conclusions and of the recommendations we made regarding program changes going forward.
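
As a rough sketch of how that two-dimensional layout (form of change by form of inquiry) can be made concrete, the snippet below maps each cell of the grid to the instruments that would supply evidence for it and flags any cell with nothing planned. The instruments named here are hypothetical placeholders, not the actual BMLDI data sources.

    # Illustrative sketch only: a 3 x 2 evidence grid in the spirit of the framework
    # described above. The instruments listed are hypothetical, not the BMLDI tools.
    changes = ["episodic", "developmental", "transformative"]
    inquiries = ["evidential (quantitative)", "evocative (qualitative)"]

    evidence_grid = {
        ("episodic", "evidential (quantitative)"): ["pre/post knowledge survey"],
        ("episodic", "evocative (qualitative)"): ["session exit reflections"],
        ("developmental", "evidential (quantitative)"): ["leadership skills rubric"],
        ("developmental", "evocative (qualitative)"): ["mentor interviews"],
        ("transformative", "evidential (quantitative)"): [],
        ("transformative", "evocative (qualitative)"): ["participant narratives"],
    }

    # A simple triangulation check: flag any cell of the framework with no planned source.
    for change in changes:
        for inquiry in inquiries:
            sources = evidence_grid.get((change, inquiry), [])
            status = ", ".join(sources) if sources else "NO SOURCE PLANNED"
            print(f"{change:15} x {inquiry:26} -> {status}")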

Lesson Learned:

  • Given the myriad factors that may influence program outcomes, the evaluation of programs similar to the BMLDI is best carried out in an open-systems environment. This also helps ensure that the evaluation process is flexible enough to make provisions for exit ramps and to capture unintended outcomes.

Hot Tips:

  • An equally robust data-gathering method is required to monitor progress made towards program goals and to adequately capture program outcomes. We would recommend a two-dimensional evaluation framework: evaluation type x data type.
  • For a behavioral change evaluation, goals should be focused on contribution, not attribution. The emphasis should be to show that program activities aided in achieving outcomes rather than claiming that program activities caused the outcomes.


The American Evaluation Association is celebrating Mixed Method Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, my name is Bikash Kumar Koirala. I work as a Monitoring and Evaluation Officer at the NGO Equal Access Nepal (EAN), which is based in Kathmandu, Nepal.  I have been practicing monitoring and evaluation for over five years, focused on development communication programs.  A research project that EAN has collaborated on, Assessing Communication for Social Change (AC4SC), developed a participatory M&E toolkit based on our experiences.  One of the modules in this toolkit is the Communication Module, which is summarized as follows.

As a result of AC4SC, the communication systems in our organization improved a lot and became more participatory. We began to understand that effective communication and continuous feedback are essential to the success of participatory M&E. Communication inside and outside an organization can be quite challenging at times because different people have different perspectives and experiences.

Lessons Learned

Community Involvement: After the AC4SC project, the level of engagement with communities by the M&E team increased considerably. Community members’ involvement in ongoing participatory research activities, and the critical feedback they provide, have proved very useful to our radio program development. This has increased community ownership of our programs. As well as work undertaken by the M&E team, this research is conducted by a network of embedded community researchers (CRs).  These activities have produced research data, which is analyzed and triangulated with other sources of data (such as listeners’ letters) to produce more rigorous results.

Internal Communication: Regular constructive feedback related to program impact and improvement is given to content teams by the M&E team.  This has increased dialogue and cooperation between the M&E and content team members.  Before the AC4SC project, content team members didn’t usually take M&E findings into account because they felt that they already knew the value of the program content through positive feedback from listener letters. The value of M&E has now been recognized by the content teams. They now ask for more in-depth data to generalize the feedback they receive. The M&E team addresses this through research and analysis using many different forms of data from varied sources.

Use of New Communication Technology: The M&E team has been analyzing SMS polls, text messages, and letter responses, and triangulating these with the CRs’ research data and short questionnaire responses to present more rigorous results to program team members, donors, and other stakeholders.

Some Challenges: In participatory M&E it is important to understand the roles of everyone involved in the process. Effectively presenting results for better communication and utilization of M&E findings among different stakeholders is an ongoing challenge. Finding the time to effectively undertake participatory M&E is also an ongoing challenge.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I am Amy Germuth, President and Founder of EvalWorks, LLC and blogger at EvalThought.  So you are an evaluation consultant.  You evaluate programs/projects as part of an evaluation consultancy you own, co-own, or work for.  But how often do you evaluate your own business?

Hot Tip #1: Think logic model.  Do you have a business plan (in writing/on paper) that describes your goals/objectives, activities, outputs, outcomes, and impact? If not, create one – it could be really mind-blowing.

Hot Tip #2: Develop an evaluation plan linked to your logic model. Identify the evidence/data that you will need to collect to formatively and summatively assess whether you are meeting your outcomes and outputs.  For example, if your goal is to write a winning proposal, one activity may be “Respond to Evaluation RFPs”. Your output may be “Complete 4 RFPs” and your outcome may be “Improve the quality of my proposal submissions”.  What data do you need to collect to assess whether this is happening or has happened?  I can think of peer feedback, feedback from reviewers, comparison of your proposal to the winning proposal (if not yours), etc.
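
If it helps to see the linkage spelled out, here is a minimal sketch of the RFP example above recorded as a tiny evaluation-plan entry, with a simple check of which planned evidence is still outstanding. The field names and sample values are hypothetical, not a prescribed template.

    # Illustrative sketch only: the RFP example expressed as an evaluation-plan record.
    evaluation_plan = [
        {
            "goal": "Write a winning proposal",
            "activity": "Respond to Evaluation RFPs",
            "output": "Complete 4 RFPs",
            "outcome": "Improve the quality of my proposal submissions",
            "evidence": ["peer feedback", "reviewer feedback",
                         "comparison to the winning proposal"],
            "collected": {"peer feedback": True},   # mark items off as they come in
        },
    ]

    # Formative check: which planned evidence has not been collected yet?
    for row in evaluation_plan:
        missing = [e for e in row["evidence"] if not row["collected"].get(e)]
        print(f"{row['outcome']}: still need {', '.join(missing) if missing else 'nothing'}")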

Hot Tip #3: Collect data – whatever you identified – and lots of it – related to costs/profits, proposal outcomes, media exposure, name recognition, etc.

Hot Tip #4: Review the data. What data triangulate? Which data are most reliable / valid? What can you conclude? Where do you need to make improvements? How will you improve? Coursework? Self-teaching? Mentoring? Better marketing?

Hot Tip #5: Repeat – at least annually.

Click here to link to an article on creating effective dashboards. Develop one for your own company that tracks critical indicators of success to gain practice and then identify ways to incorporate dashboards into the work you provide clients.
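
As a starting point, the sketch below tracks a handful of hypothetical indicators with a simple year-over-year comparison; the indicator names and numbers are invented, and a real dashboard would draw them from your actual records.

    # Illustrative sketch only: hypothetical dashboard indicators (last year, this year).
    indicators = {
        "proposals submitted": (6, 9),
        "proposals won": (2, 4),
        "billable days": (140, 155),
        "repeat clients": (3, 5),
    }

    for name, (last_year, this_year) in indicators.items():
        change = this_year - last_year
        sign = "+" if change >= 0 else ""
        print(f"{name:22} {this_year:>5}  ({sign}{change} vs. last year)")

    win_rate = indicators["proposals won"][1] / indicators["proposals submitted"][1]
    print(f"{'proposal win rate':22} {win_rate:>5.0%}")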

The American Evaluation Association is celebrating Independent Consultants (IC) TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC  TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

