AEA365 | A Tip-a-Day by and for Evaluators

Theories of Evaluation

Hi Everyone! My name is Cherie Avent, and I am a second-year Ph.D. student at the University of North Carolina Greensboro with a focus on program evaluation and research methods. I have been fortunate to work on diverse evaluation projects in which the faculty allow students to lead and to select the theory that serves as a guide for our purpose or aim. However, recent discussions in classes and with peers have centered on knowing oneself and the connections to theoretical orientation. I realized I had been working on evaluation projects without fully considering my own beliefs/values and the theoretical orientation from which I want to work. As a result, I was unaware of how my beliefs/values affected my evaluation designs, processes, and interactions with stakeholders.

Many scholars argue the need for critical reflection on these topics, but I wonder how many of us do it. Particularly for novice evaluators: can we articulate who we are, what we believe/value, the role we serve, how knowledge is constructed, and our other worldviews? Are we aware of how these answers shape our theoretical orientation? Are we able to articulate our theoretical orientation? Answers to these questions frame our approach and methods. The AEA Guiding Principles for Evaluators emphasize the importance of self-reflection and of being explicit about the role one’s beliefs play in the conduct of evaluation.

Lesson Learned: Begin self-reflecting early
It’s important to spend time reflecting on one’s beliefs and values because they show up in every aspect of our work. The reflection can begin with questions such as: Who am I? What do I believe/value? How do my personal and professional experiences affect me as an evaluator? Then move into more complex questions: Why am I doing this work? What do I believe the role of an evaluator is, and what would I like my role to be? How do I believe knowledge is constructed? I am now starting to explore these questions, and I invite you to do the same.

Hot Tip: Develop a small group or network to share your thoughts, dilemmas, and difficulties as a way to work through these questions. By dialoguing, you can help each other understand, clarify, and expand perspectives. More specifically, it enhances our ability to express our theoretical orientations to others verbally. The interactions might occur in person, over the phone, or online. There’s no limit!

The American Evaluation Association is celebrating Theory and Practice week. The aea365 contributions all this week come from Dr. Ayesha Boyce and her University of North Carolina Greensboro graduate students’ reflections on evaluation theory and practice. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello all, my name is Justin Long and I am a fourth-year M.S./Ph.D. student at the University of North Carolina at Greensboro. I was always told to use the methods and the evaluation theory that fit the context: if an evaluation is better served by a participatory approach, don’t force a deliberative-democratic one. In some of my early work I found this out the hard way. It’s not just about the methods we use; it’s how we justify those methods. All evaluators come into the field with personal biases about what knowledge is and how it is created or discovered.

This is called our epistemological/ontological view or framework, and everyone has one whether they’re directly aware of it or not. There are two main camps of epistemologies I’d like to talk about. Don’t panic. First, knowledge exists in the world and we attempt to estimate it, though we can never see it directly; that’s the post-positivist perspective. Second, knowledge is a human construction that the researcher seeks to understand; this is called constructivism. For practitioners, who may be less familiar with the epistemological underpinnings of evaluation, this framework is not always explicit, and that’s a big problem. The framework affects not just which methods we choose, but how we justify and interpret them. Using a particular method does not presuppose an epistemological framework; what matters is how you use the method and interpret its results. When we fail to acknowledge our epistemological framework, we risk biasing our findings.

Lessons Learned:

Early in my career I worked with a community non-profit helping adults with special needs and their caregivers. During our work I decided to use a more post-positivist methodology, driven by my experimental psychology background. It was a complete and utter disaster. I used a structured interview: no emotion, no follow-up, just questions. It was the most awkward interview of my life. My ontology and epistemology biased my approach to methods. It didn’t fit the context, and both the results and the interviewee suffered.

Rad Resources:

  • Carter, S. M., & Little, M. (2007). Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research, 17(10), 1316–1328.
  • Hatch, J. A. (2002). Doing qualitative research in education settings. SUNY Press.


Greetings Everyone (and graduate students in particular)! My name is J. R. Moller and I am a first-year Ph.D. student in the Educational Research Methodology Department at the University of North Carolina at Greensboro.

Prior to starting my Ph.D., I worked in evaluation and was trained on the job. I learned a lot, but nothing about theory and its tie-in to methodology (and of course how to apply that in practice!). As I am journeying through my graduate program, the importance of theory has become very clear. Similarly, the relationship between methodology and theory has been highlighted and is now at the forefront of my practice. While I am still learning about theories and different evaluation methods, it has become evident that theory and methodology are integrally linked and crucial in conducting strong evaluations. This linkage helps in framing the evaluation, identifying and developing the appropriate approach based on the evaluation aims, and even crafting the evaluation questions.

Lessons Learned 1: There is no perfect theory. However, understanding theories and identifying your own will help you in organizing yourself and your approach to different evaluations. You may not fit neatly into one theoretical orientation and that’s OK.

Lessons Learned 2: While your theory should be related to your methodology, it cannot be the only thing that dictates your methods. It is important that the evaluation method(s) you select are appropriate for answering the evaluation question(s). For example, if your evaluation question is about how program participation affects participants one year after completion, you would not just collect attendance data from the time the person was in the program; you would need a method that allows you to follow up with program graduates one year later and that gives you a basis for comparison.

Hot Tip: Speak to your advisor or evaluation mentor (or get one) in order to help parse the theories that you might use or want to use in your work. Talking it out is helpful!

Rad Resource 1: Mertens, D. M., & Wilson, A. T. (2012). Program Evaluation Theory and Practice: A Comprehensive Guide. New York: Guilford Publications.

This book is critical to so many things evaluation! From gaining just a cursory understanding of what evaluation is (terms and all) to understanding the paradigms, theories, and “ologies”, to the types of evaluations, implementing them, and then communicating findings, this book provides an excellent road map for understanding and performing evaluation with a theoretical lens. Check out chapters 2, 8, and 9 especially.

Rad Resource 2: Schwandt, T. A. (2015). Evaluation foundations revisited: Cultivating a life of the mind for practice. Stanford University Press.

This book provides a palatable link between theory and practice. It provides clarity on what a theory is and examples of different types of theories. Chapters 1, 2, and 3 are particularly helpful in providing tangible links between theory, methods, and practice.


Hello! My name is Adeyemo Adetogun, and I’m a doctoral student in the Educational Research Methodology department at the University of North Carolina at Greensboro (UNCG). My area of concentration is program evaluation, with a focus on Science, Technology, Engineering, and Math (STEM) fields.

The very notion that we can evaluate anything, including evaluation itself, is a testament to the ubiquitous influence evaluation has on us as humans and on our activities. Accordingly, I found excerpts from Shadish, Cook, and Leviton’s (1991) book, Foundations of Program Evaluation: Theories of Practice, useful in highlighting the interdependence of theory and practice in the field of evaluation. The body of knowledge concerned with organizing, categorizing, describing, predicting, explaining, understanding, and controlling a topic is equally as important as, and deeply informs, the body of knowledge that explicates the relationships between goals, processes, activities, conflicts, and other issues experienced within the field of evaluation. Stated simply, our theory informs our practice.

A man recognized as the founder of social psychology, Kurt Lewin, once said, “. . . there is nothing so practical as good theory.” Another scholar, Michael Fullan, noted for his expertise on educational reform, added, “. . . there is nothing so theoretical as good practice.” In all of this, I see evaluation theory and practice as two halves of the same body, each needing the other to further develop the identity of the field.

Lessons Learned:

As I continue to learn and engage in research that will increase my understanding of evaluation theory and practice, a few lessons have emerged from my scholarship thus far. I lean on the suggestions of Shadish, Cook, and Leviton (1991) to articulate them more succinctly:

  1. Every evaluator should be well grounded in evaluation theory; otherwise they will be left to trial and error, or to professional lore, in learning about appropriate methods. Consider that evaluation theories are like military strategy and tactics, while methods are like military weapons and logistics. A good military commander, with fine training and shrewdness, needs to know strategy and tactics to deploy weapons properly, and should be able to organize logistics in different situations. The same applies to a good evaluator, who needs theories for the same reasons in choosing and deploying methods.
  2. Evaluation theory provides meaning for practice, and all evaluation practitioners are nascent evaluation theorists. They think about what they are doing, make considered judgments about which methods to use in each situation, weigh advantages and disadvantages of choices they face, and learn from successes and failures in their past evaluations.

Rad Resources: Check out this link for further reading:


Hello, my name is Jeremy Acree and I’m a Ph.D. student at the University of North Carolina at Greensboro (UNCG), focusing primarily on research methods and program evaluation. I previously worked as a middle school math teacher, and the connections between teaching and evaluation have been interesting to explore. I’m particularly drawn to the ways theory and practice inform each other. In teaching, pedagogical ideals can be difficult to implement in the ever-changing context of schools and classrooms. In evaluation, theory also represents the ideal, but it can often be difficult to discern in practice. The approaches and methods we choose are often intended to lead to certain processes and findings, but stakeholder reactions and interpretations can vary based on factors that are beyond the evaluator’s control.

Some recent readings and conversations in my classes have focused on these dynamics of theory and practice. I know that many evaluators come to graduate programs after spending years working in the field. I entered my program at UNCG with a relatively blank slate in terms of both practical and theoretical perspectives, but I wonder about the benefits and drawbacks of different entry points. What do long-time practitioners gain by learning more about evaluation theory? What am I missing by building my knowledge of evaluation from theoretical approaches and concepts before fully understanding practice? Which should come first, knowledge of evaluation theory, or practical evaluation experience?

I don’t have answers to these questions, but I have started to piece together some thoughts about theory and practice.

Lesson Learned (Theory)

Evaluation theory isn’t a checklist or a prescriptive formula for conducting evaluation in practice. Evaluation is rooted in a rich history of social science, policy, and organizational management, and evaluation theory incorporates elements from these and other arenas to guide and justify what evaluation is and what it is intended to do. Theoretical concerns can be useful to practitioners, providing new perspectives and methods, and expanding notions of the role of evaluation.

Lesson Learned: (Practice)

Practice is at the heart of evaluation. Evaluators describe and construct values, raise and answer questions about program actions and outcomes, and provide judgments that inform stakeholder decision-making. These processes take place within varied contexts, for varied purposes, and through varied approaches, making evaluation complex, challenging, and difficult for theorists to conceptualize. Yet, while practitioners can learn from the broad guidance provided by theory, there is likely even more for theorists to consider in the nuances and intricacies of practice.

Rad Resources:

  • Chouinard, J. A., Boyce, A. S., Hicks, J., Jones, J., Long, J., Pitts, R., & Stockdale, M. (2017). Navigating theory and practice in evaluation fieldwork: Experiences of novice evaluation practitioners. American Journal of Evaluation, 38(4), 493–506.
  • Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. New Directions for Evaluation, (97), 7–36. https://doi.org/10.1002/ev.72
  • Schwandt, T. A. (2014). On the Mutually Informing Relationship Between Practice and Theory in Evaluation. American Journal of Evaluation, 35(2), 231–236. https://doi.org/10.1177/1098214013503703


My name is Ayesha Boyce and I am an assistant professor in the Educational Research Methodology Department at the University of North Carolina Greensboro. Our department offers a comprehensive curriculum in program evaluation with a social justice focus. Jill Anne Chouinard, Tiffany Smith, and I teach classes in program evaluation and research methodology where we emphasize good practice with mindful attentiveness to theoretical roots. Advanced Evaluation Theory is one of the seven program evaluation courses graduate students are able to enroll in. This course critically examines diverse approaches to the evaluation of education and social programs. The course analyzes the four major branches of evaluation (Alkin, 2013; Mertens & Wilson, 2012), demarcated by their major purpose and audience. Across paradigms, the course focuses on evaluation approaches’ assumptions about knowledge, views of social programs and social change, stances regarding the role and purpose of evaluation in society, location of values in evaluation, and intended utilization and applicability of evaluative findings. I was the instructor of the inaugural course, offered in Spring 2018.

Lesson Learned: Chatting with evaluation thought leaders virtually

One of the most exciting aspects of the course was having renowned evaluation thought leader Tom Schwandt Skype into class twice. We used chapters from his book Evaluation Foundations Revisited and wanted to be able to converse with him about a few of the topics. As an evaluation educator, I have found that it doesn’t hurt to send an email to evaluation authors, scholars, and thought leaders to see if they might be interested in participating in a virtual conversation with students!

Lesson Learned: Innovative course activities

There are three aspects of the course that I found worked well for engaging students with the sometimes esoteric topic of evaluation theory.

  • I developed opportunities for students to role-play as evaluators and stakeholders with differing values in multiple contexts, which helped bring theory into a more practical realm.
  • For the final assignment students were asked to present their papers in a variety of non-traditional representational forms, including case study critique, debate, interactive activity, simulated town meeting, narrative, poetry, or performance. This style of presentation, often championed by AEA president Leslie Goodyear and past president Jennifer Greene, allowed for creative and less formal presentations, which can be used when working with a variety of stakeholders and to engage with competing values and cultural ways of knowing.
  • Finally, I had each student write a blog post and I am pleased that for the next five days, you all will be able to read their reflections on evaluation theory and practice.



This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Sharon Rallis, and I am a former AEA President and editor of the American Journal of Evaluation. Carol Weiss was a pioneering sociologist and program evaluator who helped create the field of evaluation. She was my advisor and teacher, and taught me how evaluations can be used “to improve policy and programming for the well-being of all” (1998, p. ix).

Carol H. Weiss (1927-2013)


Carol Weiss believed that understanding and using evaluation means integrating theory with practice, a perspective exemplified in the 1995 article she wrote for the Aspen Institute about the importance of basing evaluations on the solid theories of change that underlie interventions. This article, “Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families,” became a classic. Today we would say it went viral. Both the article and the phrase “nothing as practical as good theory” remain among the most influential, if not the most influential, in the history of program evaluation. Their influence can be seen in the fact that virtually every philanthropic foundation, major government agency, nonprofit, and international development organization now requires that a theory of change be included in funding proposals and development initiatives.

Carol was reacting to millions of dollars being poured into community change efforts with little recognition of contextual complexities. Theory-based evaluation asks program practitioners to make their assumptions explicit and to reach consensus with their colleagues about what they are trying to do and why. While difficult, these conversations help practitioners reach shared understandings and offer evaluators insight into the “leaps of faith” (p. 72) embedded in their formulations of programs. She wasn’t just suggesting that a bunch of program people get together to share ignorance and biases and fabricate a theory of change out of thin air, though that’s often what happens; rather, she proposed that they grapple with how their intervention, that is, what they do, connects with intended outcomes. Weiss reported her experience that “Program developers with whom I have worked sometimes find this exercise as valuable a contribution to their thinking as the results of the actual evaluation. They find that it helps them re-think their practices and over time leads to greater focus and concentration of program energies” (p. 72).

Lessons Learned:

Evaluations that address the theoretical assumptions embedded in programs may have more influence on both policy and popular opinion. According to Carol, “theories represent the stories that people tell about how problems arise and how they can be solved” (p. 72). We all have stories about the causes of and solutions to social problems, and these stories, or theories, accurate or not, play powerful roles in policy discussion. “Policies that seem to violate the assumptions of prevailing stories will receive little support” (p. 72). It follows that evaluations grounded in clear and shared theories of change can inform and influence policy discourse.

To summarize, Carol Weiss wrote: “Grounding evaluation in theories of change takes for granted that social programs are based on explicit or implicit theories about how and why the program will work. The evaluation should surface those theories and lay them out in as fine detail as possible, identifying all the assumptions and sub-assumptions built into the program” (1995, pp. 66-67). The insights she brought in her 11 published books and numerous journal articles have shaped how we think about and practice evaluation today.

Rad Resources:

Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. Washington, DC: Aspen Institute.

Weiss, C. H. (1998). Evaluation: Methods for Studying Programs and Policies (2nd ed.). Prentice Hall.

Weiss, C. H. (1998). Have We Learned Anything New About the Use of Evaluation? American Journal of Evaluation, 19(1), 21-33.

 

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Steve Powell, a freelance evaluator. I’m really interested in theories of, about, and within evaluation, and the conceptual headaches they can bring with them.

Sometimes as evaluators we are so busy with practical challenges that we don’t have much time to worry about whether there is sufficient agreement on the meaning of the words we use – but that can lead to a lot of unnecessary arguments. This is especially true when we use difficult words like “attribution”, “impact” or “intention”, which different evaluation theories might define in different ways.

When doing workshops, I’ve found it useful to present exaggerated versions of these kinds of problems as evaluation paradoxes. Puzzles and paradoxes have often been used in philosophy, from Zeno and Zen to the Sufi mystics, to help us question and sharpen up our understanding of the words we use and demonstrate the importance of reflecting on evaluation theory. Applying different theories might lead to different conceptual responses to the paradoxes.

Below is one such paradox.

An evaluation puzzle: “Billionaire”

A billionaire left 10 million EUR in his will to establish a trust, with instructions that it should be used to “just do good”.

During the ten years since then, support for same-sex marriage has shifted from 10% to 80% public approval, and almost all liberals are now in favour.

In the ninth year, the trust gives 1 million EUR to a campaign for a law on marriage equality, which substantially contributes to the passing of the law in the tenth year.

We don’t know for sure, but most likely the billionaire, like most of his peers and friends, did not support marriage equality when he was alive. But most of his peers and friends now support it.

Now, in the 11th year, you are asked to evaluate whether the trust was used effectively and whether the activities were relevant to the intentions of the billionaire.

If we tried to follow positivistic evaluation principles and retrospectively operationalised “doing good” by providing concrete indicators for it, we would have to decide whether to do this in a way which is acceptable now or in a way which would have been acceptable ten years ago. If instead we followed Michael Scriven’s ideas about the logic of valuing, we might even try to argue that there are ways of deciding what “doing good” is which are based on facts and not merely on what we or other people value.

So different evaluation theories give us different ways of deciding what “good” means.

And so, reflecting on evaluation paradoxes can help us sharpen our understanding of the words and concepts we use, and help us understand the importance of theories and their implications in practice.

Rad Resource:

I’ve posed a few more evaluation paradoxes here.

If you know of any similar paradoxes, I’d be really interested in hearing about them.

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello.  I’m Eric Barela, Measurement & Evaluation Senior Manager at Salesforce.org and current Board Member of the American Evaluation Association.  I’m here to share how I use prescriptive evaluation theories in my everyday practice.

I have been an internal evaluator for over 15 years and have worked in a variety of organizations, from school districts to nonprofits to social enterprises.  Given the variety of organizations I have worked in, I have found that I tend to apply a variety of prescriptive theories, which are approaches generated by well-known evaluation scholars that serve as guides for different types of evaluation practice (e.g., Patton’s utilization-focused evaluation).  It all depends on what I need to ensure that I generate findings that are both useful and used.

While I use different theories to guide different evaluations, I often find myself needing to use multiple theories within the same evaluation.  I engage in quite a bit of what Leeuw & Donaldson refer to as theory knitting.  I like to think of myself knitting multiple prescriptive theories into a nice descriptive theory I can apply to my internal evaluation work.  I often find myself drawing from the following prescriptive theories:

  • House’s social justice evaluation to give voice to those who may be silenced within the organization;
  • House & Howe’s deliberative democratic evaluation to determine recommendations by considering relevant interests, values, and perspectives and by engaging in extensive dialogue with stakeholders;
  • Chen’s theory-driven evaluation when an organization has been implementing a program without properly understanding the underlying theory under which it is meant to operate; and
  • Cousins’ participatory evaluation when my colleagues are sophisticated enough in their understanding of the evaluation enterprise (and are willing to set aside time to take part).

While I will often knit these prescriptive theories together in different combinations to guide my practice, there is one theory that always guides my approach: Patton’s utilization-focused evaluation.  As I wrote above, I need to ensure that I generate findings that are both useful AND used.  There is a big difference between useful findings and used findings.  As an internal evaluator I need to add value to the organization.  I can create an incredible evaluation report; however, if I deliver a report that does not resonate with my colleagues and they decide to not take action based on my recommendations, I could be out of a job.  As I have transitioned to the social enterprise sector, the ability to produce and add immediate value has become especially important.

To sum up, I knit together a variety of prescriptive theories to form a descriptive evaluation theory that guides my practice.  However, it is my focus on utilization that determines which theories I knit together.

Cool Trick:

Consider prescriptive theories as approaches you can use as needed, depending on the evaluation scenario.  As you start the evaluation process, conduct a theory assessment (something similar to a needs assessment) to determine which theories might best serve the organization.  Ask yourself whether some theories will work better than others.

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Gisele Tchamba, past Chair of the Behavioral Health TIG and Founder of ADR Evaluation Consulting.

Valuing from the Fourth Generation Evaluation (FGE) Perspective – Evaluation theorists in the valuing branch of Alkin’s theory tree hold different views of valuing. Some share the perspective of Third Generation Evaluation, in which the evaluator, as judge, determines the value of the evaluand. Others, the FGE theorists, espouse the notion that the evaluator, as mediator, facilitates the placing of value by others. I appreciate the rigor and discipline of the FGE methodology, which is based on constructivism (subjective reality). The FGE empowers stakeholders by involving them in determining the worth of the evaluand, bringing together a diverse group of stakeholders to examine an issue of concern.

How it Works – I applied the FGE to understanding primary care providers’ perspectives of health benefits of moderate drinking.

Method

Data collection: this was a series of interrelated activities aimed at gathering good information to answer emergent evaluation questions. Stakeholders were asked how they perceived the effects of moderate alcohol consumption on health and what influenced their perceptions.

Sampling: I used a theoretical sample of individuals who contributed to building the open and axial coding of the theory. In keeping with the FGE procedure, I collected data from nine stakeholders, all Physician Assistants (PAs). The PAs had expert knowledge and understanding of the health benefits of moderate drinking, which led to the theory of “conflict”.

Data analysis: In the FGE, data collection and analysis are conducted simultaneously. Following the FGE method, initial interviews were analyzed, and the results guided subsequent interviews. This process continued until data saturation was achieved. The FGE constant comparative method was employed, which involved repeatedly comparing codes to codes. Codes were turned into categories through axial coding, leading to the formation of four main constructs from which the central construct, or theory, was developed. The theory was then sent to stakeholders for confirmation or disconfirmation.

Lessons Learned:

  • FGE is often misunderstood and seldom used in program evaluation.
  • FGE’s methods demand meticulous rigor and trustworthiness.
  • Data analysis for FGE is complex, nonlinear, messy, yet rewarding.
  • Keep an open mind to field-based concerns, but the FGE has considerable strengths.
  • Stakeholders’ confirmation that the evaluation accurately represents their views is evidence that the evaluator kept bias in check.

Rad Resources: 

Learn more about Constructivist Evaluation with this checklist:

Guba, Egon G.; Lincoln, Yvonna S. (2001). Guidelines and Checklist for Constructivist (a.k.a. Fourth Generation) Evaluation. From http://www.wmich.edu/sites/default/files/attachments/u350/2014/constructivisteval

For a few examples of how to apply the FGE to real programs, check out:

