AEA365 | A Tip-a-Day by and for Evaluators


This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Sharon Rallis, a former AEA President and editor of the American Journal of Evaluation. Carol Weiss was a pioneering sociologist and program evaluator who helped create the field of evaluation. She was my advisor and teacher, and taught me how evaluations can be used “to improve policy and programming for the well-being of all” (1998, p. ix).

Carol H. Weiss (1927-2013)


Carol Weiss believed that understanding and using evaluation means integrating theory with practice, a perspective exemplified in the 1995 article she wrote for the Aspen Institute about the importance of basing evaluations on the solid theories of change that underlie interventions. This article, “Nothing as practical as a good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families”, became a classic. Today we would say: it went viral. Both the article and the phrase “nothing as practical as a good theory” remain among the most influential, if not the most influential, in the history of program evaluation. That influence shows in the fact that virtually every philanthropic foundation, major government agency, nonprofit, and international development organization now requires that a theory of change be included in funding proposals and development initiatives.

Carol was reacting to millions of dollars being poured into community change efforts with little recognition of contextual complexities. Theory-based evaluation asks program practitioners to make their assumptions explicit and to reach consensus with their colleagues about what they are trying to do and why. While difficult, these conversations help practitioners reach shared understandings and offer evaluators insight into the “leaps of faith” (p. 72) embedded in their formulations of programs. She wasn’t just suggesting that a bunch of program people get together to share ignorance and biases, and fabricate a theory of change out of thin air, though that’s often what happens; rather, she proposed that they grapple with how their intervention, that is, what they do, connects with intended outcomes. Weiss reported her experience that “Program developers with whom I have worked sometimes find this exercise as valuable a contribution to their thinking as the results of the actual evaluation. They find that it helps them re-think their practices and over time leads to greater focus and concentration of program energies” (p. 72).

Lessons Learned:

Evaluations that address the theoretical assumptions embedded in programs may have more influence on both policy and popular opinion. According to Carol, “theories represent the stories that people tell about how problems arise and how they can be solved” (p. 72). We all have stories about the causes of and solutions to social problems, and these stories – or theories – accurate or not, play powerful roles in policy discussions. “Policies that seem to violate the assumptions of prevailing stories will receive little support” (p. 72). It follows that evaluations grounded in clear and shared theories of change can inform and influence policy discourse.

To summarize, Carol Weiss wrote: “Grounding evaluation in theories of change takes for granted that social programs are based on explicit or implicit theories about how and why the program will work. The evaluation should surface those theories and lay them out in as fine detail as possible, identifying all the assumptions and sub-assumptions built into the program” (1995, pp. 66-67). The insights she brought in her 11 published books and numerous journal articles have shaped how we think about and practice evaluation today.

Rad Resources:

Weiss, C.H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. Washington, DC: Aspen Institute.

Weiss, C.H. (1998). Evaluation: Methods for studying programs and policies (2nd ed.). Prentice Hall.

Weiss, C.H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21-33.

 

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Steve Powell, a freelance evaluator. I’m really interested in theories of, about, and within evaluation, and the conceptual headaches they can bring with them.

Sometimes as evaluators we are so busy with practical challenges that we don’t have much time to worry about whether there is sufficient agreement on the meaning of the words we use – but that can lead to a lot of unnecessary arguments, especially when we use difficult words like “attribution”, “impact” or “intention”, which different evaluation theories might define in different ways.

When doing workshops, I’ve found it useful to present exaggerated versions of these kinds of problems as evaluation paradoxes. Puzzles and paradoxes have often been used in philosophy, from Zeno and Zen to the Sufi mystics, to help us question and sharpen up our understanding of the words we use and demonstrate the importance of reflecting on evaluation theory. Applying different theories might lead to different conceptual responses to the paradoxes.

Below is one such paradox.

An evaluation puzzle: “Billionaire”

A billionaire left 10 million EUR in his will to establish a trust, with instructions that it should be used to “just do good”.

During the ten years since then, support for same-sex marriage has shifted from 10% to 80% public approval, and almost all liberals are now in favour.

In the ninth year, the trust gives 1 million to a campaign for a law on marriage equality, which substantially contributes to the passing of the law in the tenth year.

We don’t know for sure, but most likely the billionaire, like most of his peers and friends, did not support marriage equality when he was alive. But most of his peers and friends now support it.

Now, in the 11th year, you are asked to evaluate whether the trust was used effectively and whether the activities were relevant to the intentions of the billionaire.

If we tried to follow positivistic evaluation principles and retrospectively operationalised “doing good” by providing concrete indicators for it, we would have to decide whether to do this in a way which is acceptable now or which would have been acceptable ten years ago.  Whereas if we followed Michael Scriven’s ideas about the logic of valuing, we might even try to argue that there are ways of deciding what “doing good” is which are based on facts and not merely on what we or other people value.

So different evaluation theories give us different ways of deciding what “good” means.

And so, reflecting on evaluation paradoxes can help us to sharpen up our understanding of the words and concepts we use, and help us understand the importance of evaluation theories and their implications in practice.

Rad Resource:

I’ve posed a few more evaluation paradoxes here.

If you know of any similar paradoxes, I’d be really interested in hearing about them.

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello.  I’m Eric Barela, Measurement & Evaluation Senior Manager at Salesforce.org and current Board Member of the American Evaluation Association.  I’m here to share how I use prescriptive evaluation theories in my everyday practice.

I have been an internal evaluator for over 15 years and have worked in a variety of organizations, from school districts to nonprofits to social enterprises.  Given the variety of organizations I have worked in, I have found that I tend to apply a variety of prescriptive theories, which are approaches generated by well-known evaluation scholars that serve as guides for different types of evaluation practice (e.g., Patton’s utilization-focused evaluation).  It all depends on what I need to ensure that I generate findings that are both useful and used.

While I use different theories to guide different evaluations, I often find myself needing to use multiple theories within the same evaluation.  I engage in quite a bit of what Leeuw & Donaldson refer to as theory knitting.  I like to think of myself knitting multiple prescriptive theories into a nice descriptive theory I can apply to my internal evaluation work.  I often find myself drawing from the following prescriptive theories:

  • House’s social justice evaluation to give voice to those who may be silenced within the organization;
  • House & Howe’s deliberative democratic evaluation to determine recommendations by considering relevant interests, values, and perspectives and by engaging in extensive dialogue with stakeholders;
  • Chen’s theory-driven evaluation when an organization has been implementing a program without properly understanding the underlying theory under which it is meant to operate; and
  • Cousins’ participatory evaluation when my colleagues are sophisticated enough in their understanding of the evaluation enterprise (and are willing to set aside time to take part).

While I will often knit these prescriptive theories together in different combinations to guide my practice, there is one theory that always guides my approach: Patton’s utilization-focused evaluation.  As I wrote above, I need to ensure that I generate findings that are both useful AND used.  There is a big difference between useful findings and used findings.  As an internal evaluator I need to add value to the organization.  I can create an incredible evaluation report; however, if I deliver a report that does not resonate with my colleagues and they decide to not take action based on my recommendations, I could be out of a job.  As I have transitioned to the social enterprise sector, the ability to produce and add immediate value has become especially important.

To sum up, I knit together a variety of prescriptive theories to form a descriptive evaluation theory that guides my practice.  However, it is my focus on utilization that determines what theories I knit together.

Cool Trick:

Consider prescriptive theories as approaches you can use as needed, depending on the evaluation scenario.  As you start the evaluation process, do a theory assessment: something similar to a needs assessment, but focused on determining which theories might best serve the organization.  Ask yourself if there are some theories that will work better than others.

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Gisele Tchamba, past Chair of the Behavioral Health TIG and Founder of ADR Evaluation Consulting.

Valuing from the Fourth Generation Evaluation (FGE) Perspective – Evaluation theorists in the valuing branch of Alkin’s theory tree have different views of valuing. Some share the perspectives of Third Generation Evaluation, in which the evaluator, as judge, determines the value of the evaluand. Others, the FGE theorists, espouse the notion that the evaluator, as mediator, facilitates the placing of value by others. I appreciate the rigor and discipline of the FGE methodology, which is based on constructivism (subjective reality). FGE empowers stakeholders by involving them in the determination of the worth of the evaluand, and it brings together a diverse group of stakeholders to examine an issue of concern.

How it Works – I applied FGE to understand primary care providers’ perspectives on the health benefits of moderate drinking.

Method

Data collection: this was a series of interrelated activities aimed at gathering good information to answer emergent evaluation questions. Stakeholders were asked how they perceived the effects of moderate alcohol consumption on health and what influenced their perceptions.

Sampling: I used a theoretical sample of individuals who contributed to building the open and axial coding of the theory. In keeping with the FGE procedure, I collected data from nine stakeholders, all Physician Assistants (PAs). The PAs had expert knowledge and understanding of the health benefits of moderate drinking, which led to the theory of “conflict”.

Data analysis: In FGE, data collection and analysis are conducted simultaneously. Following the FGE method, initial interviews were analyzed, and that analysis guided the subsequent interviews. This process continued until data saturation was achieved. The FGE constant comparative method was employed, which involved repeatedly comparing codes to codes. Codes were turned into categories through axial coding. This led to the formation of four main constructs from which the central construct, or theory, was developed. The theory was sent to stakeholders for confirmation or disconfirmation.

Lessons Learned:

  • FGE is often misunderstood and seldom used in evaluation programs.
  • FGE’s methods demand meticulous rigor and trustworthiness.
  • Data analysis for FGE is complex, nonlinear, messy, yet rewarding.
  • Keep an open mind to field-based concerns, but the FGE has considerable strengths.
  • Stakeholders’ confirmation that the evaluation accurately represents their views is evidence that the evaluator kept their bias in check.

Rad Resources: 

Learn more about Constructivist Evaluation with this checklist:

Guba, Egon G., & Lincoln, Yvonna S. (2001). Guidelines and checklist for constructivist (a.k.a. fourth generation) evaluation. Retrieved from http://www.wmich.edu/sites/default/files/attachments/u350/2014/constructivisteval

For a few examples of how to apply the FGE to real programs, check out:

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi y’all! I’m Jessica Shaw, Assistant Professor in the Boston College School of Social Work, and Program Chair of the Theories of Evaluation TIG. I teach program evaluation to social work clinicians in their final semester in graduate school. They are an eager bunch—to graduate that is, not so much to learn about evaluation theory. Here is how I teach evaluation theory…to non-evaluators…in one class session…that is only two hours long.

Hot Tips:

  1. Use Reading Groups: I find that the more readings I assign my students, the less likely they are to read them. Thus, I try to assign no more than 30 pages of reading each week. It is quite challenging to expose students to a number of different evaluation theories with enough depth on each for them to understand their differences in just 30 pages. Enter reading groups. Instead of having every student in the class read all assigned readings, I split them up into groups, and assign each a subset of the readings for the week. This semester, one group read about participatory evaluation (both the practical and transformative threads); another about culturally responsive evaluation; a third about utilization-focused evaluation; and a fourth about both theory-driven and goals-free evaluation (enabling them to compare and contrast the two).
  2. Have them teach one another: When assigning the readings, I let my students know that they are tasked with becoming as great an expert as they can in the short time allowed and few readings required, as they will be responsible for teaching their classmates about the theory they read. In class, each group is given the floor for 10-15 minutes to teach on their theory—What is this theory? What are its core principles or defining features? What is the role of the evaluator? Who participates in the evaluation? Are there special steps that must be taken? What should result if this theory is implemented as intended? After providing their instruction, each group fields questions from their classmates.
  3. Provide examples: I sit back as each group teaches. Once all questions have been answered, I step in, providing necessary clarifications, and also specific examples from my own work.
  4. Get excited: I love theory. I can’t hide it, nor do I want to do so. When my students are able to see how excited I get in discussing theory, its nuance, and how it can have dramatic impacts on how we think about and make key decisions in evaluation, they get excited, too. And they engage. This semester, students were not required to discuss evaluation theory in any of their assignments. (Indeed, this is the first time evaluation theory has been explicitly taught as a part of our required program evaluation course.) Yet, several of them wrote about it anyway, explaining what theory was guiding their evaluative decisions, and why. Who would have thought?

We shouldn’t just leave evaluation theory for theorists. It’s for everyone—practitioners, too.

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! My name is Lyssa Wilson Becho and I currently serve as the Chair of the Theories of Evaluation TIG. I am also an evaluator at Western Michigan University’s Evaluation Center and a doctoral student in the Interdisciplinary Ph.D. in Evaluation at Western Michigan University.

I am excited to kick off this AEA365 week sponsored by the Theories of Evaluation TIG. Our TIG began back in 1992 with the express purpose of encouraging and supporting the development, critique, and application of theoretical aspects of evaluation practice and research.  Throughout this week, we hope that you learn something about evaluation theory you might not have known before, are challenged to think of evaluation theory in a different light, or are inspired to learn more about how evaluation theory applies to your own practice.

Why should you care about evaluation theory?

One of my favorite descriptions of evaluation theory is by Thomas Schwandt (2014). He describes evaluation theory as “repertoires of concepts, insights, explanations, and tools that professional practitioners can use as heuristics, tools ‘to think with.’ They are aids to the evaluation imagination, as practitioners come to understand the problems before them and how those problems might be solved.” (p. 234).

Evaluation theories help us think about the practical problems we face in evaluation. What design would produce the most credible results for stakeholders? What and whose values should be included when writing evaluation questions? How much, and at what points, should stakeholders be involved in the evaluation process? These are the questions that evaluation theory helps us to intentionally consider and respond to.  They help us navigate the hard decisions we are asked to make in the messy world of practice. Evaluation theories help us think differently about the possibilities of approaches, methods, and techniques we could use in our practice.

Rad Resources:

  • If you’re new to the idea of evaluation theory and want to learn more, there are a few central texts that can get you started. Alkin’s Evaluation Roots (2013) traces the history and epistemological origins of evaluation theorists, weaving them together into the evaluation theory tree. Texts by Stufflebeam and Coryn or Mertens and Wilson also help lay out the available theories and compare their utility in practice.
  • You can also learn about different evaluation approaches through checklists, or Better Evaluation’s repository of resources.
  • For more stories about evaluation theory used in practice, check out Jody Fitzpatrick’s Exemplar Evaluations interview with Katrina Bledsoe. These interviews turned into a collection of stories about evaluation in action, a book worth checking out if you’re interested in hearing more stories of evaluation theory in practice.
  • Our TIG hopes to add to the existing resources that make evaluation theory accessible and applicable to practitioners. We want to start by creating a collective bibliography on evaluation theory in the upcoming months. So, keep an eye on our website!

The American Evaluation Association is celebrating Theories of Evaluation  TIG week. All posts this week are contributed by members of the TOE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I am Carolyn Cohen, owner of Cohen Research & Evaluation, LLC, based in Seattle, Washington. I specialize in program evaluation and strategic learning related to innovations in the social change and education arenas.  I have been infusing elements of Appreciative Inquiry into my work for many years.  Appreciative Inquiry is an asset-based approach, developed by David Cooperrider in the 1980s for use in organizational development. It has more recently been applied in evaluation, following the release of Reframing Evaluation through Appreciative Inquiry by Hallie Preskill and Tessie Catsambas in 2006.

Lessons Learned:

Appreciative Inquiry was originally conceived as a multi-stage process, often requiring a long-term time commitment. This comprehensive approach is called for in certain circumstances. However, in my practice I usually infuse discrete elements of Appreciative Inquiry on a smaller scale.  Following are two examples.

  • Launching a Theory of Change discussion. I preface Theory of Change conversations by leading clients through an abbreviated Appreciative Inquiry process.  This entails a combination of paired interviews and team meetings to:
    • identify peak work-related experiences
    • examine what contributed to those successes
    • categorize the resulting themes.

The experience primes participants to work as a team to study past experiences in a safe and positive environment. They are then able to craft strategies, outcomes, and goals. These elements become the cornerstone of developing a Theory of Change or a strategic plan, as well as an evaluation plan.

  • Conducting a needs assessment. Appreciative interviews followed by group discussions are a perfect approach for facilitating organization-wide or community meetings as part of a needs assessment process. AI methods are based on respectful listening to each other’s stories, and are well suited for situations where participants don’t know each other or have little in common.

Using the resources listed below, you will find many more applications for Appreciative Inquiry in your work.

Rad Resources:

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Elizabeth O’Neill, Program Evaluator for Oregon’s State Unit on Aging and President-Elect of the Oregon Program Evaluators Network. I found myself on an unlikely route to becoming an evaluator, starting as a nonprofit program manager. As I witnessed the amazing dedication to producing community-based work, I wanted to know that the effort was substantiated. By examining institutional beliefs that a program was “helping” intended recipients, I found my way as a program evaluator and performance auditor for state government.  I wanted to share my thoughts on the seemingly oxymoronic angle I take to convince colleagues that we do not need evaluation, at least not for every part of service delivery.

In the last few years, I have found tremendous enthusiasm in the government sector for demonstrating progress towards protecting our most vulnerable citizens. As evaluation moves closer to program design, I now develop logic models as the grant is written rather than when the final report is due. Much of my work involves leading stakeholders in conversations to operationalize their hypotheses about theories of change. I draw extensively from a previous OPEN conference keynote presenter, Michael Quinn Patton, and his work on utilization-focused evaluation strategies to ensure evaluation serves its intended use by intended users. So you would think I would be thrilled to hear the oft-mentioned workgroup battle cry that “we need more metrics.”  Instead, I have found that this idea tends to produce more navel-gazing than meaningful action.  I have noticed how metrics can be developed to quantify that work got done, rather than to measure the impact of our work.

Lesson Learned: The excitement about using metrics stems from wanting to substantiate our efforts and to feel accomplished with our day-to-day activities. While process outcomes can be useful to monitor, the emphasis has to remain on long-term client outcomes.

Lesson Learned: As metrics become common parlance, evaluators can help move performance measurement to performance management so the data can reveal strategies for continuous improvement. I really like OPEN’s founder Mike Hendricks’ work in this area.

Lesson Learned: As we experience this exciting cultural shift to relying more and more on evaluation results, we need to have cogent ways to separate program monitoring, quality assurance and program evaluation.  There are times when measuring the number of times a workgroup convened may be needed for specific grant requirements, but we can’t lose sight of why the workgroup was convened in the first place.

Rad Resource: Stewart Donaldson of Claremont Graduate University spoke at OPEN’s annual conference this year to a spectacular response. Program Theory-Driven Evaluation Science: Strategies and Applications by Dr. Donaldson is a great book for evaluating program impact.

The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we’re Abhik Roy and Kristin A. Hobson, students and Doctoral Associates (we know what you’re thinking…wow…they must be rich) in the Interdisciplinary Ph.D. in Evaluation (IDPE) at Western Michigan University (WMU), and Dr. Chris L. S. Coryn, Professor of Evaluation, Measurement, and Research and Director of the IDPE (our boss…please tell him to pay us more). Recently, Abhik formalized a Scriven number, and we wrote a paper on it entitled “What’s in a Scriven Number?”

Lesson Learned: What’s so important about a Scriven number? Since the article appeared, evaluators have been asking each other, “What’s your Scriven number?” Perhaps you’re new to the field of evaluation and have no idea what this means or why it is significant. Dr. Michael Scriven is widely considered the father of modern evaluation. His influence on the field of evaluation, both theoretical and applied, has been quite significant; his manuscripts number over 400. In addition, Dr. Scriven is a past president of the American Educational Research Association and the American Evaluation Association. He is also an editor and co-founder of the Journal of MultiDisciplinary Evaluation.

Cool Trick: Determining your Scriven number. You may be asking, what’s a Scriven number? Well, that’s what we’re here to explain. To put it simply, a Scriven number is a measure of the collaborative distance, through both direct and indirect authorship, between a person and Dr. Scriven. Ok, maybe that wasn’t so simple. Let’s try explaining this in a different way. A Scriven number is how far you, as an author of a published paper, are away from Dr. Scriven. In other words, Dr. Scriven has a Scriven number of zero, a person who has written a paper with Dr. Scriven has a Scriven number of one, a person who has written a paper with another person who wrote a paper with Dr. Scriven has a Scriven number of two, and so on. For example, using the paper Cook, Scriven, Coryn, and Evergreen (2010), Cook, Coryn, and Evergreen each have a Scriven number of one. Now anyone who has published with Cook, Coryn, or Evergreen receives a Scriven number of two, unless that person has published with Dr. Scriven directly, in which case the person has a Scriven number of one. If a person could be assigned multiple Scriven numbers, his or her Scriven number is the lowest of them.
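Since a Scriven number is just the shortest-path distance between an author and Dr. Scriven in the co-authorship graph, it can be computed with a breadth-first search. Below is a minimal sketch in Python; the list of papers is hypothetical (only the first entry reflects the real Cook, Scriven, Coryn, and Evergreen example above) and is there simply to show how the "lowest number wins" rule falls out of the search.

```python
from collections import deque

def scriven_numbers(papers, root="Scriven"):
    """Compute each author's Scriven number by breadth-first search over
    the co-authorship graph implied by a list of papers (author lists)."""
    # Two authors are linked if they appear together on at least one paper.
    graph = {}
    for authors in papers:
        for a in authors:
            graph.setdefault(a, set()).update(b for b in authors if b != a)

    # BFS from Dr. Scriven; authors not connected to him get no number.
    distances = {root: 0}
    queue = deque([root])
    while queue:
        current = queue.popleft()
        for coauthor in graph.get(current, ()):
            if coauthor not in distances:  # first visit gives the shortest distance
                distances[coauthor] = distances[current] + 1
                queue.append(coauthor)
    return distances

# Hypothetical author lists (only the first reflects the 2010 paper cited above).
papers = [
    ["Cook", "Scriven", "Coryn", "Evergreen"],  # direct co-authors -> Scriven number 1
    ["Coryn", "Hobson"],                        # Hobson publishes with Coryn -> 2
    ["Hobson", "Roy"],                          # Roy publishes with Hobson -> would be 3...
    ["Roy", "Scriven"],                         # ...but Roy also publishes with Scriven -> 1
]
print(scriven_numbers(papers))
# e.g. Scriven: 0; Cook, Coryn, Evergreen, Roy: 1; Hobson: 2
```

With real bibliographic data, the only change would be reading the author lists from a citation database instead of the hand-typed list above.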

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I am Maxine Gilling, Research Associate for Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP). I recently completed my dissertation entitled How Politics, Economics, and Technology Influence Evaluation Requirements for Federally Funded Projects: A Historical Study of the Elementary and Secondary Education Act from 1965 to 2005. In this study, I examined the interaction of national political, economic, and technological factors as they influenced the concurrent evolution of federally mandated evaluation requirements.

Lessons Learned:

  • Program evaluation does not take place in a vacuum. The field and profession of program evaluation have grown and expanded over the last four decades and eight administrations due to political, economic, and technological factors.
  • Legislation drives evaluation policy. The Elementary and Secondary Education Act (ESEA) of 1965 established policies to provide “financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational program” (Public Law 89-10—Apr. 11, 1965). This legislation also had another consequence: it helped drive the establishment of educational program evaluation and the field of evaluation as a profession.
  • Economics influences evaluation policy and practice. For instance, in the 1980s evaluation took a downturn due to stringent economic policies, and program evaluators turned to sharing lessons learned by writing journal articles and books.
  • Technology influences evaluation policy and practice. The rapid emergence of new technologies contributed to changing the goals, standards, methods, and values underlying program evaluation.

Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

