This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).
My name is Richard Krueger and I was on the AEA Board in 2002 and AEA President in 2003.
In 2002 and 2003, the American Evaluation Association (AEA) for the first time adopted and disseminated formal positions aimed at influencing public policy. Both the statements and the process of creating and endorsing them were controversial; some prominent AEA members left the Association in vocal opposition to taking such positions. More recently, AEA joined in endorsing the 2017 and 2018 Marches for Science. Here are the two original statements that first involved AEA in staking out public policy positions.
2002 Position Statement on HIGH STAKES TESTING in PreK-12 Education
High stakes testing leads to under-serving or mis-serving all students, especially the most needy and vulnerable, thereby violating the principle of “do no harm.” The American Evaluation Association opposes the use of tests as the sole or primary criterion for making decisions with serious negative consequences for students, educators, and schools. The AEA supports systems of assessment and accountability that help education.
2003 Position Statement on SCIENTIFICALLY BASED EVALUATION METHODS
The AEA statement was developed in response to a Request for Comment published in the Federal Register by the Secretary of the US Department of Education. It was reviewed and endorsed by the 2003 and 2004 Executive Committees of the Association.
The statement included the following points:
(1) Studies capable of determining causality. Randomized controlled trials (RCTs) are not the only studies capable of generating understandings of causality. In medicine, causality has been conclusively shown in some instances without RCTs, for example, in linking smoking to lung cancer and infested rats to bubonic plague. The proposal would elevate experimental designs over quasi-experimental, observational, single-subject, and other designs that are sometimes more feasible and equally valid.
RCTs are not always best for determining causality and can be misleading. RCTs examine a limited number of isolated factors that are neither limited nor isolated in natural settings. The complex nature of causality and the multitude of actual influences on outcomes render RCTs less capable of discovering causality than designs sensitive to local culture and conditions and open to unanticipated causal factors.
RCTs should sometimes be ruled out for reasons of ethics.
(2) The issue of whether newer inquiry methods are sufficiently rigorous was settled long ago. Actual practice and many published examples demonstrate that alternative and mixed methods are rigorous and scientific. Discouraging a repertoire of methods would force evaluators backward. We strongly disagree that the methodological "benefits of the proposed priority justify the costs."
(3) Sound policy decisions benefit from data illustrating not only causality but also conditionality. Fettering evaluators with unnecessary and unreasonable constraints would deny information needed by policy-makers.
While we agree with the intent of ensuring that federally sponsored programs be “evaluated using scientifically based research . . . to determine the effectiveness of a project intervention,” we do not agree that “evaluation methods using an experimental design are best for determining project effectiveness.” We believe that the constraints in the proposed priority would deny use of other needed, proven, and scientifically credible evaluation methods, resulting in fruitless expenditures on some large contracts while leaving other public programs unevaluated entirely.
AEA members have connections within governments, foundations, non-profits, and educational organizations, and perhaps our most precious gift is helping society in general (and decision-makers specifically) to make careful and thoughtful decisions using empirical evidence.
The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.