AEA365 | A Tip-a-Day by and for Evaluators

TAG | theory

Greetings! Welcome to Community Psychology TIG Week! I, Carissa Coleman, a Community Psychologist at James Bell Associates, along with the other members of the TIG Leadership Team, welcome you to a week of Community Psychology and our influence on evaluation work.

Our Community Psychology ideals spread across many disciplines, including psychology, social work, education, medicine, and all types of prevention work.

We invite you to visit our website to learn more.

Moving our Field: Toward Theory, Systems, and Dynamic Methods

As a Community Psychologist, I, Leonard A. Jason from DePaul University, would like to offer three ideas that have the potential to energize and transform our field. They involve theoretical perspectives, appreciation of the complexities of the natural world, and dynamic methodological tools that can be used to capture these complex processes.

Many of us work in the field of evaluation to better understand the relationship between people and their contexts in ways that might alleviate human suffering. Yet, as argued in a recent special issue on Theories in the Field of Community Psychology, the ideological nature of our work, which prioritizes efforts to improve people’s lives, can result in less willingness to consider the possible contribution of theory. I am not arguing that our work will coalesce around only one theory, but I believe there has been an unfortunate reluctance to attempt to develop predictive theory, in part because it is seen as a distraction from taking action. However, there is no obvious reason why sound theory cannot be developed that increases the effectiveness of our social action efforts and accomplishes our goal of better understanding the complexities of people and groups living within multifaceted ecosystems.

Theory must contend with a natural world that is endlessly beautiful and elegant, but also one that often feels mysterious, unpredictable, and filled with contradictions. Dynamic feedback loops are the norm within this organic stew, and as a consequence, our work would be more contextually rich if it transcended reductionistic, simplistic, linear cause-and-effect methods. Theories can help us capture a systems point of view, in which the reality of an ever-changing world is made up of mutual interdependencies in how people adapt to and become effective in diverse social environments.

Rad Resource: Are there methods that help us conceptualize and empirically describe these transactional dynamics? There are, such as those contained within the Handbook of Methodological Approaches to Community-Based Research, which profiles a new generation of quantitative and qualitative research methods that are holistic and culturally valid and that support contextually and theoretically grounded community interventions. Mixing qualitative and quantitative methods can provide deeper exploration of causal mechanisms, interpretation of variables, and contextual factors that may mediate or moderate the topic of study. Theories and sophisticated statistical methods can help us address questions of importance to the communities in which, and with whom, we work by capturing the dynamics of complex systems, giving us the potential to transform our communities in fresh and innovative ways.

The American Evaluation Association is celebrating Community Psychology TIG Week with our colleagues in the CP AEA Topical Interest Group. The contributions all this week to aea365 come from our CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send us a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Johanna Morariu, Senior Associate with Innovation Network. Innovation Network is an evaluation consulting firm that serves nonprofits and funders and works to build the sector’s evaluation capacity. I lead Innovation Network’s environmental evaluation work, with a strong concentration on environmental advocacy evaluation.

A few months ago we began working with the Post Carbon Institute to develop an organizational evaluation framework. The Post Carbon Institute provides individuals, communities, businesses, and governments with the resources needed to understand and respond to the interrelated economic, energy, environmental, and equity crises that define the 21st century. Because the organization identifies as both a think tank and an advocacy organization, the evaluation team knew both elements needed to be present in the evaluation framework.

Since Innovation Network has been at the forefront of the advocacy evaluation field’s development, over the years we have amassed ample advocacy evaluation resources (free login required). Early on we identified a need to supplement our existing expertise with a field scan of the think tank evaluation literature. We found four resources that were particularly helpful, and we synthesized them to create a matrix of assessment areas.

Rad Resources:

  • Donald E. Abelson (2010). Is Anybody Listening? Assessing the Influence of Think Tanks. Chapter 1 in the edited volume, Think Tanks and Public Policies in Latin America.
  • Richard Bumgarner, Douglas Hattaway, Geoffrey Lamb, James G. McGann, and Holly Wise (2006). Center for Global Development: Evaluation of Impact. Arabella Philanthropic Investment Advisors, LLC for the Bill & Melinda Gates Foundation, the William and Flora Hewlett Foundation, the John D. and Catherine T. MacArthur Foundation, and the Rockefeller Foundation.
  • Ingie Hovland (2007). Making a Difference: M&E of Policy Research. Working paper 281 for the Overseas Development Institute, London, UK.
  • James G. McGann (2006). Best Practices for Funding and Evaluating Think Tanks & Policy Research. McGann Associates for the William and Flora Hewlett Foundation.

The next step was to fine-tune the areas of assessment for think tank evaluation and advocacy evaluation with interview data collected from key informants. Then, in consultation with key staff from the Post Carbon Institute, we created an organizational theory of change. The theory contains information about the organization’s mission; audiences; strategies; focusing events, crises, and windows of opportunity; desired shifts; and impact.

From the theory of change, the evaluators developed corresponding outcomes, indicators, and data collection tools designed to provide actionable information about organizational strategy. A three-tier approach to data collection was recommended: ongoing (e.g., meeting tracking, media tracking, champion tracking), annual (e.g., Bellwether interviews, partner survey, capacity assessment), and as needed (e.g., Intense Period Debrief).

The American Evaluation Association is celebrating Earthweek with our colleagues in the Environmental Program Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send us a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Carey Tisdal, Director of Tisdal Consulting, an independent firm that evaluates informal learning environments. Informal learning environments include museums (art, history, science, and children’s museums), science-technology centers, zoos, aquaria, parks, television, and radio. I worked as an internal evaluator for nine years and as an external evaluator for six. Recently, field-building and professional development have been the focus of several projects funded by the National Science Foundation. I am evaluating one of these projects, ExhibitFiles. ExhibitFiles is an online community for exhibit designers and exhibition developers. One goal of the site is to provide a place where exhibition developers find out about each other’s work. Members can upload case studies, reviews of exhibits they have visited, and useful “bits” about exhibit design processes and materials. Evaluation reports may be attached to case studies. A related goal is the development of professional networks for the sharing of expertise. Registered members post profiles and contact information. My Visitor Studies Week blog for AEA365 shares an important insight about continuing to learn as we do our work.

Lessons Learned: Actually, lessons re-learned! In this project, the client and I have found formal theory very helpful in thinking about the site and understanding how people use it. I was reminded of Kurt Lewin’s wonderful 1951 pronouncement that “there is nothing so practical as a good theory.” We found theories comparing and contrasting communities of practice and communities of interest in the use of digital information (Hoadley & Kilner, 2005) especially helpful in understanding how exhibition developers incorporated the site experience into their work. For example, specific reviews sometimes serve as boundary objects that allow people working in different disciplinary areas, and with different training and experiences, to develop a common language about a design topic. Since this site is only one element in a range of professional development activities, we have used concepts about the ecology of learning (Brown, 1999) to begin understanding the role of ExhibitFiles as one among a set of professional development activities in which exhibition developers participate. Using a theoretical lens as part of the evaluation has helped the project team (clients) and the evaluators develop a common language and set of ideas to support their decisions about updating the site and planning its future. Formal theory can sometimes be a boundary object for evaluators and clients.

Rad Resources:

  • Brown, J.S. (1999). Presentation at the Conference on Higher Education of the American Association for Higher Education. Retrieved August 15, 2010 from
  • Hoadley, C.M., & Kilner, P.G. (2005). Using technology to transform communities of practice into knowledge-building communities. SIGGROUP Bulletin, 25(1), 31-40.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. We are pleased to welcome colleagues from the Visitor Studies Association – many of whom are also members of AEA – as guest contributors this week. Look for contributions preceded by “VSA Week” here and on AEA’s weekly headlines and resources list.


My name is Jack Mills; I’m a full-time independent evaluator with projects in K-12 and higher education. I took my first course in program evaluation in 1976. After a career in healthcare administration, I started work as a full-time evaluator in 2001. The field had expanded tremendously in those 25 years. As a time traveler of sorts, I found the biggest change to be the bewildering plethora of writing on theory in evaluation. Surely this must be as daunting for students and newcomers to the field as it was for me.

Rad Resource: My rad resource is like the sign on the wall at an art museum exhibit—that little bit of explanation that puts the works of art into a context, taking away some of the initial confusion about what it all means. Stewart Donaldson and Mark Lipsey’s 2006 article explains that there are three essential types of theory in evaluation: 1) the theory of what makes for a good evaluation; 2) the program theory that ties together assumptions that program operators make about their clients, program interventions and the desired outcomes; and 3) social science theory that attempts to go beyond time and place in order to explain why people act or think in certain ways.

As an example, we used theory to evaluate a training program designed to prepare ethnically diverse undergraduates for advanced careers in science. Beyond coming up with a body count of how many students advanced to graduate school, we wanted to see if the program had engendered a climate that might have impacted their plans. In this case, the program theory is that students need a combination of mentoring, research experience, and support to be prepared to move to the next level. The social science view is that students also need to develop a sense of self-efficacy and the expectation that advanced training will lead to worthwhile outcomes, such as the opportunity to use one’s research to help others. If the social science theory has merit, a training program designed to maximize self-efficacy and outcome expectations would be more effective than one that only places students in labs and assigns them mentors. An astute program manager might look at the literature on the sources of self-efficacy and engineer the program to reinforce opportunities that engender it.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Jack, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

Hi! My name is Michael Szanyi. I am a doctoral student at Claremont Graduate University. I’ve been studying which areas practitioners think need more research on evaluation, and I’d like to share a rad resource with you.

Rad Resource: Whenever I need inspiration to come up with a research on evaluation idea, I refer to Melvin Mark’s chapter “Building a Better Evidence Base for Evaluation Theory” in Fundamental Issues in Evaluation, edited by Nick Smith and Paul Brandon. I re-read this chapter every time I need to remind myself of what research on evaluation actually is and when I need to get my creative juices flowing.

I think this is a rad resource because:

  • Mark explains why research on evaluation is even necessary, citing both potential benefits and caveats to carrying out research on evaluation.
  • The chapter outlines four potential subjects of inquiry (context, activities, consequences, professional issues) that can spark ideas in those categories, in subcategories, and in entirely different areas altogether.
  • The resource also describes four potential inquiry modes that you could use to carry out whatever ideas begin to emerge.
  • Particularly for my demographic, it helps those in graduate programs come up with potential research and dissertation topics.

Although research on evaluation is a contentious topic in some quarters of the evaluation community, this resource helps to remind me that research on evaluation can be useful. It can help to build a better evidence base upon which to conduct more efficient and effective evaluation practice.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing by sending us a note of interest.


