AEA365 | A Tip-a-Day by and for Evaluators


My name is Cheryl Peters and I am the Evaluation Specialist for Michigan State University Extension, working across all program areas.

Measuring collective impact of agricultural programs in a state with diverse commodities is challenging. Many states have an abundance of natural resources like fresh water sources, minerals, and woodlands. Air, water and soil quality must be sustained while fruit, vegetable, crop, livestock and ornamental industries remain efficient in yields, quality and input costs.

Extension’s outreach and educational programs operate on different scales in each state of the nation: individual efforts, issue-focused work teams, and work groups based on commodity types. Program evaluation efforts contribute to statewide assessment reports demonstrating the value of Extension agricultural programs, including public value. Working across these different program scales allows applied researchers to align with the same outcome indicators as program staff.

Hot Tip: Just as Extension education has multiple pieces (e.g., visits, meetings, factsheets, articles, demonstrations), program evaluation has multiple pieces (e.g., individual program evaluation about participant adoption practices, changes in a benchmark documented from a secondary source, and impact assessment from modeling or extrapolating estimates based on data collected from clientele).

Hot Tip: All programs should generate evaluation data related to identified, standardized outcomes. What differs in the evaluation of agriculture programs is the evaluation design, including the sample and the calculation of values. Impact reports may be directed at commodity groups, the legislature, farming groups, and constituents. State Extension agriculture outcomes can use the USDA impact metrics. Additionally, 2014 federal requirements for competitive funds now state that projects must demonstrate impact within the project period. Writing meaningful outcome and impact statements continues to be a focus of the USDA National Institute of Food and Agriculture (NIFA).

Hot Tip: Standardizing indicators into measurable units has made aggregation of statewide outcomes possible. Examples include pounds or tons of an agricultural commodity, dollars, acres, number of farms, and number of animal units. Units are then reported by the practice adopted. Dollar values estimated by growers/farmers are extrapolated from research values or secondary data sources.
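To make the "standardized units" idea concrete, here is a minimal sketch (in Python) of the aggregation and dollar-extrapolation step described above. The practice names, units, and per-unit dollar values are hypothetical placeholders, not actual Michigan figures or extrapolation values.

```python
# Minimal sketch: roll up program-level records (reported in standardized
# units) into statewide totals, then apply per-unit dollar extrapolation
# values drawn from research or secondary sources.
# All names and numbers below are hypothetical illustrations.

from collections import defaultdict

# Each record: (practice adopted, unit, amount) reported by one program.
program_records = [
    ("cover_cropping", "acres", 120),
    ("cover_cropping", "acres", 85),
    ("improved_feed_ration", "animal_units", 40),
]

# Hypothetical extrapolation values: dollars per unit for each practice.
dollars_per_unit = {
    ("cover_cropping", "acres"): 25.0,
    ("improved_feed_ration", "animal_units"): 60.0,
}

# Aggregate units statewide by practice, then estimate dollar impact.
totals = defaultdict(float)
for practice, unit, amount in program_records:
    totals[(practice, unit)] += amount

for (practice, unit), amount in totals.items():
    estimated_dollars = amount * dollars_per_unit[(practice, unit)]
    print(f"{practice}: {amount:g} {unit} adopted, "
          f"estimated value ${estimated_dollars:,.0f}")
```

The same structure transfers directly to a shared spreadsheet: one sheet of unit-level records, one sheet of agreed-upon extrapolation values, and a calculation that multiplies the two.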

Hot Tip: Peer learning through panels that demonstrate scales and types of evaluation with examples has been very successful. There are common issues and evaluation decisions across programming areas. Setting up formulas and spreadsheets for future data collection, and sharing extrapolation values, has helped keep program evaluation efforts going. Surveying similar audiences with instruments that combine outcome questions and program needs assessment has also been valuable.

Rad Resource: NIFA provides answers to frequently asked questions such as when to use program logic models, how to report outcomes, and how logic models are part of evaluability assessments.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Laura Downey with Mississippi State University Extension Service. In my job as an evaluation specialist, I commonly receive requests to help colleagues develop a program logic model. I am always thankful when I receive such a request early in the program development process. So, I was delighted a few weeks ago when academic and community colleagues asked me to facilitate the development of a logic model for a grant proposing to use a community-based participatory research (CBPR) approach to evaluate a statewide health policy. For those of you who are not familiar with CBPR, it is a collaborative research approach designed to ensure participation by communities throughout the research process.

As I began to assemble resources to inform this group’s CBPR logic model, I discovered a Conceptual Logic Model for CBPR available on the University of New Mexico’s School of Medicine, Center for Participatory Research, website.


Clipped from http://fcm.unm.edu/cpr/cbpr_model.html

Rad Resource:

What looked like a simple conceptual logic model at first glance was actually a web-based tool complete with metrics and measures (instruments) to assess CBPR processes and outcomes. Over 50 instruments related to the most common concepts in CBPR, such as organizational capacity, group relational dynamics, empowerment, and community capacity, are profiled and available through this tool. Each profile includes the instrument name, a link to the original source, the number of items in the instrument, the concept(s) originally assessed, reliability, validity, and the population with which the instrument was developed.

With great ease, I was able to download surveys to measure the CBPR concepts in the logic model that were relevant to the group I was assisting. Given the policy focus of that specific project, I explored the measures related to policy impact.
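If you want to keep a local catalog of instrument profiles like these for a project team, here is a minimal sketch (in Python) of one way to store the profile fields described above and filter them by concept. The class, field names, and sample entries are illustrative assumptions, not the UNM tool's actual data or interface.

```python
# Sketch: represent instrument profiles (name, source, item count, concepts,
# reliability, validity, population) and filter them by CBPR concept.
# Entries are made-up examples, not instruments from the actual tool.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InstrumentProfile:
    name: str
    source_url: str
    n_items: int
    concepts: List[str] = field(default_factory=list)
    reliability: str = ""
    validity: str = ""
    population: str = ""

catalog = [
    InstrumentProfile("Example Policy Impact Scale", "http://example.org/a",
                      12, ["policy impact"], "alpha = .88",
                      "construct validity reported", "community coalitions"),
    InstrumentProfile("Example Group Dynamics Survey", "http://example.org/b",
                      20, ["group relational dynamics"], "alpha = .91",
                      "content validity reported", "CBPR partnerships"),
]

# Pull out only the measures relevant to a policy-focused project.
policy_measures = [p for p in catalog if "policy impact" in p.concepts]
for p in policy_measures:
    print(f"{p.name}: {p.n_items} items, {p.reliability}, {p.population}")
```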

Hot Tip:

Even if you do not typically take a CBPR approach to program development, implementation, and/or evaluation, the CBPR Conceptual Logic Model website might have a resource relevant to your current or future evaluation work.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Siri Scott, and I work as a Graduate Assistant with the University of Minnesota Extension.

Conducting interviews with youth is one way to gather information for program improvement, especially when you want to bring youth voice into your improvement process. While more time intensive than a survey, interviews provide rich contextual information. Below is a brief overview of the planning process as well as tips for conducting interviews.

Hot Tips: Planning Phase

One main decision is whether you will need IRB approval to conduct the interviews. Even when done for program improvement purposes, it is a good idea to comply with IRB regulations for data practices and protection of youth. Other major decision points in the planning process include how many individuals you will interview, how you will choose your participants, and how you will collect and analyze your data. In addition, you must decide on the purpose of the interview and the type of interview you want to conduct, and then create an interview protocol (if appropriate).

Hot Tips: Conducting interviews

Here are some tips for conducting interviews:

  • Practice: Test the use of the protocol with a colleague (or a young person who you know well) and ask for feedback about the questions themselves and how they fit together.
  • Space: Find a quiet, secluded space for the interview in a public setting (or in a private home if the young person’s guardian can be present). You don’t want other people overhearing your conversation or your participant being distracted by anything.
  • Warm up: Start the interview with some informal chit chat. This will build your rapport with the participant and ease the participant’s (and your) nerves.
  • Probe: If you are not doing a structured interview, make sure to ask participants to clarify or elaborate on their responses. This will provide you with much better data.
  • Notes: If you are using an audio recorder, don’t trust it (fully). Jot down some quick notes during the interview. You can elaborate on these later if the audio recorder malfunctions.
  • Relax! If you mess up, that’s okay. Also, if you’re nervous and tense, the participant will sense that. Do whatever you can to put the participant (and yourself) at ease.
  • Learn More: A good resource for learning how to conduct interviews is this newly released, comprehensive overview of the interviewing process: InterViews: Learning the Craft of Qualitative Research Interviewing (3rd Edition) by Brinkmann and Kvale (2014).

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Howdy! I am Kevin Andrews, a program specialist at Texas A&M AgriLife Extension Service. In addition to my Extension duties, I co-teach a graduate evaluation course at Texas A&M University.

I came across a post from March about students partnering with community agencies to apply their evaluation skills. I’d like to build upon Dr. Brun’s idea for evaluators who have ties to a university, especially those in Extension.

Many of our students have no idea what Extension (or any other agency) is. Any engaged university seeks to tie together the scholarships of teaching, research, and service, and hands-on evaluations are a perfect way to accomplish this.

Lessons Learned: By allowing students to partner with us on evaluations, they not only receive practical experience and make an impact but also get to learn who we are. This can aid in recruiting talented students to work for the agency; we’ve had several ask about careers in Extension.

Hot Tip: Students are going to ask a lot of questions. We can get pretty set in our ways and think we know our agency well. When you have to pause to explain why we do what we do in basic terms, you are forced to reflect on exactly why it is we have been doing things a certain way all these years!

Hot Tip: Our employees just want their voices heard. With students conducting interviews, we get far more coverage than a single evaluator using a sample, and employees are able to feel their opinions matter. Our staff is also much more likely to be open with a student than with a peer.

Lessons Learned: I like to be in total control over my projects, but part of delegating work is letting others do their own thing. By developing goals together early in the project, I can ensure the outcome is as I intended while allowing students to experiment and develop their own processes.

Hot Tip: Often, when a class is over, the student-teacher relationship ends. Keep contact information and follow up with students a year later to let them know the impact of their work. No matter where life takes them, they are your stakeholders and you want them to hold you in high esteem.

Lessons Learned: I’m lucky to get to straddle teaching and Extension. For those who don’t, simply reach out and ask! I’ve been approached by others with projects for students, and I’ve approached others with projects of my own. Everyone has something they need done!

Two years ago, I was the student participating in a class evaluation. Three from my class, including myself, now work for Extension and our report generated $200,000 of funding – the model works!

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! My name is Suzanne Le Menestrel and I am a National Program Leader for Youth Development Research at the 4-H National Headquarters, National Institute of Food and Agriculture, U.S. Department of Agriculture.  4-H is a national youth development organization serving 6 million youth throughout the country. We partner with the nation’s Cooperative Extension system operated by the more than 100 land-grant universities and colleges and with National 4-H Council, our private, non-profit partner. Recent trends in funding have elevated the importance of illustrating impact and accountability for nonformal educational programs.  We were also interested in building capacity for evaluation through the creation of easy-to-use and accessible tools.  We partnered with National 4-H Council, state 4-H program leaders, 4-H specialists and Extension evaluators from around the country to create a national 4-H common measures system that will also enable us to aggregate data across very diverse 4-H programs.

I have learned a number of lessons through the implementation of this new system.

Lessons Learned:

    • Common measures must be developmentally appropriate. Children and youth who participate in 4-H range in age from 5 to 19. Because of concerns about reading levels and developmental appropriateness, we focused the common measures on ages 9 to 18. We also divided the measures into two levels—one for children and youth in grades 4 through 7 and one for youth in grades 8 through 12.
    • Common measures must have strong psychometric properties. As much as possible, we drew from existing measures, but we have also been conducting analyses with both pilot and preliminary data (a minimal reliability-check sketch follows this list).
    • Measures must be applicable to a broad variety of programs. 4-H looks very different from county to county and state to state. We started with the creation of a national 4-H logic model that represents desired program outcomes.

Clipped from http://www.4-h.org/about/youth-development-research/

 

  • Common measures must be available through a flexible, easy-to-use, and robust on-line platform.  This includes the ability to add custom items.
  • Training and technical assistance are key to the implementation of common measures in a complex, multi-faceted organization such as 4-H.
  • Buy-in and support from stakeholders are critical, as is creating an ongoing system for soliciting stakeholder feedback.
  • Such a system cannot be developed without sufficient funding to support the on-line platform, technical assistance, and on-going formative evaluation.
  • Common measures are a flexible product that needs to grow and change with the outcomes of the organization.
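Because checking the internal consistency of a scale on pilot data comes up repeatedly when building common measures, here is a minimal sketch (in Python) of computing Cronbach's alpha for one scale. The response matrix is made up, and the sketch illustrates the general kind of reliability analysis described above rather than the 4-H team's actual procedure.

```python
# Minimal Cronbach's alpha sketch for one common-measure scale.
# Rows are respondents, columns are the scale's items; data are made up.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item var) / total var)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

pilot_responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(pilot_responses):.2f}")
```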

Rad Resource:

Check out this article written by Pam Payne and Dan McDonald on using common evaluation instruments.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Melissa Cater, and I am an assistant professor and evaluation specialist at Louisiana State University AgCenter. I am also serving as Chair of the AEA Extension Education Evaluation Topical Interest Group (EEE-TIG) this year. The EEE-TIG provides a professional development home for Extension professionals who are interested in program evaluation; we also welcome other individuals who are evaluating non-formal education outreach programs in a community setting. The EEE-TIG goals provide a guiding framework for the membership.

Hot Tip: Our TIG has provided a place for Extension professionals to become more collaborative. If you are searching for a way to become more involved in evaluation, join a TIG. The networking opportunities are endless.

This week’s aea365 blog posts are sponsored by the EEE-TIG. I invite you to learn more about who we are through this week’s series of posts. You’ll see that we have a range of interests within our membership, from evaluating agricultural programs, to teaching evaluation, to supporting participatory community research, to building evaluation capacity.

Hot Tip: You can learn even more about the EEE-TIG and the varied interests of our members by viewing our archived blog posts.


Clipped from http://www.joe.org

Hot Tip: Want to learn about the diversity of programs that are being evaluated in Extension? Check out the Journal of Extension to see the breadth of topics.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Sheila B. Robinson, aea365’s Lead Volunteer Curator. I teach Program Evaluation Methods at the University of Rochester’s Warner School of Education and am a grant coordinator for Greece Central School District in Rochester, NY. In my spare time, I read, learn, and blog about evaluation, and it’s no exaggeration to say I never come away from my computer, a book, or an article on evaluation without learning something. “Ancora imparo” (“I am still learning”) is attributed to Michelangelo in his late 80s!

As I’m once again preparing my syllabus, I’m reflecting on a wealth of free and low-cost evaluation resources. Since I always want to improve the course for my students, I’m looking for new readings and activities to ensure my course is up to date and that my students learn about the “big names” and big issues in the evaluation community today.

Lesson Learned: I’m convinced evaluators are the most collegial, collaborative, and generous people ever, and I’m always impressed with how many of them are willing to share their knowledge and resources with everyone.

Hot Tips:

1.) Fill your toolbox! Susan Kistler, AEA’s Executive Director Emeritus, has contributed numerous aea365 posts on free or low-cost technology tools. Search her name, or glance through the aea365 archive for links and descriptions.

2.) Join the conversations! Mentioned before, but definitely worth another look: AEA’s LinkedIn discussions and EvalTalk – two places where I’ve learned about the multitude of websites, textbooks, and articles on evaluation, many of which have made their way into my course. Here’s a link to a discussion on “Comprehensive set of websites on evaluation and research methods.” I recently asked EvalTalk for some “must-read journal articles for program evaluation students” and got great responses; some people even sent me their syllabi! Cool trick: I’ve copied rich EvalTalk and LinkedIn discussions on a topic of interest (e.g., pre- and post-testing) to share with students as examples of the types of discussions evaluators have in “the real world” of evaluation work.

3.) Cull from collections! Who doesn’t love one-stop shopping? My favorite place for great collections is AEA’s site. Check out everything under the Reading, Learning, and Community tabs and all the links on the main page. Check out Evaluator and Evaluation blogs and evaluators on Twitter. Chris Lysy maintains a large collection of evaluation-related blogs at EvalCentral. Gene Shackman has amassed probably the largest collection of Free Resources for Program Evaluation and Social Research Methods.

4.) “Extend” your learning! Google “evaluation” + “extension” and find a universe of free tools and resources from university Extension programs. Here are just a few: University of Wisconsin-Extension, Penn State Extension, NC Cooperative Extension, and K-State Research and Extension. I stumbled upon this collection at the University of Kentucky’s Program Development and Evaluation Resources.

Apprendimento felice! (Happy learning!)

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings from the Last Frontier. I’m Alda Norris, webmaster for the Alaska Evaluation Network (AKEN) and evaluation specialist for the University of Alaska Fairbanks Cooperative Extension Service (CES).

The faculty and staff I work with at CES are experts in a variety of fields, from horticulture, entomology and forestry to economics, nutrition and child development. That adds up to quite an interdisciplinary organization! Our diversity makes for fantastic collaborations, as well as complicated syntheses. Lucky for me, my PhD is in interpersonal communication, which applies across the board.

Lessons Learned: Ask people to tell you the inspiration behind their projects. Every group has a story to tell. What common goals bring these people together? Inquiring about the “why” and not just the “what” of a program really benefits capacity building efforts. I got to know CES better while writing a Wikipedia entry. Hearing and reading about the contributions Extension has made in Alaska since the 1930s deepened my understanding of what led up to each of our programs’ current priorities and logic models.

  • Help yourself with history. Too often we are mired in a static view of where an organization is now, rather than having an appreciation for how it has changed, and continues to change, over time. Even in a “young” state like Alaska, there is rich historical data we can learn from.
  • Boost your evaluation planning by gathering information on your/the client organization’s “story” from a variety of sources. Talk to emeritus professors, compare the org chart of today to past decades, and comb through newspaper archives. Becoming familiar with past waves of change is very helpful in understanding the meaning behind current missions, goals and structures (and people’s attachments to them).

Hot tip: Communicate about communication! Add a question about communication preferences to your next needs assessment. Don’t assume you know what level of technology and form(s) of interaction your colleagues and clients are comfortable with. Before you do a survey, figure out what modes of communication the target population values. For example, if oral history is a large part of a sample group’s culture, how well will a paper and pencil form be received?

Rad Resources:

  1. The National Communication Association (NCA) can help you step up your message design game. Take advantage of free advice from experts on verbal and nonverbal communication by reading NCA’s newsletter, Communication Currents.
  2. AnyMeeting is a free tool that you can use to reach a wider audience. With it, you can host online meetings and make instructional videos, both of which are really handy when working in a geographically diverse setting. AnyMeeting also has screen-share clarity in its recordings that Google Hangouts lacks.

The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Melissa Cater, and I am an assistant professor and evaluation specialist at Louisiana State University AgCenter.

The purpose of the Evaluation CoP is to support people within the Cooperative Extension system who are interested in evaluating Extension programs. While the focus of our group is supporting Extension evaluation efforts, the information is applicable to many program evaluators. Evaluators working in other nonformal educational settings may find useful resources as well as a place for engagement with like-minded professionals.  AEA Topical Interest Groups (TIGs) or new CoPs may consider a similar approach of engaging core members to build synergy with a larger group.

Rad Resource:

One of the resources provided by the Evaluation CoP is a list of 97 frequently asked questions. Answers to questions related to topics like evaluation design and data analysis are provided. Links to archived webinars on evaluation topics ranging from impact reporting to the use of photolanguage are also accessible.

Clipped from http://www.extension.org/program_evaluation/faqs?page=1

Rad Resource:
The core mission of a CoP is to provide a place to share information and experiences among group members. Providing structures to advance group communication is key to group success. The Evaluation CoP offers members an opportunity to connect via social media.

Our CoP Facebook page provides a platform for members to connect, to pose questions to the group, and to share success stories. Twitter offers another means of connection; interested individuals may follow us at @EvalCoP. Our group also sponsors a blog, the eXtension Evaluation Community Blog. While our blog is in its infancy, our vision is for contributors to supply more in-depth evaluation information.

Lessons Learned:

Our Evaluation Community of Practice story illustrates Wenger’s (1998) principles of CoP structure. These principles, mutual engagement, joint enterprise, and shared repertoire (pp. 72-73), are easily applicable to AEA TIGs or to any other group of evaluators who share a common evaluation interest.

  • Mutual Engagement. Shared work has helped our members build relationships that span the nation. This social network forms the foundation of our work as a CoP. We have discovered both our individual and shared strengths and use these to benefit the CoP’s work.
  • Joint Enterprise. By striving together to improve our evaluation practice within the Cooperative Extension system, we have built a shared understanding of our work. We recognize the commonalities we share with other evaluators as well as the unique aspects of our craft as Extension evaluators.
  • Shared Repertoire.  Our ongoing work as a collaborative community, creating resources like the frequently asked questions and webinars, has resulted in shared work and shared products.

In the words of Henry Ford, “Coming together is a beginning. Keeping together is progress. Working together is success.”

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hi! I’m Sarah Baughman, Evaluation and Research Leader for eXtension. eXtension is the virtual presence of the Cooperative Extension System. We work with faculty from land-grant universities to provide unbiased, research-based educational resources across a wide range of topics, including feral hogs, fire ants, and families, through our 70 communities of practice. A major aspect of my daily work is assisting communities of practice with evaluation efforts. Although most faculty are familiar with evaluation basics, the virtual work environment tends to confound them.

Hot Tip – Back to Basics – When working with faculty on evaluations for programs that involve social media and/or web-based resources, I take them back to the basics. I help them situate their social media and virtual “tools” into the context of their programs by asking lots of questions that point back to evaluation basics such as programmatic mission, purpose, and goals. Why are they tweeting? What do they hope to achieve by integrating social media into their programs?

Lesson Learned – Capacity building is an ongoing process. The landscape of our work changes rapidly, with new faculty on board, new technologies developed, and new communities of practice forming. As one faculty member embraces evaluation as a critical component of their work, another community of practice changes leadership, necessitating renewed capacity-building efforts.

Lesson Learned – Another key for working with faculty immersed in their disciplines is to show them how evaluation methodologies are similar to their research methods. The purpose of evaluation is different from that of research, but the methodologies are the same.

Rad Resource – Google+ Hangouts have proven to be an invaluable resource for one-on-one or group meetings. Hangouts are free video conferences that allow screen sharing and are mobile-device friendly, so busy faculty can meet from almost anywhere. Screen sharing allows me to walk through tools with them or troubleshoot issues that are difficult to describe in other contexts.

Rad Resource – There is a lot of information on social media marketing and measurement, but it is primarily aimed at for-profit businesses. In the world of education and non-profits, the goals and outcomes can be fuzzy. Measuring the Networked Nonprofit by Beth Kanter and Katie Delahaye Paine does an excellent job of describing the importance of measuring social media and, more importantly, how measurement can help change practice.

Rad Resource – One of the eXtension communities of practice is devoted to helping Extension professionals evaluate programs and demonstrate programmatic impacts. Their work is helpful to non-formal or informal educators, non-profits and anyone working on evaluation in a complex, decentralized environment. Connect with us at @evalcop or blogs.extension.org/evalcop/.

Clipped from http://blogs.extension.org/evalcop/

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

