AEA365 | A Tip-a-Day by and for Evaluators

Category: Extension Education Evaluation

My name is Pam Larson Nippolt and I am a University of Minnesota Extension Evaluation and Research Specialist working with a team of program evaluators in 4-H youth development programs.

Lesson Learned: Monitoring enrollment data is a data-related activity that typically falls under the umbrella of program management. It enables program leaders to pay attention to aspects of program implementation via inputs or outputs. What is monitored can be quite distinct from what is evaluated, but it can still inform the focus of an evaluation or the measurement of an outcome.

When planning with program teams, I use the example that monitoring is similar to setting a metronome while playing piano: it keeps a steady beat going to help the pianist stay in tempo. Evaluation, on the other hand, is the assessment the pianist and audience make about the music created.

Lesson Learned: Collecting, maintaining, and analyzing data for monitoring purposes is an investment of time and resources that can pay dividends for evaluation in the long run!

Enrollment databases, used in many large youth development programs, are excellent data sources for program monitoring, but they are often overlooked. For example, in 4-H, program data (shown below) revealed that the region containing the largest metropolitan area (Central Region) enrolled more youth from farms and small towns than had been believed.

[Figure: 4-H enrollment data by region and residence type]

This finding seemed counterintuitive and led to further investigation of the data. We discovered that many youth living in the city and participating in the program were not in the enrollment database because of a particular enrollment practice.

Monitoring the enrollment data led to an awareness of the need to make the enrollment process more accessible for all youth and families. Program staff may not have identified the scale of this discrepancy without this type of monitoring.
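If your enrollment system can export a simple table, this kind of monitoring can start small. Here is a minimal sketch in Python with pandas (the column names and values are hypothetical stand-ins, not an actual 4-H database) showing how one might cross-tabulate enrollment by region and residence type to surface patterns like the one described above.

    import pandas as pd

    # Hypothetical enrollment export: one row per enrolled youth.
    enrollment = pd.DataFrame({
        "region": ["Central", "Central", "Northwest", "Southeast", "Central"],
        "residence": ["Farm", "Small town", "Farm", "City", "Farm"],
    })

    # Counts and row shares by region; an unexpectedly high farm/small-town
    # share in a heavily urban region is the kind of pattern worth a closer look.
    counts = pd.crosstab(enrollment["region"], enrollment["residence"])
    shares = pd.crosstab(enrollment["region"], enrollment["residence"],
                         normalize="index").round(2)
    print(counts)
    print(shares)

Running a table like this after each enrollment cycle keeps the “metronome” ticking without requiring a full evaluation study.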

Hot Tip: Get started by “whetting the appetite” of your program partners for data use with available data about the program and participants. Build appealing and visually engaging graphics to make using the data rewarding for staff who don’t typically attend to data. Ask questions and listen to how they make sense of the data. This practice will reveal what can be monitored “right now” for team learning.

Rad Resource: Consider investing in making your enrollment database more usable and accessible to staff by adding trend and comparison features. Interfaces can be designed for your enrollment software that provide a dashboard with menus to track changes across program years and make geographic comparisons. Think like an interface designer to create tools and reports that will help program staff love their data!
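As a sketch of the kind of trend view such a dashboard might surface, the following hypothetical example (Python with matplotlib; the regions and counts are invented for illustration) plots enrollment across program years for two regions.

    import matplotlib.pyplot as plt

    years = [2011, 2012, 2013, 2014]        # program years
    central = [4200, 4350, 4100, 4600]      # hypothetical enrollment counts
    northwest = [2100, 2150, 2300, 2250]

    fig, ax = plt.subplots()
    ax.plot(years, central, marker="o", label="Central")
    ax.plot(years, northwest, marker="o", label="Northwest")
    ax.set_xticks(years)
    ax.set_xlabel("Program year")
    ax.set_ylabel("Youth enrolled")
    ax.set_title("Enrollment by region across program years")
    ax.legend()
    plt.show()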

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE AEA Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Brigitte Scott and I am the Evaluation and Research Specialist for the Military Families Learning Network (MFLN), which engages military family service professionals in high-quality, research-based professional development. The MFLN is part of the Department of Defense (DoD) – U.S. Department of Agriculture / National Institute of Food and Agriculture (USDA/NIFA) Partnership for Military Families and is also part of eXtension—the online branch of America’s Cooperative Extension System (CES). Evaluation for the MFLN comes with a few challenges—leadership, PIs, and staff are spread out across the country; our cooperative funding agreement requires nimble and flexible programming (Hello, developmental evaluation!); and constituents in multiple institutions have different ways of communicating and varied reporting needs.

Lesson Learned: When I first began working with MFLN, I drew heavily on my background in qualitative methods, and all of my mixed methods reports took on a narrative form. However, the reports weren’t getting read. With competitive funding forever at stake in an era of sequestration, this had to change.

Enter data visualization. At AEA 2014, I took a two-day data viz workshop with Stephanie Evergreen. It was invaluable! My reports are still works in progress, but I know now they are being read. How? Folks are actually contacting me with questions! My reports are getting circulated at DoD, which has meant increased awareness of MFLN and a lot of kudos for our work. (It doesn’t hurt come budget time, either.) PIs and staff are utilizing the reports to discuss their progress against dynamic plans of work while focusing on the moving target of program innovation.

Hot tip: CES just celebrated its 100th birthday last year, but make sure your reports aren’t dinosaurs! Your reports—your efforts!—need to be seen and heard to be actionable. I like to think of CES as power to the people. If you agree with me, then give data viz a try to get your points across and support CES in making a difference in counties across the nation.

Hot tip: Data visualization isn’t all about Excel. Arrange key verbal points on a page with clean, clear data. Pull out a thread from a data story and expand it in a text box, or pick up qualitatively where your quantitative story said its piece.

Hot tip: Font and color matter. Use your organization’s visual identity in your reports to let readers know that your report concerns them and their work.
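To make the font-and-color tip concrete, here is a minimal sketch (Python with matplotlib; the hex color, font settings, and data are stand-ins for your own brand guide and report numbers) of applying an organizational palette and decluttering a chart.

    import matplotlib.pyplot as plt

    BRAND_COLOR = "#7a0019"  # stand-in for your organization's primary color
    plt.rcParams.update({"font.family": "sans-serif", "font.size": 11})

    modules = ["Webinar A", "Webinar B", "Webinar C"]  # hypothetical data
    completions = [120, 95, 140]

    fig, ax = plt.subplots()
    ax.bar(modules, completions, color=BRAND_COLOR)
    for side in ("top", "right"):  # declutter: remove unneeded chart borders
        ax.spines[side].set_visible(False)
    ax.set_ylabel("Completions")
    ax.set_title("Professional development completions by module",
                 color=BRAND_COLOR, loc="left")
    plt.show()

Readers who see their own colors and fonts recognize the report as theirs before they read a word.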

Rad resource: Check out AEA’s offerings on data visualization, including workshops, coffee breaks, and of course, the annual meeting data viz sessions. They really are amazing!

Rad resource: Stephanie’s workshops are a must, but so is her book. Check them both out!


Salutations from the Land of the Midnight Sun. My name is Alda Norris. I am an evaluation specialist for the University of Alaska Fairbanks Cooperative Extension Service and webmaster for the Alaska Evaluation Network.

There is a lot of activity packed into a single word when you say “evaluation” or “extension.” Have you ever had someone stare at you blankly when you tell them your job title? My background is in the study of interpersonal communication, and I believe developing skills in providing effective comparisons will boost our ability to explain “what we do” to others.

Hot Tip: A three-step pattern I learned from speech class can be very helpful.

  1. Define the term.
  2. Give examples of what it is.
  3. Give examples of what it is not.

Also, your audience will gain a deeper understanding if the examples you use are surprising. Here’s one from our state sport: Many people hear the term “sled dog” and think of a big fluffy Siberian Husky. However, many purebred Siberians are show dogs not used for mushing. Sled dogs are more commonly of a mixed heritage known as Alaskan Husky, and some are crossed with other breeds like Greyhound or Pointer!

Lesson Learned: Clients may make demands that seem unreasonable because they misunderstand the scope of your expertise or duties. Even worse, they may not seek you out at all because they don’t see a link between your title and what they need. If you’ve ever had someone think evaluation is “just handing out a survey” or extension is “just agriculture stuff” then you know what I mean! Take the time to do some awareness-raising with your target audience.

Hot Tip: Strip away the professional jargon and think about what words the public would use to describe you. Make sure those terms are included on your web page so that search engines will associate you with them. If you haven’t already, add an “About” or “FAQs” page that addresses what you do (and don’t) have to offer.

Rad Resources: Books like Eva the Evaluator are great for providing examples and comparisons of what jobs like “evaluator” entail. Maybe someone will write an Ali the Extension Agent book someday! Also, search the AEA365 archives for related discussions on the difference between evaluation and research, and how to use metaphors to extend understanding.


I am Scott Chazdon, Evaluation and Research Specialist with the Extension Center for Community Vitality, University of Minnesota. I have gained skills in a process known as Ripple Effect Mapping (REM) to document impacts of Extension community development programs. REM sessions often spur important thinking, connections and work.

REM is a participatory group method that engages program and community stakeholders to retrospectively and visually map the chain of effects resulting from a program or complex collaboration. The REM process combines elements of Appreciative Inquiry, mind mapping, group interviewing, and qualitative data analysis. It is a powerful tool for documenting both the intended and unintended results of a program. It is also a way to engage and re-energize program participants and stakeholders around shared goals.

Rad Resource: A more in-depth introduction to REM is available in the University of Minnesota Extension feature article “Ripple effect mapping makes waves in the world of evaluation.”

Lesson Learned: What started as a great method for evaluating community leadership programs morphed into a tool for a broad range of programs.

In Minnesota, an effort to document the impact of urban Master Gardeners working in neighborhoods became a more inclusive and community-driven project that showcased many different outcomes of the program that might otherwise have been overlooked. Here is a thumbnail graphic of the core section of the Ripple Effect Map from that project.

Rad Resource: You can find full-sized REM graphics at the University of Minnesota Extension REM Blog.

[Figure: core section of the Ripple Effect Map from the Master Gardener project]

Lesson Learned: Recruiting the right number and mix of people is crucial in Ripple Effect Mapping. REM groups are larger than focus groups, but if you go beyond 20 people you may not be able to include all voices in the process. I prefer groups of 12 to 20 people.

You can invite both direct participants and non-participant stakeholders. This non-participant group can include funders, local elected officials, other influential figures, or representatives of the media.

Lesson Learned: This mix of people creates an insider-outsider dynamic that sometimes leads to game-changing insights about efforts that have already happened, as well as efforts that could happen! That’s why Ripple Effect Mapping makes sense as a developmental evaluation tool.

Rad Resources: To find out more about REM, the approaches that can be taken, and whether it might be a tool you can use, take a look at these two articles: 1) Journal of Extension — Using Ripple Effect Mapping to Evaluate Program Impact: Choosing or Combining the Methods That Work Best for You and 2) Journal of Extension — Ripple Effect Mapping: A “Radiant” Way to Capture Program Impacts


Hi! I’m Mary Arnold, a professor and 4-H youth development specialist at Oregon State University, where I spend the majority of my time on program evaluation, especially capacity-building efforts. This is my second time preparing a blog post for the EEE-TIG, and the invitation came at a great time, because I have been thinking pretty obsessively these days about how we can do a better job of building Extension program planning and evaluation capacity. One of the conundrums and persistent late-night ponderings that continues to rattle around my mind is how we can do a better job of articulating what is supposed to take place in programs. If we are clear on what is supposed to happen in a program, then we should also be able to predict certain outcomes and understand exactly how those outcomes come to be. This notion of prediction is what underscores a program’s theory.

Because of the emphasis on program planning that swept Extension in the early 2000s, most Extension educators are familiar with logic modeling. The good news is that many educators understand the concepts of inputs, outputs, and outcomes as a result, so the groundwork is in place to think more deliberately about a program’s theory. At the same time, there is scant evidence that logic modeling has resulted in better program planning practices, or that it has led to the achievement of stated outcomes in Extension programs. And there is even less evidence that logic models are developed based on theory.

Lesson Learned: Theory may be implied in logic models, but too often it is understated, assumed, or just hoped for. Program theory is what connects the components of a logic model and makes it run!

Hot Tip! Did you know that there are two important parts to program theory? The first is the program’s theory of change, which describes how the desired change comes about. The second is the program’s theory of action, which refers specifically to what actions need to happen, and at what level of success, for the program to reach its intended outcomes.

Rad Resource! My favorite resource for understanding and developing a program theory of change and action is Purposeful program theory: Effective use of theories of change and logic models (Funnell & Rogers, 2011). This book has loads of great information and practical help on bringing logic models to life with program theory.

Rad Resource! If you are looking for specific theories that are useful for Extension programs, the University of Maryland Extension has a terrific short guide entitled Extension Education Theoretical Framework that outlines how several well-developed theories can inform Extension programming.


My name is Teresa McCoy and I work as an Assistant Director of University of Maryland Extension (UME) with responsibility for evaluation and assessment. Like many Extension colleagues, my evaluation department consisted of me until last year when I was able to hire an additional person. The two of us have responsibility across all program areas in our organization, and I am always looking for technologies that can help save time while adding quality to evaluation efforts.

This past year, I needed to conduct about 25 interviews. I was faced with hours of conversation that would have to be transcribed and analyzed without assistants to help.

Several people suggested that, since I’m at a university, I should “just hire an undergraduate student. It shouldn’t cost you very much that way.” Well, I don’t know about you, but if I have spent countless hours preparing questions, designing a protocol, and contacting and scheduling interviews, I am not about to hand over transcription duties to the first student “off the street.”

Football solved my problem. I know that’s hard to believe, but while I was at a Baltimore Ravens football game party with friends, I was chatting with an education policy analyst. She told me about TranscribeMe!™ and her good experiences with the company and the product.

Lesson Learned: Hot tips and rad resources often are found at unlikely places!

Lesson Learned: After some investigation, I found out that TranscribeMe!™ and NVivo™ have a business partnership. I was able to upload my audio recordings from within NVivo™ (after setting up my account) and the transcripts were then sent back to me and into my NVivo™ project file. In the media options, there is a “purchase transcript” option.

[Screenshot: NVivo media options showing the “purchase transcript” and “check status” buttons]

To clarify, you can use TranscribeMe!™ without using NVivo™. However, given that I was using NVivo™ for my coding, these two products together made the initial work a lot easier and faster. I received some of the transcripts within 24 hours and almost all of them within 48. The transcript quality was excellent. And, as I am sure you’re wondering, the price was good (it is negotiable depending on the quantity of work, the number of speakers, and other options).

You can use the app on your smartphone to record. No special equipment needed.

Rad Resource: The TranscribeMe!™ and NVivo™ partnership. Find TranscribeMe!™ at www.transcribeme.com

Rad Resource: Information from QSR about using NVivo and TranscribeMe!: http://www.qsrinternational.com/products_nvivo_transcription-services.aspx

If you’re a football fan like me, now you have a great excuse to watch the games because you never know when you’ll find your new evaluation rad resource!


My name is Cheryl Peters and I am the Evaluation Specialist for Michigan State University Extension, working across all program areas.

Measuring the collective impact of agricultural programs in a state with diverse commodities is challenging. Many states have an abundance of natural resources like fresh water, minerals, and woodlands. Air, water, and soil quality must be sustained while the fruit, vegetable, crop, livestock, and ornamental industries remain efficient in yields, quality, and input costs.

Extension’s outreach and educational programs operate on different scales in each state of the nation: individual efforts, issue-focused work teams, and work groups based on commodity types. Program evaluation efforts contribute to statewide assessment reports demonstrating the value of Extension Agricultural programs, including public value. Having different program scales allows applied researchers to align to the same outcome indicators as program staff.

Hot Tip: Just as Extension education has multiple pieces (e.g., visits, meetings, factsheets, articles, demonstrations), program evaluation has multiple pieces (e.g., individual program evaluation about participant adoption practices, changes in a benchmark documented from a secondary source, and impact assessment from modeling or extrapolating estimates based on data collected from clientele).

Hot Tip: All programs should generate evaluation data related to identified, standardized outcomes. What differs in the evaluation of agriculture programs is the evaluation design, including the sample and the calculation of values. Impact reports may be directed at commodity groups, legislatures, farming groups, and constituents. State Extension agriculture outcomes can use the USDA impact metrics. Additionally, 2014 federal requirements for competitive funds now state that projects must demonstrate impact within the project period. Writing meaningful outcome and impact statements continues to be a focus of the USDA National Institute of Food and Agriculture (NIFA).

Hot Tip: Standardizing indicators into measurable units has made aggregation of statewide outcomes possible. Examples include pounds or tons of an agricultural commodity, dollars, acres, number of farms, and number of animal units. Units are then reported by the practice adopted. Dollar values estimated by growers/farmers are extrapolated from research values or secondary data sources.
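As an illustration of this kind of standardization and extrapolation, here is a minimal sketch in Python with pandas (the practices, acreages, and per-acre dollar values are hypothetical) that mirrors what a shared spreadsheet formula would do.

    import pandas as pd

    # Reported outcomes: units (here, acres) by practice adopted.
    reported = pd.DataFrame({
        "practice": ["cover crops", "reduced tillage", "IPM scouting"],
        "acres": [1200, 800, 450],
    })

    # Hypothetical research-based value per acre, in dollars.
    value_per_acre = {"cover crops": 25.0, "reduced tillage": 18.0,
                      "IPM scouting": 32.0}

    reported["est_dollars"] = (reported["practice"].map(value_per_acre)
                               * reported["acres"])
    print(reported)
    print(f"Estimated statewide impact: ${reported['est_dollars'].sum():,.0f}")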

Hot Tip: Peer learning with panels that demonstrate scales and types of evaluation, with examples, has been very successful. There are common issues and evaluation decisions across programming areas. Setting up formulas and spreadsheets for future data collection, and sharing extrapolation values, has been helpful for keeping program evaluation efforts going. Surveying similar audiences about both outcomes and program needs has also been valuable.

Rad resource: NIFA provides answers to frequently asked questions, such as when to use program logic models, how to report outcomes, and how logic models are part of evaluability assessments.


Hi! This is Laura Downey with Mississippi State University Extension Service. In my job as an evaluation specialist, I commonly receive requests to help colleagues develop a program logic model. I am always thankful when I receive such a request early in the program development process. So, I was delighted a few weeks ago when academic and community colleagues asked me to facilitate the development of a logic model for a grant proposing to use a community-based participatory research (CBPR) approach to evaluate a statewide health policy. For those of you who are not familiar with CBPR, it is a collaborative research approach designed to ensure participation by communities throughout the research process.

As I began to assemble resources to inform this group’s CBPR logic model, I discovered a Conceptual Logic Model for CBPR available on the University of New Mexico’s School of Medicine, Center for Participatory Research, website.


[Image: CBPR Conceptual Logic Model, clipped from http://fcm.unm.edu/cpr/cbpr_model.html]

Rad Resource: What looked at first glance like a simple conceptual logic model was actually a web-based tool complete with metrics and measures (instruments) to assess CBPR processes and outcomes. Over 50 instruments related to the most common concepts in CBPR, such as organizational capacity, group relational dynamics, empowerment, and community capacity, are profiled and available through the tool. Each profile includes the instrument name, a link to the original source, the number of items in the instrument, the concept(s) originally assessed, reliability, validity, and the population the instrument was created with.

With great ease, I was able to download surveys to measure the CBPR concepts in the logic model that were relevant to the group I was assisting. Given the policy focus of that specific project, I explored the measures related to policy impact.

Hot Tip: Even if you do not typically take a CBPR approach to program development, implementation, and/or evaluation, the CBPR Conceptual Logic Model website might have a resource relevant to your current or future evaluation work.


Hi, I’m Siri Scott, and I work as a Graduate Assistant with the University of Minnesota Extension.

Conducting interviews with youth is one way to gather information for program improvement, especially when you want to bring youth voice into your improvement process. While more time-intensive than a survey, interviews will provide you with rich contextual information. Below is a brief overview of the planning process, as well as tips for conducting interviews.

Hot Tips: Planning Phase

One main decision is whether you will need IRB approval to conduct the interviews. Even when interviews are done for program improvement purposes, it is a good idea to comply with IRB regulations for data practices and the protection of youth. Other major decision points in the planning process include how many individuals you will interview, how you will choose your participants, and how you will collect and analyze your data. In addition, you must decide what type of interview you want to conduct and what its purpose is, and then create an interview protocol (if appropriate).

Hot Tips: Conducting interviews

Here are some tips for conducting interviews:

  • Practice: Test the use of the protocol with a colleague (or a young person who you know well) and ask for feedback about the questions themselves and how they fit together.
  • Space: Find a quiet, secluded space for the interview in a public setting (or in a private home if the young person’s guardian can be present). You don’t want other people overhearing your conversation or your participant being distracted by anything.
  • Warm up: Start the interview with some informal chit chat. This will build your rapport with the participant and ease the participant’s (and your) nerves.
  • Probe: If you are not doing a structured interview, make sure to ask participants to clarify or elaborate on their responses. This will provide you with much better data.
  • Notes: If you are using an audio recorder, don’t trust it (fully). Jot down some quick notes during the interview. You can elaborate on these later if the audio recorder malfunctions.
  • Relax! If you mess up, that’s okay. Also, if you’re nervous and tense, the participant will sense that. Do whatever you can to put the participant (and yourself) at ease.
  • Learn More: A good resource for learning how to conduct interviews is this newly released, comprehensive overview of the interviewing process: InterViews: Learning the Craft of Qualitative Research Interviewing (3rd Edition) by Brinkmann and Kvale (2014).


Howdy! I am Kevin Andrews, a program specialist at Texas A&M AgriLife Extension Service. In addition to my Extension duties, I co-teach a graduate evaluation course at Texas A&M University.

I came across a post from March about students partnering with community agencies to apply their evaluation skills. I’d like to build upon Dr. Brun’s idea for evaluators who have ties to a university, especially those in Extension.

Many of our students have no idea what Extension (or any other agency) is. Any engaged university seeks to tie together the scholarships of teaching, research, and service, and hands-on evaluations are a perfect way to accomplish this.

Lessons Learned: By allowing students to partner with us on evaluations, they not only receive practical experience and make an impact, but they also get to learn who we are. This can aid in recruiting talented students to work for the agency; we’ve had several ask about careers in Extension.

Hot Tip: Students are going to ask a lot of questions. We can get pretty set in our ways and think we know our agency well. When you have to pause and explain in basic terms why we do what we do, you are forced to reflect on exactly why we have been doing things a certain way all these years!

Hot Tip: Our employees just want their voices heard. With students conducting interviews, we get far more coverage than a single evaluator using a sample, and employees are able to feel their opinions matter. Our staff is also much more likely to be open with a student than with a peer.

Lessons Learned: I like to be in total control over my projects, but part of delegating work is letting others do their own thing. By developing goals together early in the project, I can ensure the outcome is as I intended while allowing students to experiment and develop their own processes.

Hot Tip: Often, when a class is over, the student-teacher relationship ends. Keep contact information and follow up with students a year later to let them know the impact of their work. No matter where life takes them, they are your stakeholders and you want them to hold you in high esteem.

Lessons Learned: I’m lucky to get to straddle teaching and Extension. For those who don’t, simply reach out and ask! I’ve been approached by others with projects for students, and I’ve approached others with projects of my own. Everyone has something they need done!

Two years ago, I was the student participating in a class evaluation. Three people from my class, including me, now work for Extension, and our report generated $200,000 in funding – the model works!

