AEA365 | A Tip-a-Day by and for Evaluators


I’m Tamara Young, an associate professor in Educational Evaluation and Policy Analysis at North Carolina State University, where I teach evaluation theory and practice in education. Today, I’m going to discuss the American Statistical Association’s (ASA) Statement on p-values, which responds to the decades-old, highly contentious debate about null hypothesis statistical significance testing (NHSST). I will also describe the implications of the debate, and of the ASA’s response, for the evaluation community.

The Debate

The NHSST process is flawed, and “misconceptions and misuse” of NHSST are widespread. As Ronald Wasserstein and Nicole Lazar explain in their editorial on the context, process, and purpose of the ASA statement on p-values, NHSST has faced serious critique for decades. In recent years, Tom Siegfried has called attention to the flaws of NHSST, describing the process as “science’s dirtiest secret” and concluding that “statistical techniques for testing hypotheses …have more flaws than Facebook’s privacy policies.” The journal Basic and Applied Social Psychology went so far as to ban NHSST.

Hot Tip: The Current Resolution

In 2016, the American Statistical Association issued a statement delineating six principles (quoted directly below) that should guide the use and interpretation of p-values, with the aim of improving practice and moving us into a post-“p < .05” era:

  1. “P-values can indicate how incompatible the data are with a specified statistical model.”
  2. “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.”
  3. “Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.”
  4. “Proper inference requires full reporting and transparency.”
  5. “A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.”
  6. “By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.”
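Principles 2 and 5 are easy to see in a quick simulation: with a large enough sample, even a trivially small true difference produces a tiny p-value while the standardized effect size stays negligible. The sketch below uses a large-sample z-test on hypothetical data (the group means, sample size, and seed are illustrative assumptions, not from the ASA statement):

```python
import math
import random

random.seed(42)

def two_sample_z(a, b):
    """Two-sided p-value from a large-sample z-test, plus Cohen's d."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (mb - ma) / math.sqrt(va / na + vb / nb)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    d = (mb - ma) / math.sqrt((va + vb) / 2)  # standardized effect size
    return p, d

n = 100_000
control = [random.gauss(0.00, 1.0) for _ in range(n)]
treated = [random.gauss(0.03, 1.0) for _ in range(n)]  # trivially small true effect

p, d = two_sample_z(control, treated)
print(f"p = {p:.1e}, Cohen's d = {d:.3f}")  # p is tiny, d is negligible
```

Flipping the scenario (a meaningful effect with only a handful of cases per group) will often yield p > .05, which is exactly why a p-value alone says little about either the size or the importance of an effect.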

Implications for the Evaluation Community

Evaluators who use NHSST need to become more familiar with the debate about it and read the ASA’s six guiding principles. Instructors of quantitative methods need to discuss the debate and give students opportunities to reflect critically on the ASA’s principles and apply them to data analysis simulations. Additionally, the evaluation community, and journal editors in particular, need to encourage the use of other methods (e.g., Bayesian methods) that can supplement or serve as alternatives to NHSST. Lastly, funders, decision-makers, and evaluators need to consider the ASA principles when designing evaluations and when interpreting and using results.

Rad Resources:

  • Statistical errors: P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume.

  • Odds Are, It’s Wrong: Science Fails to Face the Shortcomings of Statistics

  • The ASA’s Statement on p-Values: Context, Process, and Purpose, which includes the ASA statement, online supplemental materials related to NHSST, and alternatives to NHSST.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, I’m Heather Krause, a mathematical statistician, data scientist, and founder of Datassist. After many years collecting data, conducting analyses, and designing data communication materials across the globe, it became really clear to me that data is never neutral, and that much of what we consider to be an objective science with “best practices” is often simply one worldview among many. This is true even of something that appears genuinely value-free, like math.

Lessons Learned:  Math is math, right? Two is always two.  Except when it’s not.

Let’s say we’re doing some research in the education sector and we want to talk about how average class size affects outcomes. We study three classes. Take a look at the image below and calculate the average class size.

The average class size at this school depends entirely on who you ask.

Even though there is nothing challenging or complex about the math involved in this question, we still can’t count on objective data analysis. Why? Because the “correct” answer depends on your worldview. Let’s look more closely.

If we ask the students how many students are in a class, we get the following answers:

[Image: students’ answers when asked how many students are in their class]

Now let’s ask the professors how many students are in a class.

[Image: professors’ answers when asked how many students are in their class]

The first professor reports one student.  The second professor says there are two students in a class, and the Class Three professor says there are four students per class.

The average class size depends entirely on whose point of view you’re taking. That is, where you put the locus of power (or centre of power) in your analysis — on the professors or on the students.

How often do we automatically put the centre of power in a specific place and simply assume that it’s correct? (Not that it’s necessarily incorrect — but it’s not the only option.)

Let’s look at the math.

[Image: the arithmetic behind the two points of view]

Both answers are technically correct. The math is sound. But how does that work? The answer to the question “what’s the average class size?” depends on whether you’re a teacher or a student. And that’s why objective data analysis isn’t really a thing — because there will always be assumptions you need to make, and making assumptions removes objectivity.
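The arithmetic is easy to check. Using the class sizes described above (one, two, and four students), a quick sketch:

```python
# Class sizes as reported by the three professors: one, two, and four students
class_sizes = [1, 2, 4]

# Professors' point of view: one report per class
professor_avg = sum(class_sizes) / len(class_sizes)        # (1 + 2 + 4) / 3

# Students' point of view: each student reports the size of their own class,
# so larger classes contribute more reports
student_reports = [size for size in class_sizes for _student in range(size)]
student_avg = sum(student_reports) / len(student_reports)  # 21 / 7

print(professor_avg)  # about 2.33
print(student_avg)    # 3.0
```

The gap between the two averages grows with the spread in class sizes; both numbers are correct answers to differently framed questions.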

Hot Tip: Every time you do an analysis or a calculation with your data, take five minutes and ask yourself:

  • Where have I put the center of power in this calculation?
  • Whose perspective could change this calculation?
  • Can I come up with an entirely different yet also correct answer?

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members.

 


Greetings AEA community, I’m Pei-Pei Lei, a biostatistician in the Office of Survey Research at the University of Massachusetts Medical School. Have you been looking to expand your skill set in statistical programming? Have you wondered if R is the appropriate statistical software package for your needs? The purpose of this post is to help you decide whether R is right for you and, if so, how you can get started using it.

R may be the right tool if you:

  • Need to manage and/or analyze quantitative data
  • Are looking for a free alternative to commercial software packages, such as SAS, SPSS, and Stata
  • Don’t mind writing computer code – does print("Hello, world!") look easy enough to you?
  • Want to create nice-looking and informative figures and graphics (see this website for example)

If you’re not sure, here are some places for you to get a feel for R language:

  • TryR: This website provides online interactive step-by-step practice on the webpage
  • DataCamp: This website provides online interactive step-by-step practice (more material than TryR)

Hot Tips:

The following is a list of MOOCs (Massive Open Online Courses) that can help you learn R for free (or pay a fee for a verified certificate):

  • R programming on Coursera: A 4-week course covering basic R programming. It provides a weekly quiz and a final project for you to test your skills. Good for beginning to intermediate users.
  • Introduction to R for Data Science on edX: A self-paced 4-week course covering basic R programming. It uses DataCamp for class materials and exercises. Good for beginners.
  • R Basics – R Programming Language Introduction on Udemy: A self-paced course that covers basic setup, such as downloading the software, and basic coding. Good for beginners.
  • Data Analysis with R on Udacity: This course takes about 2 months to finish (it’s also part of the Data Analyst nanodegree program). Its tutorial videos show coding in RStudio. Good for beginning to intermediate users.

You can also install the Swirl R package to learn R in R. It gives you interactive instructions for different topics. This is good for intermediate users.

Rad Resources:

  • R-bloggers: This is a repository of R-related articles, including tutorials. You can subscribe to the mailing list to receive the latest articles.
  • Stack Overflow: This is a forum where you can post your questions and get answers, or even better, provide answers to others’ questions!

Lessons Learned:

Don’t be intimidated by the many choices you have in learning R. They are the means to reach your goal. So pick one that you like and get started!



Hi there – my name is Jennifer Catrambone and I am the Director of Evaluation & Quality Improvement at the Ruth M Rothstein CORE Center in Chicago, Illinois. That’s an Infectious Disease Clinic specializing in HIV/AIDS. I’m presenting on my favorite nerdy topic – the what and how of Nonparametric Statistics. I’ve taught both parametric and nonparametric stats at the graduate and undergraduate levels and have done stats consulting. Hang on!! Before you go running away because I used the word Statistics a bunch of times already, let me get a couple more lines out.

It hurts my soul (not like sick puppies or mullets, but still…) when people just reach for the parametric stats (e.g., ANOVAs, t-tests) without thinking carefully about whether those are the best ones for their data. Why? Because those tests, the parametric ones we all spent so much time learning in school, are sometimes wildly inappropriate, and using them with certain very common kinds of data actually decreases your likelihood of finding that sought-after p < .05. The trick is to match your data set, with its imperfections and unpredictable outliers, to the right kind of stats.

Lesson Learned: So, what situations require nonparametric statistics? They can be broken down into a few major categories:

  1. The data set is very small. Sometimes that N just does not get to where we want it to be.
  2. The subgroups are uneven. Perhaps there are many pretests and very few post tests, or maybe you let people self-select which group they were in and no one chose the scary sounding one.
  3. The data is very skewed. Bell Curve, Schmell Curve.
  4. Your variables are categorical or ordinal.

There aren’t a lot of resources on nonparametric statistics out there. College and graduate statistics textbooks offer minimal information on nonparametric stats, focusing disproportionately on chi-square tests while rarely including the post hoc tests that should follow them. One excellent nonparametric stats resource, though published in 1997, is Marjorie Pett’s “Nonparametric Statistics for Health Care Research.” The popular stats texts by Gravetter and Wallnau also include decision trees for nonparametric stats that are incredibly useful for determining which test to use.

OK – so all of that being said, the bad news is that many of us just use parametric stats because that’s what we know, regardless of the data, and accept that with our messy data, effects will be harder to come by. The great news is that that’s not necessary. Nonparametric tests take all of that into account and slightly modify their parametric counterparts (e.g., by working with medians and ranks instead of means), so that things like skew and tiny samples are no longer effect-hiding problems.
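To make the rank-based logic concrete, here is a bare-bones Mann-Whitney U test, the usual nonparametric stand-in for an independent-samples t-test. This is a teaching sketch with made-up numbers; it uses a normal approximation without a tie correction, so for real analyses reach for a statistics package:

```python
import math

def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U statistic and two-sided p-value via the normal approximation."""
    n1, n2 = len(x), len(y)
    r = average_ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # no tie correction
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

# One wild outlier would inflate the variance and hide the effect from a t-test;
# ranks are indifferent to how extreme the outlier is.
group_a = [1.2, 1.9, 2.1, 2.4, 2.8]
group_b = [3.1, 3.6, 4.0, 4.2, 250.0]
u1, p = mann_whitney_u(group_a, group_b)
print(u1, round(p, 3))
```

Because every value in group_a ranks below every value in group_b, U comes out at its minimum (0) and the test flags a clear difference, outlier and all.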

Want to learn more? Register for Nonparametric Statistics: What to Do When Your Data Breaks the Rules at Evaluation 2015 in Chicago, IL.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2015 in Chicago, IL. Click here for a complete listing of Professional Development workshops offered at Evaluation 2015.

Hello! I’m Kathy McKnight, Principal Director of Research, Center for Educator Effectiveness at Pearson.

Today I completed my annual 2-day introductory workshop on Quantitative Methods, which I’ve offered at AEA’s annual conference every year since….well, I’ve lost track. Over the years, I’ve observed a lot of evaluators who participate in my workshop, hungry to learn something about statistics and quantitative methods.

Lessons Learned: A few observations to share:

  1. It’s difficult for program evaluators to find quality workshops and other educational opportunities for continuing their education in quantitative methods. I find this is the case at the introductory, intermediate, and advanced levels alike, unless you’re located within a university (and even then, it’s not guaranteed you can find what you need).
  2. I’m further convinced each year that training in statistics is not enough: evaluators need training in measurement and in research methods/evaluation design as well. Knowledge of any one of these elements alone is not sufficient. I’ve noticed that the greatest engagement in my workshop tends to be around methodological and philosophy-of-science issues: how program evaluations are carried out, and what we can learn from them. Studying statistics helps bring out these issues; it’s not only about what tools are available, but how we can best use them, given our evaluation goals. These issues are what attracted me to program evaluation and keep me interested in this work, and that seems to be the case for many others.

Hot Tips: For those interested in furthering their knowledge and skills in quantitative methods, AEA has a Quantitative TIG, and the good news is, we don’t bite! It’s a supportive, engaged group of individuals who share a strong interest in the methods by which we conduct evaluations, how we measure constructs we care about, and how we model relationships between those variables quantitatively. New members could help us identify ways to provide more and better training to our membership, and share resources. Additionally, AEA offers e-Studies (I offered one this past spring on basic inferential statistics) and “coffee break webinars” (brief presentations of a specific topic — I offered one on descriptive statistics). These are just a few of the online resources available to our membership*. The annual meeting also offers 1-day, 3-hour and 90-minute workshops, and a host of presentations focused on quantitative methods. These are well worth checking out as part of your continued education in the broad area of quantitative methods.

Rad Resource: Don’t forget your friend the internet — there are countless YouTube videos and statistics, measurement, and research methods websites that provide tutorials as well as a multitude of resources.

I wish you all a productive, educational conference this year in Washington DC! Please do check out the presentations from the Quantitative TIG.

*Coffee break webinars, e-Study workshops, and Professional Development workshops at the conference are paid content.



I’m Jennifer Ann Morrow, a faculty member in Evaluation, Statistics, and Measurement at the University of Tennessee. I created a 12-step process evaluators can follow to ensure their data are clean prior to conducting analyses.

Hot Tip: Evaluators should follow these 12 steps prior to conducting analyses for evaluation reports:

1. Create a data codebook

a. Datafile names, variable names and labels, value labels, citations for instrument sources, and a project diary

2. Create a data analysis plan

a. General instructions, list of datasets, evaluation questions, variables used, and specific analyses and visuals for each evaluation question

3. Perform initial frequencies – Round 1

a. Conduct frequency analyses on every variable

4. Check for coding mistakes

a. Use the frequencies from Step 3 to compare all values with what is in your codebook. Double check to make sure you have specified missing values

5. Modify and create variables

a. Reverse code (e.g., from 1 to 5 to 5 to 1) any variables that need it, recode any variable values to match your codebook, and create any new variables (e.g., total score) that you will use in future analyses

6. Frequencies and descriptives – Round 2

a. Rerun frequencies on every variable and conduct descriptives (e.g., mean, standard deviation, skewness, kurtosis) on every continuous variable

7. Search for outliers

a. Define what an outlying score is and then decide whether or not to delete, transform, or modify outliers

8. Assess for normality

a. Check to ensure that your values for skewness and kurtosis are not too high and then decide on whether or not to transform your variable, use a non-parametric equivalent, or modify your alpha level for your analysis

9. Deal with missing data

a. Check for patterns of missing data and then decide if you are going to delete cases/variables or estimate missing data

10. Examine cell sample size

a. Check for equal sample sizes in your grouping variables

11. Frequencies and descriptives – The finale

a. Run your final versions of frequencies and descriptives

12. Assumption testing

a. Conduct the appropriate assumption analyses based on the specific inferential statistics that you will be conducting.
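Several of these steps can be sketched in a few lines of code. The snippet below walks through steps 3 through 8 on a small, made-up set of responses to a 1–5 survey item (the values, the 999 missing-value code, and the out-of-range entry are all illustrative assumptions):

```python
import statistics
from collections import Counter

# Hypothetical item scored 1-5; per the codebook, 999 means "missing"
responses = [4, 5, 3, 999, 4, 2, 5, 4, 1, 999, 3, 4, 55]

# Steps 3-4: frequencies expose coding mistakes (55 is outside the codebook's range)
freq = Counter(responses)
coding_errors = sorted(v for v in freq if v not in {1, 2, 3, 4, 5, 999})

# Step 5: recode the error to missing, then set missing aside for descriptives
clean = [v for v in responses if v in {1, 2, 3, 4, 5}]

# Steps 6-8: descriptives, including a moment-based skewness check
mean = statistics.mean(clean)
sd = statistics.stdev(clean)
skew = sum((v - mean) ** 3 for v in clean) / (len(clean) * sd ** 3)

print(coding_errors)         # [55]
print(mean, round(skew, 2))  # mean of 3.5 with mild negative skew
```

The same pattern (frequencies, recode, re-run descriptives) repeats through the later rounds of the 12 steps; the point is that each pass is scripted, so it can be re-run and audited.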

Lesson Learned: One statistics course is not enough. Utilize all the great resources that AEA offers to gain additional training in data analysis.

Rad Resources:

Want to learn more from Jennifer? Register for her upcoming AEA eStudy: The twelve steps of data cleaning: Strategies for dealing with dirty data and her workshop Twelve Steps of Data Cleaning: Strategies for Dealing with Dirty Evaluation Data at Evaluation 2013 in Washington, DC.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2013 in Washington, DC. Click here for a complete listing of Professional Development workshops offered at Evaluation 2013.

 


Greetings, I am Cindy Weng, a biostatistician II in the Pediatrics Research Enterprise in the Department of Pediatrics at the University of Utah. This post was written together with my colleagues Chris Barker, SWB project manager, and Larry George, statistician at Problem Solving Tools.

I learned about this methodology through a project assigned by ASA Statistics Without Borders (SWB) in 2011. The goal of the project was to analyze under-5 (U5) mortality of children before (“baseline”) and after (“endline”) humanitarian aid was given at Afghan refugee camps in Pakistan. Survival analysis was used to estimate the probability distribution of age at death from current status data and from admissible age-at-death data. Inadmissible ages at death placed the date of death after the survey dates!

Because the International Rescue Committee survey data contained inadmissible ages at death, the Kaplan-Meier nonparametric maximum likelihood estimator was used, along with estimators from current status data only.

Tips:

  • Maximum likelihood and least squares estimators differ. We estimated survivor functions from the baseline and endline surveys. “MLE” and “LSE” denote maximum likelihood and least squares estimates. They don’t always agree, because they are different approaches to estimation: LSE does not model the noise, so if the noise is not uniform across the sample, LSE can be misleading, whereas MLE takes the noise into consideration. Our MLE estimates, computed from current status data, agreed fairly well with the Kaplan-Meier estimators from the admissible ages at death.

Lessons learned:

  • Survey data is not always what is expected. Surveys should have cross-checking validation opportunities. Current status data provided the opportunity to make two estimates of survivor functions.
  • Expect unexpected outcomes. The baseline U5 estimates are over 10%, and the endline U5 estimate is approximately 4%; Pakistan’s country-wide U5 rate is 8.7%. The endline U5 estimate’s standard deviation is less than 0.5%. The apparent reduction in U5 mortality appears to be primarily a reduction in deaths after the first year of life; infant mortality was almost 4% both before and after.
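For readers curious what the Kaplan-Meier product-limit estimator looks like in practice, here is a minimal sketch. The ages and censoring flags below are invented for illustration; they are not the survey’s data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survivor function S(t).

    times: observed ages (death or censoring); events: 1 = death observed,
    0 = censored (still alive at survey). Returns (t, S(t)) at each death time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        same_t = [e for tt, e in data if tt == t]
        deaths = sum(same_t)
        if deaths:
            s *= 1 - deaths / n_at_risk   # product-limit step
            curve.append((t, s))
        n_at_risk -= len(same_t)          # deaths and censorings leave the risk set
        i += len(same_t)
    return curve

# Toy data: ages (years) at death or censoring for five children
ages = [1, 2, 2, 3, 4]
died = [1, 1, 0, 1, 0]  # one age-2 child and the age-4 child were censored
for t, s in kaplan_meier(ages, died):
    print(t, round(s, 2))
```

Censored observations still contribute to the risk set until their censoring age, which is what lets the estimator use incomplete (current status) information instead of discarding it.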



The American Evaluation Association is celebrating Statistics Without Borders Week. The contributions all week come from SWB members.


Greetings, I am Mark Griffin. At the time of writing this article, I am fortunate enough to be in the middle of a world trip. Last week I was in Fiji (my third trip there) chairing the Pacific Conference for Statistics and Information Systems, part of a rapidly developing workshop program that I initiated. This week I am in Adelaide, Australia, presenting lessons learned in Fiji at the Australian Statistical Conference. Last night I held the first event for our society’s Section for International Engagement, which I founded earlier this year. Tomorrow, I fly to North Korea to present at an event co-organised by Pyongyang University of Science and Technology and Statistics Without Borders.

Working with friends and colleagues in developing nations is a true passion of mine. I have also set up an Australian NGO to deliver further training and consulting.

So what advice would I give to like-minded colleagues who have a similar passion?

Tips:

  • Find a mentor (or several). Working in developing countries is incredibly rewarding, but it can also be incredibly demanding. Line up people who can support you through the emotional challenges involved, bounce ideas back and forth with you, and celebrate with you as you enjoy the fruits of your labour.
  • Make strong partnerships. The concept of partnership is a matter of humility, patience, and acceptance. As an outsider you might have superior academic knowledge, and yet your colleagues will best know what’s happening within their country, the needs and constraints, and will generally be the people who have made the largest personal commitment. Strong partnership requires constant communication back and forth about expectations, underlying motivation, and mutual appreciation.
  • Long-term sustainability is difficult. Many a kind-hearted person has gone in for a short duration and set up some potentially beneficial services (such as housing or healthcare facilities), and then quickly left again only for those services to fall into disuse. Any overseas colleague needs to think about the long-term benefits that collaboration will produce (and whether the benefits that you have in mind match the vision of the local people).
  • Communication, communication, communication. As someone who recently got married, I am constantly re-discovering the importance of improving all channels of communication. Constant communication is perhaps even more vital with colleagues living and working in completely different contexts. Too many promising projects have succeeded or failed primarily because of the quality of communication between the stakeholders.
  • Personal motivation is crucial. Make sure that a project is one that you personally are motivated about. At the end of the day, projects have joys and challenges, and to remain committed requires that you have personal motivation for the project to succeed.




Greetings I am Mary Gray from the American University in Washington, D.C. and a member of Statistics Without Borders. Recently, I was involved in surveying Rwandan prisoners.

Lessons Learned:

  • Sample the appropriate stakeholders. Two years after the genocide that killed 800,000 Rwandans, primarily Tutsis, there were 80,000–90,000 people imprisoned in a country of a few million, and the prison population continued to grow by as many as 10,000 per month, the only release being death. In spite of international horror over the brutal loss of life, international notions of justice demanded due process and some semblance of a speedy trial for the accused. The post-genocide Rwandan government rightly claimed that the fragile judicial system, deprived of most of its personnel and much of its infrastructure, could not handle the prospective caseload. Donor governments, who had already constructed several large new prisons, asserted that however horrible the crimes of which they were accused, it was not acceptable to put suspects in prison and throw away the key. Why not, proposed representatives of the US and other nations, with the agreement of the Rwandan government, begin by selecting a sample of prisoners to bring to trial?
  • With large populations, stratify the sample. Because the conditions under which large numbers of suspects were arrested and imprisoned varied across regions of the country, a stratified sample from four regions and the capital, Kigali, was used.
  • Prepare for the worst: records may be inadequate. By the time of the survey, prison conditions were generally adequate, but records were difficult to acquire. There were usually lists or card files that could be used for systematic sampling, but the information in them was limited. The crime was typically listed only as “genocide,” without the names of victims or arresting officers and with no reference to the time and place of the offense.
  • Prepare for the worst: data may be missing. By the time of the survey, little could be done about the missing data, so expectations of what information could be gathered had to be revised.
  • Educate your stakeholders when possible. The lessons learned are the usual ones: educate those involved in the importance of deciding carefully what data need to be collected. An unfortunate outcome was that those authorizing the survey did not understand that a random sample might not include those whom they were most eager to bring to trial.
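To make the stratification idea concrete, here is a small sketch of drawing the same fraction from each stratum. Everything in it (the regions, counts, and the 10% fraction) is hypothetical and for illustration only; it does not reproduce the Rwandan survey’s actual design:

```python
import random
from collections import defaultdict

random.seed(0)

def stratified_sample(frame, stratum_key, frac):
    """Draw the same fraction from every stratum so each one is represented."""
    strata = defaultdict(list)
    for record in frame:
        strata[record[stratum_key]].append(record)
    sample = []
    for members in strata.values():
        k = max(1, round(frac * len(members)))   # at least one per stratum
        sample.extend(random.sample(members, k))
    return sample

# Hypothetical sampling frame: prisoner records grouped by region
frame = [{"id": f"{region}-{i}", "region": region}
         for region, count in [("North", 40), ("South", 30), ("East", 20),
                               ("West", 10), ("Kigali", 50)]
         for i in range(count)]

sample = stratified_sample(frame, "region", 0.10)
print(len(sample))  # 15: every region contributes, in proportion to its size
```

Unlike a simple random sample, this design guarantees that even the smallest stratum appears in the sample, though it still cannot guarantee that any particular individual of interest is selected.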




Greetings, I am Gary Shapiro, co-founder and current Chair of Statistics Without Borders (SWB). Recently, we started collaborating with the American Evaluation Association. This week aea365 will feature knowledge and resources from SWB members appropriate for professional evaluators.

Tips:

  • What is SWB? SWB is an all-volunteer outreach group of the American Statistical Association. The group was formed in 2008 to provide pro bono statistical support to organizations, particularly those in developing countries. Although the group started out small, it has quickly grown to involve over 500 volunteer statisticians around the world.
  • Who can volunteer? SWB warmly welcomes volunteers from a wide range of backgrounds. We have members who are highly experienced statisticians, members who are new to the field of statistics but would like to work under the supervision of an experienced statistician, and members from other non-statistical disciplines (including the evaluation of community aid programs and data management).
  • How does one volunteer? Volunteering is simple: sign up online.
  • SWB projects – no job is too large or too small. SWB’s projects are the core of our mission. Through these projects we partner professional and student statisticians with international health workers and others in resource-limited settings who do not have statistical training. Examples include the design and analysis of epidemiological studies, the review of grant proposals for funding agencies in international health (with health considered very broadly), and on-site training for current health projects or for the development of local staff. The scope of our work is diverse, ranging from survey design to analysis to efforts to provide statistical software for developing nations.

Resources:

  • Get help with a project. Do you know a group with limited resources? SWB is always looking to expand our list of collaborators and projects. If you are working in absolutely any field and would benefit from working with an expert group of statisticians, then we would love to hear from you.  If you have any ideas for projects that fall within the general scope of statistics we would love to talk further with you. The SWB webpage provides the means to request assistance.
  • SWB on Facebook
  • SWB on Twitter
  • SWB on LinkedIn


