AEA365 | A Tip-a-Day by and for Evaluators

Hi, I am Jim Van Haneghan, Professor of Professional Studies at the University of South Alabama.

I am writing today about Nate Silver’s book The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t. It is an intriguing book about statistical prediction and “big data.” For those who have not read the book, it covers a variety of topics, including economic forecasting, sports prediction, gambling, political polling, epidemics, terrorist attacks, and climate change.

Lessons Learned:

1. The book provided me with both reminders of important habits of mind for evaluators and some new ways to think about data. For example, early on Silver talks about being “out of sample.” The idea is that the data we collect may not be the right data for the context we are addressing. As evaluators, we have to ask whether the logic model we are following leads us to data appropriate to the evaluation at hand. While this seems obvious, we often go into evaluation contexts with one set of expectations only to find them changed, making the model we developed inaccurate. For example, I am currently rethinking my approach to school improvement evaluations because of changes in how schools are now evaluated in our state.

2. Another highlight was Silver’s description of Tetlock’s ideas about experts who are foxes versus hedgehogs. Hedgehogs thrive on a single big idea and limited data. Consequently, they are often horrible prognosticators (political pundits on TV, for example). Foxes, on the other hand, are more self-critical, look at data from a variety of perspectives, examine many sources, and draw more modest conclusions. I like to believe that evaluators act like foxes, examining a variety of data to make more informed decisions. Sometimes clients want us to act like hedgehogs, making bold predictions based on limited information. It is important to stay fox-like in such situations.

3. Another valuable discussion is Silver’s consideration of Bayesian probability as a means of improving prediction. He discusses paying attention to prior probabilities, adjusting the probabilities of outcomes as new information comes in, and focusing on the conditional probabilities of events. In some respects, I believe many evaluators are intuitive Bayesians. Attempts to use Bayesian analysis in evaluation are not new, but the book has led me to think about new ways to integrate this approach (a small worked example follows this list).

4. Another important lesson concerns the noise in our data. This is especially true in education, where the measures are psychometrically noisy and sometimes not plentiful enough to distinguish the signal from the noise.

5. Finally, Silver reminds us that the advent of “big data” does not change the need to attach meaning to data. The availability of more data does not relieve us of the need for rigorous interpretation.
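
To make the Bayesian updating in point 3 concrete, here is a minimal sketch in Python; the numbers are invented for illustration and do not come from Silver’s book.

    # Minimal sketch of Bayesian updating, with invented numbers.
    # Prior: 30% of programs like this one are truly effective.
    # A positive interim indicator shows up for 80% of effective programs
    # and for 25% of ineffective ones.
    prior = 0.30                # P(effective)
    p_pos_given_eff = 0.80      # P(positive | effective)
    p_pos_given_ineff = 0.25    # P(positive | ineffective)

    # Bayes' theorem: P(effective | positive)
    p_pos = p_pos_given_eff * prior + p_pos_given_ineff * (1 - prior)
    posterior = p_pos_given_eff * prior / p_pos
    print(f"P(effective | positive) = {posterior:.2f}")  # about 0.58

Even one round of updating like this tempers both over- and under-confidence, which is exactly the habit of mind Silver recommends.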

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I’m Marco Muñoz, Evaluation Specialist at Jefferson County Public Schools (Louisville, KY) and Past-President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE). Today, I am writing about evaluations within a large urban school system.

Lessons Learned: In a recent presentation at CREATE, we discussed how heuristic practices can help with evaluation and research in a large urban district (see this article). Using case study methodology, we examined the accountability, planning, evaluation, testing, and research functions of a research department in a large urban school system. The mission, structural organization, and processes of research and evaluation are discussed in light of current demands in the educational arena. The case study shows how the research department receives requests for data, research, and evaluation from inside and outside the educational system, fulfilling its mission to serve the informational needs of different stakeholders (local, state, federal).

Four themes related to a school district research department are discussed: (1) basic contextualization, (2) deliverables of work, (3) structures and processes, and (4) concluding reflections about implications for policy, theory, and practice. Topics include the need for an evaluation model and the importance of professional standards that guarantee the trustworthiness of data, research, and evaluation information. The multiple roles and functions associated with supplying data for educational decision making are also highlighted.

Hot Tip: We need to have a framework as well as clear guidelines. Without a doubt, The Program Evaluation Standards is an outstanding source to guide your evaluation work in school systems. In addition, we have to know the difference between research and evaluation, and one of the best resources continues to be the now-classic book by Fitzpatrick, Sanders, and Worthen, Program Evaluation: Alternative Approaches and Practical Guidelines. I would also highly recommend the Encyclopedia of Evaluation, edited by Sandra Mathison, since it covers a wide range of topics.

Rad Resource: Daniel Stufflebeam developed a Program Evaluation Checklist. It may be downloaded from the Evaluation Center at Western Michigan University (http://www.wmich.edu/evalctr/checklists/), along with a number of other evaluation-oriented checklists.


If you have any ideas or resources to share regarding evaluations within a large urban school system, please add them to the comments for this post.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Greetings! I’m Nichole Stewart, a doctoral student in UMBC’s Public Policy program in the evaluation and analytical methods track. I currently work as an analyst, data manager, and evaluator across a few different sites, including the Baltimore Integration Partnership, the Baltimore Workforce Funders Collaborative, and Carson Research Consulting, Inc.

Lessons Learned: The Growing Role of Data Science for the “Little” Data in Program Evaluation. Evaluators are increasingly engaged in data science along every step of the evaluation cycle. Collecting participant-level data and developing indicators to measure program outputs and outcomes is now only a small part of the puzzle. Evaluators are working with more complex data sources (administrative data), navigating and querying data management systems (ETO), exploring advanced analytic methods (propensity score matching), and using technology to visualize evaluation findings (R, Tableau).

Evaluators Also Use Big Data. Large secondary datasets are appropriate in needs assessments and for measuring population-level outcomes. Community-level data, or data available for small levels of geography, provide context and can be used to derive neighborhood indicators. Evaluators must not only be able to access and manipulate this and other kinds of Big Data, but ultimately learn to use data science to maximize their value.

Rad Resource: The American Community Survey (ACS) is an especially rich, though recently controversial, Big Data resource for evaluators. The survey offers a wide range of data elements for areas as small as the census block group and as specific as the percentage of carpoolers working in service occupations in a census tract.
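
For evaluators comfortable with a little code, ACS estimates can also be pulled programmatically from the Census Bureau’s API. Here is a minimal sketch in Python; the endpoint vintage and the B23025 variable codes (civilian labor force and unemployed) are my assumptions, so verify them against the API documentation before relying on the numbers.

    # Minimal sketch: tract-level unemployment for Baltimore City (FIPS 24510)
    # from the 5-year ACS. Endpoint vintage and variable codes are assumptions;
    # check them at api.census.gov.
    import requests

    url = "https://api.census.gov/data/2012/acs/acs5"
    params = {
        "get": "NAME,B23025_003E,B23025_005E",  # civilian labor force, unemployed
        "for": "tract:*",
        "in": "state:24 county:510",            # Maryland, Baltimore City
    }
    rows = requests.get(url, params=params).json()

    for name, labor_force, unemployed, *_ in rows[1:6]:  # skip the header row
        lf, un = float(labor_force), float(unemployed)
        print(f"{name}: {un / lf:.1%} unemployed" if lf else f"{name}: no labor force")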

Rad Resource: The Census Bureau’s OnTheMap application is an interactive web-based tool that provides counts of jobs and workers and information about commuting patterns that I explored in an AEA Coffee Break webinar.

Lessons Learned: Data Science is Storytelling: Below is a map of unemployment rates by census tract from the ACS for Baltimore City and the surrounding counties. The unemployment data are overlaid with data extracted from OnTheMap depicting job density and the top 25 work destinations for Baltimore City residents. The map shows that (1) there are high concentrations of unemployed residents in inner-city Baltimore compared to other areas, (2) jobs in the region are concentrated in downtown Baltimore and along public transportation lines and the beltway, and (3) many Baltimore City workers commute to areas in the surrounding counties for work. Alone, each of these datasets is robust, but their power lies in visualizing the data and interpreting the relevant intersections between them.
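
A map like this can be reproduced with open tools as well. Below is a minimal sketch using geopandas and matplotlib; the file and column names are hypothetical placeholders for a tract shapefile, the ACS rates, and points exported from OnTheMap.

    # Minimal sketch: choropleth of tract unemployment rates with job
    # destinations overlaid. All file and column names are hypothetical.
    import geopandas as gpd
    import pandas as pd
    import matplotlib.pyplot as plt

    tracts = gpd.read_file("baltimore_tracts.shp")
    rates = pd.read_csv("acs_unemployment.csv", dtype={"GEOID": str})
    merged = tracts.merge(rates, on="GEOID")

    ax = merged.plot(column="unemp_rate", cmap="OrRd", legend=True, figsize=(8, 8))
    gpd.read_file("onthemap_destinations.shp").plot(ax=ax, color="navy", markersize=8)

    ax.set_title("Unemployment rate by census tract, with top work destinations")
    ax.set_axis_off()
    plt.savefig("unemployment_map.png", dpi=200)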

[Map: unemployment rates by census tract (ACS), overlaid with OnTheMap job density and top 25 work destinations for Baltimore City residents]

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

Hi! I’m Silvana Bialosiewicz, an advanced doctoral student at Claremont Graduate University (CGU) and Senior Research Associate at the Claremont Evaluation Center. My goal as an applied researcher is to help develop and disseminate “best-practices” for high-quality evaluation of programs that serve children. Today I’d like to share some strategies for collecting valid and reliable data from young children.

Research on youth-program evaluation and child development reveals that:

  • Children less than nine years old possess limited abilities to accurately self-report, especially by way of written surveys
  • Previously validated measures are not always appropriate for diverse samples of children

Therefore, a critical step in designing evaluations of youth programs is developing and/or choosing measures that are sensitive to children’s language skills, reading and writing abilities, and life experiences.

Hot Tip: Consider using alternatives to written surveys, such as interviews, when collecting data from children younger than nine. If written surveys are used, be mindful that young children may be unable to understand complex questions or accurately recall past experiences. Surveys for young children should be administered orally, use simple language, and offer response options that children can easily understand.

Hot Tip: Do not assume that a measure that has been demonstrated to be valid in a previous study is appropriate for your participants, especially when the program serves a diverse population of children. The majority of psychological measures for children have been developed and normed on samples of high-SES Caucasian children and cannot be assumed to be valid and reliable for diverse samples of children (e.g., English Language Learners, ethnic and cultural minorities, children with physical or sensory disabilities).

Hot Tip: Pilot test your measures, even previously validated measures, before launching full scale data collection to ensure developmental and contextual appropriateness.

Rad Resources: Researching with Children & Young People by Tisdall, Davis, & Gallagher and Through the Eyes of the Child: Obtaining Self-Reports from Children by La Greca are two great books for anyone looking to expand their knowledge on this topic.

Other AEA365 posts on this topic:

Susan Menkes on Constructing Developmentally Sensitive Questions 

Tiffany Berry on Using Developmental Psychology to Promote the Whole Child in Educational Evaluations

Krista Collins and Chad Green on Designing Evaluations with the Whole Child in Mind

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)(3) nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.

[Graphic: share of nonprofits with a full-time employee dedicated to evaluation]

4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.

[Graphic: how organizations that engaged in evaluation used their findings]

6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples here by Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

I am Tarek Azzam, assistant professor at Claremont Graduate University and associate director of the Claremont Evaluation Center.

Today I want to talk about crowdsourcing and how it can potentially be used in evaluation practice. Generally speaking, crowdsourcing is the process of using the power of many individuals (i.e., the crowd) to accomplish specific tasks. The idea has been around for a long time (e.g., the creation of the Oxford English Dictionary), but recent developments in technology have made tapping the power of the crowd much easier.

I will focus on just one crowdsourcing website, Amazon’s Mechanical Turk (MTurk), because it is the most widely known, used, and studied crowdsourcing site. The site facilitates interactions between “requesters” and “workers” (see figures below). A requester can describe a task (e.g., please complete a survey), set the payment and the allotted time for completing it, and determine the qualifications needed to take it on. This information is then posted on the MTurk website, and interested individuals who qualify can complete the task for the promised payment.

[Screenshot: the MTurk welcome page, https://www.mturk.com/mturk/welcome]

This facilitated marketplace has some really interesting implications for evaluation practice. For example, evaluators can use MTurk to establish the validity and reliability of survey instruments before giving them to the intended participants. By posting a survey on MTurk and collecting responses from individuals with background characteristics similar to those of your intended participants, an evaluator can establish the reliability of a measure, get feedback on the items, and, if needed, translate the items into another language. All of this can be accomplished in a matter of days. Personally, I have been able to collect 500 responses to a 15-minute survey, at a cost of 55 cents per response, in less than three days.
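
As one example of that workflow, once pilot responses come back you can estimate internal consistency in a few lines. Here is a minimal sketch of Cronbach’s alpha in Python; the file and column layout are hypothetical.

    # Minimal sketch: Cronbach's alpha for a pilot survey fielded on MTurk.
    # Assumes a CSV of numeric item responses, one row per respondent;
    # the file and columns (q1..q10) are hypothetical.
    import pandas as pd

    items = pd.read_csv("pilot_responses.csv")

    k = items.shape[1]                         # number of items
    item_var = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

    alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")   # ~0.70+ is a common rule of thumb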

Hot Tip: When selecting eligibility criteria for MTurk participants, choose those with approval ratings of 95% or higher.
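
If you post HITs through the API rather than the web interface, the same filter can be set programmatically. Below is a minimal sketch using boto3; as I understand it, the qualification type ID shown is the system ID for approval rate, and the title, reward, and file name are illustrative, so check everything against the current MTurk documentation.

    # Minimal sketch: posting a survey HIT limited to workers with a 95%+
    # approval rating. The IDs, reward, and file name are illustrative.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    approval_95 = {
        # System qualification for percent of assignments approved (verify ID)
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    }

    mturk.create_hit(
        Title="15-minute research survey",
        Description="Pilot survey for instrument validation",
        Reward="0.55",
        MaxAssignments=500,
        AssignmentDurationInSeconds=30 * 60,
        LifetimeInSeconds=3 * 24 * 60 * 60,
        Question=open("survey_question.xml").read(),  # ExternalQuestion XML (hypothetical)
        QualificationRequirements=[approval_95],
    )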

There are other uses that I am currently experimenting with. For example:

  • Can MTurk respondents be used to create a matched comparison group in evaluation studies?
  • Is it possible to use MTurk respondents in a matched group pre-post design?
  • Is it possible to use MTurk to help with the analysis and coding of qualitative data?

These questions have yet to be answered, but I will keep you updated as we progress in exploring the limits of crowdsourcing in evaluation practice.

[Screenshot: creating a new project in the MTurk requester interface, https://requester.mturk.com/create/projects/new]

Hot Tip: I will be presenting a Coffee Break Demonstration (free for American Evaluation Association (AEA) members) on Crowdsourcing on Thursday April 18, 2013 from 2:00-2:20pm EDT. Hope to see you there.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hello all, I’m Kim Firth Leonard, American Evaluation Association (AEA) member since 2008, and President of local affiliate OPEN, the Oregon Program Evaluators Network. I currently work at Marylhurst University in Portland, Oregon, primarily on assessment of student learning and academic programs, though I also dabble in institutional research. I also do contract work in program evaluation via Leonard Research and Evaluation LLC.

Rad Resource – actionable data: I started the blog actionable data in 2011 and post somewhat regularly (a few times per month whenever possible) with a handful of friends and co-authors. The blog advocates for the collection of meaningful and useful data, and for wise use of those data. Our posts span a range of topics, often related to program evaluation, though most focus more specifically on data and data use.

Hot Tips – favorite posts: A few of my recent, favorite, and most-visited posts are collected on the blog itself, at http://actionabledata.wordpress.com/.

Lessons Learned – why I blog: For me, blogging is an opportunity to question, explore, and learn as well as to share what I know. To think together with my co-authors and anyone willing to read (and comment) along with us! A ‘manifesto’ for actionable data is here.

I also ‘micro blog’ on Twitter (@KimFLeonard), which has been a wonderful way to engage others with my blog and to find people who are doing interesting work. Between the blog and Twitter, I have discovered many wonderful resources and connected to other great evaluators (including Sheila B. Robinson, who is graciously co-authoring a series of posts with me).

Lessons Learned – what I’ve learned: How liberating and enlightening it can be to throw an idea online. Or to ponder something ‘out loud.’ And that blogging, especially when accompanied by conversation via social media, can be an amazing networking and learning tool.

This winter, we’re continuing our series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hi, I’m Stacy Johnson, Senior Research Analyst at the Improve Group, a national and international evaluation consulting organization based in Saint Paul, MN. The Improve Group works with organizations to make the most of information, navigate complexity, and ensure their investments of time and money lead to meaningful, sustained impact. This past October at Evaluation 2012, I had the opportunity to present on my experiences analyzing longitudinal data that was less than perfect.

Being thoughtful in the planning and data collection process plays a crucial role in successfully collecting the data you need to address your evaluation questions. But what happens if you were not involved in that process? The first step is to assess what you have to work with and figure out how to move forward. Sometimes you may be pleasantly surprised by what a great job was done gathering data over time. Other times you need to make the best of what you have and focus on what you can control from this point forward. This aea365 post includes some of the lessons I have learned and tips that helped me along the way.

Lesson Learned: Data collection tools often change over time, including changes to item wording, changes to response options, and items being added or eliminated.

Hot Tips:

  • Recommend selecting key variables to track and keep consistent over time
  • Explain the implications of making changes
  • Facilitate thinking ahead
  • Create a codebook or database to use as a guide for what should be tracked (one way to reconcile changed response options across waves is sketched after these tips)
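
As promised above, here is a minimal sketch in pandas of reconciling a survey item whose response options changed between waves; the files, column names, and recoding map are hypothetical.

    # Minimal sketch: harmonizing an item whose response scale changed between
    # waves so both waves can be analyzed together. Names are hypothetical.
    import pandas as pd

    wave1 = pd.read_csv("wave1.csv")  # satisfaction asked on a 1-4 scale
    wave2 = pd.read_csv("wave2.csv")  # the same item re-asked on a 1-5 scale

    # Collapse the 5-point wave onto the 4-point wave's categories,
    # and write the decision down so the next analyst is not guessing.
    recode_5_to_4 = {1: 1, 2: 2, 3: 3, 4: 4, 5: 4}
    wave2["satisfaction"] = wave2["satisfaction"].map(recode_5_to_4)

    wave1["wave"], wave2["wave"] = 1, 2
    combined = pd.concat([wave1, wave2], ignore_index=True)
    print(combined.groupby("wave")["satisfaction"].describe())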

Lesson Learned: The process of collecting data can be unclear when data are collected by a variety of people.

Hot Tips:

  • Create a formal process with detailed instructions and protocols
  • Facilitate clear communication and training of data collectors

Lesson Learned: Messy datasets! Data arrive in different formats, poorly cleaned or incorrectly merged; data cleaning decisions are undocumented; and variable names, labels, and coding are unknown. (One way to document and apply such fixes is sketched after the tips below.)

Hot Tips:

  • Request original data
  • Talk to those involved about the decisions that were made
  • Get copies of instruments
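
Here is the sketch mentioned above: a minimal pandas example of using a simple codebook to rename cryptic variables and of validating a merge so duplicated IDs raise an error instead of silently multiplying rows. All names are hypothetical.

    # Minimal sketch: rename variables from a codebook, then merge defensively.
    # The file names and codebook entries are hypothetical.
    import pandas as pd

    codebook = {
        "v12x": "participant_id",
        "s3q2": "pretest_score",
        "s9q2": "posttest_score",
    }

    pre = pd.read_csv("pretest_raw.csv").rename(columns=codebook)
    post = pd.read_csv("posttest_raw.csv").rename(columns=codebook)

    # validate="one_to_one" fails loudly if IDs are duplicated on either side,
    # catching the kind of incorrect merge described above.
    merged = pre.merge(post, on="participant_id", how="inner", validate="one_to_one")
    print(f"{len(merged)} participants have both pre and post records")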

Lesson Learned: There is a need to manage unrealistic expectations when you are asked questions the data cannot answer.

Hot Tips:

  • Discuss what can be shown (and what cannot) with the data collected
  • Balance expectations with reality – you may need to guide others on how feasible their requests are and the limitations of the data
  • Facilitate thinking ahead (again) – help others think about their future evaluation needs and how their work may evolve over time

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings. We are Bill Shennum and Kate LaVelle, staff members in the Research Department at Five Acres, a nonprofit child and family services agency located in Altadena, CA, and serving the greater Los Angeles area. We work as internal evaluators to support outcome measurement and continuous quality improvement within the organization.

In our roles as internal evaluators, we work with agency staff to develop data collection for key processes and outcomes, and assist staff in developing program improvement goals and activities. The quantitative and qualitative data included in our internal evaluation reports also support other administrative functions, including grant-writing, accreditation, and program development.

Lessons Learned: In the course of this work we find it useful to incorporate data from our two primary funders, the Los Angeles County Departments of Children and Family Services (DCFS) and Mental Health (DMH). We use these data for a variety of purposes, such as to compare our agency’s outcomes to other service providers in LA County, establish benchmarks for child and program outcomes, and provide information on trends in the child welfare field to inform program development. Both DCFS and DMH make extensive statistical information available to the public on their websites.

Rad Resources:

1. Los Angeles County DCFS (http://dcfs.co.la.ca.us/) provides clickable fact sheets on its “About Us” tab, covering everything from demographics and maltreatment statistics to placement trends and foster care resources. The site has many other reports, including Wraparound performance summaries and individual group home compliance reports.

2. Los Angeles County DMH (http://psbqi.dmh.lacounty.gov/) also makes statistical information of interest to evaluators available through its Program Support Bureau. The “Data Reports and Maps” link provides access to countywide and area-specific demographic and performance data for child and adult mental health, including geographic information system mapping of mental health resources.

Southern California evaluators who work in child welfare and/or mental health will find much information of interest on the above sites. More outcomes and reports are added every year, so check back often.

Hot Tip: For those of you visiting Anaheim for the 2011 American Evaluation Association conference and interested in going to the beach, check out the surf at the Huntington Beach pier, about 10 miles from the conference’s headquarters hotel. This is the centerpiece of Southern California’s original “Surf City” and a perfect place to take a break from the conference and check out the local beach scene.

The American Evaluation Association is celebrating this week with our colleagues at the Southern California Evaluation Association (SCEA), an AEA affiliate. The contributions all this week to aea365 come from SCEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Cheers, aea365 readers! My name is Alex da Silva, from Brazil. While I can run analyses in SAS, I find myself turning more frequently to Microsoft Excel because so many more people can use it. If I set up something in Excel, I can be more confident that my collaborators will be able to review my work and that program staff will be able to carry out further analyses. Excel is a key tool for empowerment: it is the one analysis tool that most laypeople both have ready access to and usually know, at a basic level, how to use. Here are three writers I turn to when expanding my knowledge and skills around this basic tool.

Beginner Rad Resource – ExcelCharts.com: Written by Jorge Camoes from Lisbon, Portugal, this blog takes a unique approach: it focuses on the bigger issues of data visualization and good design but happens to use Excel to deliver charts and dashboards. Building on the work of Edward Tufte, the concepts explored are accessible to anyone, though the implementation can call for a good understanding of Excel (or an alternate program; he stresses concepts over how-tos).

Recent Example Posts:

  • Don’t Make me Think (About Your Charts!)!
  • Anatomy of a Bad Chart
  • The Healing Power of Statistics

Intermediate Rad Resource – Chandoo.org: This site promises to help you “become awesome in Excel” and is run by Purna Duggirala from his home in India. Duggirala can be a bit heavy on the selling (he has training and e-books available), but he offers great guidance written in accessible language. Most of his tips do not require coding, and for those that do, he provides a detailed walk-through of what needs to be done.

Recent Example Posts:

  • Dummy Data – How to use the Random Functions
  • Updating Report Filters using simple macro – a Dynamic Pivot Chart Example
  • What are Pivot Table Report Filters and How to use them?

Advanced Rad Resource – Daily Dose of Excel: Well, it actually comes out about every other day, but this blog has multiple Excel gurus who contribute regular tips that really push me to make the most of Excel. It is not for the faint of heart: although some of the tips are at a beginner’s level, most involve coding and assume you are comfortable at least with VBA.

Recent Example Posts:

  • Read INI File in VBA
  • Binary to Decimal Conversion
  • Copying HTML Tables over Merged Cells

Hot Tip: Be sure to check out the archives!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
