AEA365 | A Tip-a-Day by and for Evaluators

Hello! I am Karen Widmer, the Program Design TIG co-chair and a doctoral student at Claremont Graduate University. In 2014 the PD-TIG was officially approved for business and we have a full docket of presenters lined up for the 2015 AEA conference. We continue to be surprised at the many roles evaluators take at the design stage of a program and we’d like to share some of the themes that will be featured in Chicago.

Lesson Learned:

  • In the design stage of the program…
    • Evaluation questions can serve as a source of brainstorming. They trigger new ways to look at program aims.
    • When measures are laid out from the beginning, data collection can be more easily integrated into the daily work of the program.
    • More careful identification of participant characteristics at the outset can jumpstart your efforts to locate an appropriate comparison group.
    • A logic model is often welcomed by program stakeholders. Graphic depiction of the logical relationships between program elements gets everyone on the same page and equips them to anticipate strengths, weaknesses, gaps, and unintended consequences of program activities.
  • Being an evaluator at the design stage can be a mixed bag. Designers may reckon that the time for evaluation should be farther off and see your questions as push-back to their vision. Outside evaluators may then criticize your later participation in evaluating the program, claiming that you no longer have an unbiased perspective. These concerns are legitimate and we look forward to discussing them at the conference. 

Hot Tip:

  • A broad range of programs are suited for a priori evaluation planning. Programs undergoing development will need ongoing formative evaluation. Programs that deliver a product will need summative criteria for judging their value. For programs joining a consortium (where several programs share a common purpose and perhaps funding), evaluative thinking can assist with the protocol for effective reporting across consortium members.

Cool Trick:

  • Evaluation can be developed as part of the program to ensure:
    • Maximum utility—intended users can tell you in advance which information will be useful.
    • Maximum feasibility—available resources can be budgeted in advance.
    • Maximum propriety—the welfare of those affected can be given priority when decisions must be made about the program.
    • Maximum accuracy—pre-planning evaluation methods increases their dependability, helping to narrow alternate explanations.

By participating in program design activities, evaluators have the luxury of strengthening a program before it is launched. This is avant-garde work, so our TIG has its work cut out for it to develop the skills and protocols for doing the job well.

Rad Resource:

  • Join us at the PD-TIG sessions at the 2015 AEA Conference!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello! I am Michelle Kosmicki, Research Manager for NET Nebraska. I engage in many different types of media research, including broadcast media, digital media, social media, and web analytics. While most of the data I curate and analyze is used for monitoring performance and planning, nearly all of my grant funded reporting requires some form of media impact evaluation.

Media impact evaluation has been a hot topic over the past few years. It’s been discussed and explored at the AEA annual conference. The burning question remains: How on earth do you measure media impact at the local level?

While measuring media impact doesn’t require magical things like unicorns, it really does help to have a full understanding of the nature of media data. This can be difficult for evaluators who were trained in cause-effect quasi-experimental methods. It was quite difficult for me at first, too.

Lesson Learned: Get comfortable with the fact you have no control.  That’s correct. In most cases your media impact data will have been collected via interactions with self-selected participants. This is a different type of research than the typical recruited market research panel, so you will have very little control over who the participants are. Even if you are using data from a proprietary source such as Nielsen or Rentrak, you still have no control over their panel of participants, data imputation, or analysis of the data before it arrives in your office.

Lesson Learned: Get comfortable with “squishy” data.  Social media data seems straightforward. Someone clicks on a link in your tweet and you can see the number of link clicks. The question is, can you tell how many link clicks resulted in views of the linked digital media: a page view, story read, or video viewed?

Hot Tip: Learn how to use campaign tracking with URL tags.  Most web analytics can handle some form of URL tagging. It is the easiest way to track clicks on links on social media, in e-newsletters, blogs, and even on other websites. If you use Google Analytics, you can find directions here.
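The standard Google Analytics campaign tags are the `utm_source`, `utm_medium`, and `utm_campaign` query parameters. As a minimal sketch of what URL tagging looks like in practice (the URL and campaign names here are invented):

```python
from urllib.parse import urlencode, urlparse

def tag_url(base_url, source, medium, campaign):
    """Append standard UTM campaign parameters to a link."""
    params = urlencode({
        "utm_source": source,      # where the link appears (e.g., "twitter")
        "utm_medium": medium,      # channel type (e.g., "social", "email")
        "utm_campaign": campaign,  # campaign name for grouping in reports
    })
    # Use "&" if the URL already carries a query string, "?" otherwise.
    sep = "&" if urlparse(base_url).query else "?"
    return base_url + sep + params

tagged = tag_url("https://example.org/report", "twitter", "social", "spring_launch")
print(tagged)
# https://example.org/report?utm_source=twitter&utm_medium=social&utm_campaign=spring_launch
```

Any web analytics package that supports campaign tracking will then group clicks on the tagged link under the named source, medium, and campaign.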

Lesson Learned: Look at the big media picture.  Bringing all your media data together may seem like a strange thing to do. In reality, it is no different than using a mixed methods approach. Analyze the data separately and together. Look for patterns. Visualize it. The results may not be straightforward.

Lesson Learned: Assume nothing.  Media data is inherently full of bias. Always be aware of your own bias as you analyze and report on media data. Recognize the limits of your data and analysis.



Hi, my name is Cheryl Keeton. Throughout my career, I’ve been responsible for program evaluation, review, and success. Most recently I transitioned to independent consulting to focus my energy and passion on the field of evaluation. I want to share my experience as one way to make the transition.

Lessons Learned: Three years before I decided to become an independent evaluator, I began exploring evaluation from the 50,000-foot view. I attended my first AEA Conference to learn about the many ways evaluation is used outside of my field. I wanted to know who was doing evaluation, how the various approaches differed from the way I do things, and how I could use the sessions to help self-evaluate my strengths and weaknesses. The sessions were fascinating and the community of AEA members was very friendly and helpful. I made new friends and began to establish a network of support.

Next I attended an AEA Summer Institute for in-depth learning and practice. I knew I had a firm foundation but the summer study program allowed me to build and grow, extending my understanding, and learning techniques that were new to me.

Since those initial steps, I reached out to resources around me to help establish my independent consulting. Gail Barrington gave me the best advice for how to begin when I met her at an AEA conference: “do it now while you are still working.” Before making the transition, I read Dr. Barrington’s book, Consulting Start-Up and Management: A Guide for Evaluators and Applied Researchers. I got advice from the career center at the local community college and created a web presence. Dr. Barrington’s book has been the best investment and reference for me as the process unfolds.

I reached out to the evaluation community through AEA and my regional organization, volunteering on the local and national level and taking advantage of training such as Ann K. Emery’s Data Visualization workshop. Her blog and resources are amazing. I also follow Sheila Robinson, AEA365 Tip-a-Day by and for Evaluators, and advice on Potent Presentations, p2i.

I found that knowing what you are good at helps to provide direction as you begin. Fields of experience help me to narrow the scope so I know what projects to consider and where to place my energies for marketing. Gail Barrington outlines this in her book very well.

My experience transitioning from in-house evaluation to independent evaluation and consulting has confirmed for me that membership in AEA is essential to provide the big picture and grounding in principles, training is imperative to stay current, and connecting with others in the field is invaluable.



Jul 28, 2015

Awab on How to Gauge Learning of a Training

My name is Awab and I work as a Monitoring & Evaluation Specialist with the Higher Education Commission (HEC), Islamabad.

Gauging the learning from a training is always a challenge. Recently, we faced this challenge when the HEC conducted training for about 1,600 top managers of Pakistani universities. The trainings were conducted through implementation partners (IPs). We asked the IPs to conduct pre- and post-training tests so that we would know how much the participants learned from these trainings. The IPs conducted the pre- & post-tests, analyzed the data, and reported the difference between the pre-test and post-test scores. Since the post-test scores are always greater than the pre-test scores (in some of our cases, by more than 100%), the analysis painted a rosy picture of the trainings and everything looked fine (as shown in Figure 1).

Figure 1: Comparison of Pre & Post-tests, shared by one of the IPs.


As the training reports were passed on to the M&E Unit, we rejected the analysis, because it did not give us sufficient information to know the quality of training and plan for the future.

Hot Tips: We started by asking the right questions. We told the IPs that, from the pre- & post-test analyses, we were interested in knowing the answers to three questions: (i) what was the pre-existing learning level of the participants?; (ii) what is the net learning attributable to the training?; and (iii) what is the learning gap we need to bridge in future trainings?

Cool Tricks: The answers to the three questions could be found by analyzing the pre- & post-test scores in a very simple manner and putting the data in a stacked bar chart. We developed a model for the analysis and shared it with the IPs. The results were surprisingly interesting. The model gave a clear picture of the pre-existing learning, the net learning, and the learning gap. Thus, we were able not only to appreciate the IPs for the net learning attributable to them but also to hold them accountable for the learning gap and plan for future trainings.
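Under one reading of this model (an assumption on my part: all scores share a common maximum), the three quantities fall out of simple arithmetic:

```python
def decompose_learning(pre, post, max_score=100):
    """Split a participant's result into the three stacked-bar segments."""
    pre_existing = pre                # what the participant already knew
    net_learning = post - pre         # gain attributable to the training
    learning_gap = max_score - post   # what future training must bridge
    return pre_existing, net_learning, learning_gap

# A participant scoring 40 before and 75 after, out of 100:
print(decompose_learning(40, 75))  # (40, 35, 25)
```

Plotting the three segments per group as a stacked bar then shows, at a glance, how much of each bar was already there before the training began.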

Figure 2: Learning-based Model of Pre & Post-tests analysis.


Lessons Learned:

In evaluations, it is always good to ask yourself how you are going to use the data. Asking the right questions is half the solution.

For further details on how to gauge learning in a training, and to download the Excel sheets for data analysis on the given model, please click on the following links:



Hi, I’m Elissa Schloesser, founder and principal graphic designer at Visual Voice. I specialize in helping organizations visually communicate complex information, concepts, and ideas, including evaluation methods, theories, and findings.

Rad Resource: Data Stories is a podcast that covers topics on data visualization. I highly recommend it to anyone interested in the field. A recent episode, titled Disinformation Visualization, explores the “darker side” of data visualization.

I found this discussion particularly thought provoking and relevant to anyone communicating data. It challenges you to think critically about the data visualizations you create and consume.

Hot Tip: Can you spot a misleading chart? Below is an example of two charts created from the same dataset.


Both charts are technically correct, but they tell different stories based on how the data is represented. This example is a little extreme, but I included it to show how data can be manipulated in visualizations.

Hot Tip: Spot misleading data visualizations by considering these three things:

  • CONTENT: How was the data gathered?
    • Before the data is even visualized consider how it was collected.
  • STRUCTURE: How was the data structured or sampled?
    • Does the visualization only represent certain years or a particular age group?
  • PRESENTATION: How was the data presented?
    • Do the iconography, colors, annotations, etc. influence your perception of the data?
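As a hypothetical illustration of the PRESENTATION point: one of the oldest chart tricks is truncating the y-axis, which inflates how large a difference looks. The numbers below are invented, and the arithmetic shows why two nearly identical bars can look wildly different:

```python
def apparent_ratio(a, b, ymin=0.0):
    """How many times taller bar a is *drawn* than bar b
    when the y-axis starts at ymin instead of zero."""
    return (a - ymin) / (b - ymin)

# The underlying values, 52 and 50, differ by only 4%:
print(apparent_ratio(52, 50))      # 1.04 with an honest zero baseline
print(apparent_ratio(52, 50, 49))  # 3.0  when the axis starts at 49
```

Same data, same "technically correct" chart, but the truncated axis makes a 4% difference look like a threefold one.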

Lessons Learned: Think of data visualizations as “visual arguments” rather than “visual evidence”.



Hello, Talbot Bielefeldt here!  I’m with Clearwater Program Evaluation, based in Eugene, Oregon. I have been doing educational program evaluation since 1995. My clients include all levels of education, from kindergarten to graduate school, with an emphasis on STEM content and educational technology.

When I started out as an evaluator, I knew I was never going to do assessment. That was a different specialty, with its own steep learning curve. Furthermore, I worked with diverse clients in fields where I could not even understand the language, much less its meaning. I could only take results of measures that clients provided and plug them into my logic model. I was so young.

Today I accept that I have to deal with assessment, even though my original reservations still apply. Here is my advice to other reluctant testers.

Hot Tip: Get the program to tell you what matters. They may not know. The program may have been funded to implement a new learning technology because of the technology, not because of particular outcomes. Stay strong. Insist on the obvious questions (“Demonstrably improved outcomes? What outcomes? What demonstrations?”). Invoke the logic model if you have to (“Why would the input of a two-hour workshop lead to an outcome like changing practices that have been in place for 20 years?”). Most of all, make clear that what the program believes in is what matters.

Get the program to specify the evidence. I can easily convince a science teacher that my STEM problem-solving stops around the level of changing a light bulb. It is harder to get the instructor to articulate observable positive events that indicate advanced problem solving in students. Put the logic model away and ask the instructor to tell you a story about success. Once you have that story, earn your money by helping the program align their vision of success with political realities and the constraints of measurement.

Lesson Learned: Bite the intellectual bullet and learn the basics of item development and analysis. Or be prepared to hire consultants of your own. Or both. Programs get funded for doing new things. New things are unlikely to have off-the-shelf assessments and psychometric norms.

Lesson Learned: Finally, stay in touch with evaluation communities that are dealing with similar programs. If you are lucky, some other reluctant testers will have solved some of your problems for you. Keep in mind that the fair price of luck in this arena is to make contributions of your own.



Hello evaluators! I’m Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor.

In 2010, I traveled to San Antonio, TX to attend Evaluation 2010. I was on my own and knew no one else attending the conference. I made my first friend there by recognizing the name tag of the author of an early aea365 post I admired, and introducing myself as we both waited to attend a session. This blog has been a tremendous connector for me since its inception.

Networking with other evaluators has been a highlight of my career and I continue to actively seek out opportunities to meet new people, whether it is in person at a conference, or online. I’ve made some wonderful new friends this way, and have often reached out to members of the evaluation community for help and advice when evaluation work gets tricky.

Fortunately, many evaluators are more and more connected via social media and using it to collect and share information and resources.

Lesson Learned: I was reluctant to use social media for professional purposes until just a few years ago, but now I find I very much enjoy the learning and interaction from these channels, among others:

Twitter: It’s not just what celebrities ate for breakfast! AEA maintains a list of evaluators and evaluation organizations on Twitter here. I’ve also been “collecting” evaluators on Twitter; see my list of nearly 400 here.

Facebook: It’s not just for Internet memes and selfies! Some AEA TIGs, affiliates, and evaluation associations have active Facebook pages (as do many independent evaluators and consultancies). A few I’m aware of include:

Pinterest: It’s not just for crafts and recipes! Kylie Hutchinson may be the most active evaluator I know; she maintains 9 evaluation-related boards. Ann K. Emery also maintains many boards related to data visualization.

Google+: It’s not just for “techy” people! It’s also a place where evaluators share ideas. David Fetterman is one evaluator particularly active in this space. Stephanie Evergreen can also be found here.

YouTube: It’s not just for silly cat videos! AEA has its own channel here, and AEA affiliate Eastern Evaluation Research Society (EERS) now has its own YouTube channel.

AEA maintains a list of member blogs and Twitter handles here. If you have a blog or Twitter handle that doesn’t appear on the list, write to info@eval.org and ask to have it added!

Do you use these social media channels professionally? Add your info to the comments so we can all connect!

(Social Media icons by Aha-Soft Team via iconfinder.com)


My name is Sophia Guevara, Program Co-Chair for the Social Network Analysis (SNA) TIG.  This week, several evaluation professionals have shared their thoughts on social network analysis with this blog’s readers. With posts ranging from logic models to examples of applying social network analysis to a wide range of evaluation questions, you’ve hopefully gained a better understanding of it.

Rad Resource: The SNA in Evaluation LinkedIn group. This group provides TIG group members with an opportunity to discuss topics of interest for those utilizing or learning about social network analysis.

Rad Resource: Join the SNA TIG group. As a member, make sure to make use of the eGroup discussion option.

Rad Resource: SNA TIG business meeting. If you are thinking of joining the TIG or have already joined and are looking to connect with other evaluation professionals making use of SNA, the business meeting is an excellent place to do just that. The SNA TIG business meeting is held at the annual American Evaluation Association conference.

Rad Resource: AEA public eLibrary and the Coffee Break Archive. There are a variety of resources that can help you learn more about the topic. For example, if you are looking to learn more about the use of SNA-related programs, check out Dr. Geletta’s coffee break webinar focused on importing spreadsheet data into Gephi.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions to aea365 all this week come from our SNA TIG members.

 

No tags


Hello from Andres Lazaro Lopez and Mari Kemis from the Research Institute for Studies in Education at Iowa State University. As STEM education becomes more of a national priority, state governments and education professionals are increasingly collaborating with nonprofits and businesses to implement statewide STEM initiatives. Supported by National Science Foundation funding, we have been tasked to conduct a process evaluation of the Iowa statewide STEM initiative in order to both assess Iowa’s initiative and create a logic model that will help inform other states on model STEM evaluation.

While social network analysis (SNA) has become commonly used to examine STEM challenges and strategies for advancement (particularly for women faculty, racial minorities, young girls, and STEM teacher turnover), to our knowledge we are the first to use SNA specifically to understand a statewide STEM initiative’s collaboration, growth, potential, and bias. Our evaluation focuses specifically on the state’s six regional STEM networks, their growth and density over the initiative’s years (‘07-’15), and the professional affiliations of its collaborators. How we translated that into actionable decision points for key stakeholders is the focus of this post.

Lessons Learned: With interest in both the boundaries of the statewide network and the ego networks of key STEM players, we decided to use both free and fixed recall approaches. Using data from an extensive document analysis, we identified 391 STEM professionals for our roster approach. We asked respondents to categorize this list by people they knew and worked with. Next, the free recall section allowed respondents to list the professionals they rely on most to accomplish their STEM work and their level of weekly communication, generating 483 additional names not identified with the roster approach. Together, the two strategies allowed us to measure potential and actual collaboration along the lines of the known network of STEM professionals (roster) and individuals’ local networks (free recall).
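The deduplication step described above amounts to a set difference between the free-recall names and the roster. A toy sketch (the names are invented; the real lists held 391 roster entries and 483 newly recalled ones):

```python
# Names identified through document analysis (the fixed roster):
roster = {"Ana", "Ben", "Carla", "Dev"}

# Names respondents volunteered in the free-recall section:
free_recall = {"Ben", "Dev", "Erin", "Farid"}

# People the roster approach missed entirely:
new_names = free_recall - roster
print(sorted(new_names))  # ['Erin', 'Farid']
```

Keeping the two name sources distinct also preserves the roster/free-recall contrast when measuring potential versus actual collaboration.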


Lessons Learned: The data offered compelling information for both regional and statewide use. Centrality measurements helped identify regional players that had important network positions but were underutilized. Network diameter and clique score measurements informed the executive council of overall network health and specific areas that require initiative resources.
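Network diameter, one of the health measures mentioned above, is the longest shortest path between any two reachable actors; it can be computed with breadth-first search. A minimal sketch on a made-up five-person network (real analyses would use a package such as Gephi):

```python
from collections import deque

def shortest_dists(adj, src):
    """BFS distances from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Longest shortest path over all reachable pairs."""
    return max(d for node in adj for d in shortest_dists(adj, node).values())

# A chain A-B-C-D with a spur E off B:
adj = {"A": ["B"], "B": ["A", "C", "E"], "C": ["B", "D"], "D": ["C"], "E": ["B"]}
print(diameter(adj))  # 3 (e.g., A to D)
```

A growing diameter over time can signal that the network is stretching into loosely connected chains rather than densifying.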

Lessons Learned: Most importantly, the SNA data allowed the initiative to see beyond the usual go-to stakeholders. With a variety of SNA measurements and our three variables, we have been successful in identifying a diverse list of stakeholders while offering suggestions for how to trim down the network’s size without creating single points of fracture. SNA has been an invaluable tool for formally classifying and evaluating the logistics of key STEM players. We recommend that other STEM initiatives interested in using SNA begin identifying a roster of collaborators early in the development of their initiative.


 

Hi, I’m Rebecca Woodland, an Associate Professor of Educational Leadership at UMass Amherst. If there is one thing that I know for certain, it’s that relationships matter, and how we are connected influences the quality and outcomes of our shared endeavors. Social Network Analysis (SNA) has had a profound influence on my evaluation work. I want to introduce and encourage evaluators (who may not know much about SNA) to consider integrating it into their own practice.

Simply put, SNA is all about telling the story of how “ties” between people or groups form, and how these “links” may influence important program objectives and outcomes. With SNA you can mathematically describe and visually see connections between people. You can use SNA to explain and predict how ties between “actors” influence the attainment of program goals.

Hot Tips: Evaluators can use SNA to address a wide-range of pressing program evaluation questions such as these:

  1. Want to know whether a program has the capacity to spread a new or novel intervention? SNA was used to evaluate school-level capacity to support or constrain instructional innovation.
  2. Want to know how large, inter-agency partnerships develop and how inter-agency collaboration correlates with intended program outcomes? Evaluators used SNA to track the development and impact of a Safe Schools/Healthy Students inter-agency community mental health network.
  3. Want to know who influences the budgeting and disbursement of funds for advocacy programs in fragile environments? SNA was used to map the flow of resources and funding patterns for new-born survival activities in northern Nigeria.

Lesson Learned: Possibly the biggest wow factor is that SNA enables the creation of illustrative visuals that display complex information, such as intra-organizational communication flow and the location of network “brokers,” “hubs,” “isolates” and “cliques”, in user-friendly ways.
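Some of those network roles fall straight out of simple degree counts. A toy sketch with invented names (tools like VisuaLyzer or Gephi compute these, and far richer measures, for you):

```python
from collections import defaultdict

nodes = {"Ana", "Ben", "Carla", "Dev", "Erin"}
edges = [("Ana", "Ben"), ("Ben", "Carla"), ("Ben", "Dev"), ("Carla", "Dev")]

# Count ties per actor (degree):
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

hub = max(degree, key=degree.get)                # most-connected actor
isolates = [n for n in nodes if degree[n] == 0]  # actors with no ties at all

print(hub)       # Ben, with 3 ties
print(isolates)  # ['Erin']
```

Even this crude count is often enough to start a useful stakeholder conversation: who is the communication hub, and who has no ties into the network at all?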

Image used under a Creative Commons Attribution 3.0 License

Rad Resources

  • VisuaLyzer is an easy-to-use program (with a 30-day free trial) that enables you to create sociograms of any network of interest to you.

