AEA365 | A Tip-a-Day by and for Evaluators

We are Timothy Guetterman, a doctoral student, and Delwyn Harnisch, a professor, in the College of Education and Human Sciences at the University of Nebraska-Lincoln.

Mixed methods approaches can be useful in assessing needs and readiness to learn among professional workshop participants. Combining qualitative and quantitative methods can enhance triangulation and the completeness of findings. We recently mixed methods while evaluating a weeklong workshop delivered to medical educators in Kazakhstan and experienced firsthand how mixing can aid evaluation activities.

The international collaboration between teams in the U.S. and Kazakhstan presented challenges that we mitigated through technologies such as email, Skype, and Dropbox. Surveys administered before, during, and after the workshop through an online tool, Qualtrics, were important to guide implementation, continually assess learning, and understand participants’ perspectives.

Hot Tips:

  • Guiding Implementation. Mixed methods within the needs and readiness assessment served a formative purpose, helping us tailor the workshop to specific participant needs. Mixed methods analyses yielded rich details about what participants wanted and needed that would have been difficult to anticipate with a quantitative instrument alone. Online surveys presented a way to connect with participants early. Beyond quantitative scales, we asked open-ended questions (e.g., “What do you hope to learn?”). Because data were immediately available, findings guided the workshop implementation.
  • Continually Assess Learning. Throughout the workshop, brief (about one minute) surveys at the end of each day helped us gauge where participants were and develop the community of learners. The daily survey solicited brief qualitative responses to items such as “Summarize in a few words the most important point from today” and “What point is still confusing?” The questions provided valuable information yet took only a minute or two to complete (see the sketch after this list).
  • Understand the Participants’ Perspectives. In the summative evaluation of the workshop, mixed methods allowed us to obtain participant ratings and to understand what participants learned through open-ended qualitative questions.
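
Because daily data were available immediately, even a small script can turn a survey export into a same-day debrief. Below is a minimal sketch, assuming responses exported from Qualtrics as a CSV; the file name and column names are hypothetical, so adapt them to your own export.

```python
# A minimal sketch of reviewing a daily one-minute survey export.
# The file name and column names ("clarity_rating", "key_point",
# "muddiest_point") are hypothetical placeholders.
import pandas as pd

def summarize_daily_survey(csv_path: str) -> None:
    responses = pd.read_csv(csv_path)

    # Quantitative side: a quick read on the day's scale item.
    print(f"n = {len(responses)}")
    print(f"Mean clarity rating: {responses['clarity_rating'].mean():.2f}")

    # Qualitative side: open-ended answers, ready for same-day review.
    for column in ("key_point", "muddiest_point"):
        print(f"\n{column}:")
        for answer in responses[column].dropna():
            print(f"  - {answer}")

summarize_daily_survey("day1_responses.csv")
```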

Lessons Learned:

  • With these tools, we were able to model in this workshop a process for developing a deep and practical understanding of assessment for learning. As the program’s leaders share what they learned at their own sites, we are beginning to see site-based teacher learning communities emerge. Each site is using two or three techniques in its own classrooms and then meeting with colleagues monthly to discuss experiences and see what other teachers are doing.
  • The result of this effort is that these teacher learning communities are developing a shared language that enables them to talk to one another about what they are doing.
  • In short, the use of mixed methods allows the team to focus on where the learners are now, where they want to go, and how we can help them get there.

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello evaluators, my name is Sid Ali and I am Principal Consultant at Research and Evaluation Consulting. I do much of my work in education and training settings, and this often takes me into the corporate environment.

I have found that there is great benefit to both the evaluator and the client in using tried and tested multi-step methods for evaluation management, especially if the client organization does not have a culture familiar with evaluation methods and their use. These multi-step methods are often used in public health and human services evaluations, but they can be transferred to the corporate setting with some elbow grease.

Corporate organizations that have primarily used performance measurement to monitor programs require a familiarization with the evaluative process. The US GAO has a nice description of the relationship between evaluation and performance measurement that can help you communicate the distinction to your clients. This familiarization can take many forms, but preparing and distributing a primer is not the approach I would recommend. Here is where the multi-step methods come into play: much of the focus in what I call the “orientation” phase of the evaluation is placed on building relationships with the key players in evaluation management on the corporation’s side. Understanding the historical context of the organization and the program is crucial at this phase as well.

Multi-step methods for evaluation management also help the evaluator and client by establishing an evaluation activity sequence, or road map, that is shared with the organization in the “orientation” phase, with the caveat that there may be changes to the planned route. My experience with multi-step methods is that evaluation activities and results are better understood and become more relevant within the client organization both during and after the evaluation.

Rad Resources:

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I am Robin T. Kelley, and I am an internal evaluator at a national nonprofit health organization funded by the Centers for Disease Control and Prevention to provide free capacity building assistance to HIV prevention organizations, health departments, and their HIV planning groups.

In the HIV/AIDS field, a number of changes are occurring; here are just a few major ones. In 2010, the U.S. National HIV/AIDS Strategy was released, and all funded entities are now striving to align themselves with its major goals. Since 2011, when scientific studies showed that adherence to HIV medicine effectively reduces viral loads, resources have shifted toward biomedical interventions, and the emphasis is now on organizations conducting high-impact HIV prevention.

Lessons Learned:

One key method of building an organization’s ability to manage complex situations, particularly for small organizations that serve vulnerable populations or populations of color, is to strengthen its change management leadership skills. Research has shown that in times of complexity, such as shifting federal and health priorities, organizations and businesses that serve minorities often shut their doors first, leaving underserved communities abandoned and without services. To sustain these agencies, evaluators as well as program managers should be agile and flexible in understanding community needs, resources, and staff strengths and weaknesses in order to best manage the changes.

Hot Tips:

Here are some steps to take and useful tools for addressing changes in the HIV field and changes in general:

1) First, help the organization conduct an organizational diagnosis. It must know what it has in order to consider what to change.

2) Second, help the organization conduct an environmental scan or asset mapping of its community to determine whether there is still a need for its services.

3) Third, help the organization analyze the data. Based on the findings, help it conduct a SWOT analysis (an analysis of its strengths, weaknesses, opportunities, and threats). Depending on these findings, there may be a way to merge efforts with another organization.

4) Next, help the organization communicate changes to all staff; without constant communication, rumors can fly and morale can sink.

5) Finally, help the organization create a process log to record the number of new service requests and activities and to continue to justify its existence (a minimal sketch follows).
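
For step 5, even a spreadsheet or a few lines of code will do; what matters is that every request gets recorded. Here is a minimal sketch in Python, with illustrative field names rather than a prescribed format.

```python
# A minimal sketch of the process log in step 5: a running CSV record
# of new service requests that an agency can tally to document demand.
# The file name and fields are illustrative only.
import csv
import os
from datetime import date

LOG_PATH = "service_request_log.csv"
FIELDS = ["date", "service_requested", "requester_type", "outcome"]

def log_request(service: str, requester_type: str, outcome: str) -> None:
    """Append one service request, writing a header if the log is new."""
    new_file = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "service_requested": service,
            "requester_type": requester_type,
            "outcome": outcome,
        })

log_request("HIV testing referral", "community member", "scheduled")
```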

Rad Resources:

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Ellen Steiner, Director of Market Research and Evaluation at Energy Market Innovations, a research-based consultancy focused on strategic program design and evaluation for the energy efficiency industry. We work to create an energy future that is sustainable for coming generations.

Lessons Learned:

An increasingly common practice…

In energy efficiency program evaluations, telephone surveys are traditionally the mode of choice. However, there are many reasons that evaluators are increasingly interested in pursuing online surveys, including the potential for: (1) lower costs, (2) increased sample sizes, (3) more rapid deployment, and (4) enhanced respondent convenience.

With online surveys, fielding costs are often lower and larger sample sizes can be reached cost-effectively. Larger sample sizes result in greater accuracy and can support increased segmentation of the sample. Online surveys also take less time to be fielded and can be completed at the respondent’s convenience.
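
The accuracy gain from larger samples is easy to quantify for a simple proportion. A worked sketch using the standard margin-of-error formula (not specific to energy evaluations):

```python
# A worked example of the accuracy claim above: the 95% margin of
# error for an estimated proportion shrinks with the square root of
# the sample size, so quadrupling n roughly halves the margin.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion; worst case at p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
# Roughly +/- 9.8%, 4.9%, and 2.5%, respectively.
```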

Yet be aware…

In contrast, there are still many concerns regarding the validity and reliability of online surveys. Potential disadvantages of online surveys include: (1) respondent bias, (2) response rate issues, (3) normative effects, and (4) cognitive effects.

Certain populations are less likely to have Internet access or to respond to an Internet survey, which poses a generalizability threat. Although past research indicates that online response rates are often equal to or slightly higher than those of traditional modes, Internet users are increasingly exposed to online survey solicitations, necessitating that researchers employ creative and effective strategies for garnering participation. In addition, normative and cognitive challenges arise when no trained interviewer is present to clarify and probe, which may lead to less reliable data.

Come talk with us at AEA!

My colleague, Jess Chandler, and I will be presenting a session at the AEA conference titled “Using Online Surveys and Telephone Surveys for a Commercial Energy Efficiency Program Evaluation: A Mode Effects Experiment,” in which we will discuss findings from a recent study comparing online to telephone surveys. We hope you can join us and share your experiences with online surveys!

Hot Tips:

  • Email Address Availability – In our experience, if you do not have email addresses for the majority of the population from which you want to sample, the cost benefits of an Internet sample are cancelled out by the time spent seeking out or trying to purchase email addresses.
  • Mode Effects Pilot Studies – Where possible, a best practice is to conduct a pilot study using a randomized controlled design, in which two or more samples are drawn from the same population and each sample receives the survey in a different mode, to understand the potential limitations of an online survey for the population under study (a minimal sketch follows).
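
One simple way to analyze such a pilot is sketched below with invented data: randomly split one sample frame between modes, then test whether mean responses on a shared item differ.

```python
# A minimal sketch of a mode-effects pilot analysis: one population,
# random assignment to phone or online administration, then a
# comparison of mean responses on a shared item. All data here are
# invented placeholders for real fielded responses.
import random
from scipy import stats

random.seed(42)
population = list(range(500))          # stand-in sample frame
random.shuffle(population)
phone_ids, online_ids = population[:250], population[250:]

# Replace these simulated scores with actual survey responses.
phone_scores = [random.gauss(7.2, 1.5) for _ in phone_ids]
online_scores = [random.gauss(6.8, 1.5) for _ in online_ids]

t_stat, p_value = stats.ttest_ind(phone_scores, online_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p suggests a mode effect
```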

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Kate Rohrbaugh and I am Co-Chair of the Business, Leadership, and Performance TIG along with Michelle Baron.  I’m a Research Team Leader at a consulting firm in Virginia leading a group studying capital project organizations and teams in the process industries.  Today I’d like to talk about the renaming of our TIG and the tools we used to conduct this work.

When I accepted my current position five years ago, I had to rethink my AEA TIG membership because I had been a faithful member of educationally related TIGs, which were no longer relevant.  The number of TIGs at AEA can be overwhelming at times, but it also offers a wide variety of “homes” to evaluators regardless of the content area.  In my new position I turned to the Business and Industry TIG where I found a small but dedicated group of professionals.  I “lurked” with this group for a year, and within a short time (since it was a smaller group), I was able to take an active role in the leadership of this TIG.

In discussions with the leadership of the TIG and at AEA, we determined that the name of the TIG was unnecessarily limiting both presenters and audience – evaluation issues in for-profit organizations are relevant to a wide variety of evaluation professionals in both the private and public sectors. For this reason, we canvassed the membership and, working closely with the AEA staff and board, identified a new name for our TIG.

Rad Resources:

  • AEA maintains a list of members in each TIG and faithfully protects AEA membership from unnecessary contact, but this list was a great resource for contacting our membership about the desire to change the TIG’s name and for soliciting renaming ideas.
  • To canvass our membership, we turned to old faithful SurveyMonkey, which met our simple needs for collection and analysis.
  • To discuss the results with the TIG leadership located across and outside the United States, we turned to FreeConferenceCall.com, which is exactly what you think it is.

We are excited about AEA 2012 in Minneapolis and hope to see lots of new faces at our presentations and business meeting!

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings from Kentucky, the land of national basketball championships. My name is Vanessa Prier Jackson, and I am a professor of retailing in the School of Human Environmental Sciences, Department of Merchandising, Apparel and Textiles, at the University of Kentucky.

For the last three years I have become very interested in qualitative research on small business development and sustainability. I find it fascinating to go out into the community, meet with small business owners, and find out firsthand what really influences their business resiliency. Through readings and research I have identified training, reciprocity between businesses and communities, and social responsibility as important contributors to resiliency for small businesses in rural communities.

Because qualitative research is new to me, I pondered how to proceed. I experienced many stumbling blocks as I walked through the process of collecting data on social responsibility. As we began to process the data, I wondered to myself, “What does this data offer to the literature, and how can I use it to continue my research?” It was a humbling experience.

Lessons Learned:

  • I learned that qualitative research requires many steps to make sure your data is useful to the researcher and to the community you seek to serve. According to Carolyn Nicholls (2011), “When a piece of research is undertaken, there are many factors that need to be considered en route to determining which method or methods will most suitably reveal the information or experience.”
  • Interest in a topic can lead to the collection of a great deal of information, but what matters is how you are able to use what you collect. The question to address is, “Will the information advance my research, and will it be useful?” It is through careful evaluation of qualitative methodology that this can be assured.

Hot Tips:

  • Consider the importance of your research project and what it contributes to the literature. Will your methodology allow the information collected to be useful? Review your research ideas with experts in the area.
  • Seek out resources that provide valid secondary data. Library stacks can offer previous research and information that may not be available through interviews and focus groups. Evaluate source locations for the potential quantity of information available.
  • Content analysis may provide evidence for a specific topical focus of interviews and focus groups. It may also reveal the need to change the focus of a study and how that study should be conducted (see the sketch after this list).
  • Consider major macro- and micro-environmental factors that may influence the data collected.
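
As a concrete starting point for the content-analysis tip, here is a minimal keyword-counting sketch. The categories echo the themes named above, but the term lists are invented for illustration; real coding schemes should come from the data and literature.

```python
# A minimal sketch of one entry point to content analysis: counting
# how often coded category terms appear across interview transcripts.
# The categories and term lists are illustrative only.
import re
from collections import Counter

CATEGORIES = {
    "training": ["training", "workshop", "mentor"],
    "reciprocity": ["community", "partnership", "local"],
    "social responsibility": ["responsibility", "giving", "volunteer"],
}

def count_categories(transcripts):
    counts = Counter()
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        for category, terms in CATEGORIES.items():
            counts[category] += sum(words.count(t) for t in terms)
    return counts

sample = ["Our training budget shapes how we volunteer in the community."]
print(count_categories(sample))
```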

Resources:

Carolyn Nicholls (2011). The Advantages of Using Qualitative Research Methods.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We’re Judy Savageau and Linda Cabral from the Center for Health Policy and Research at the University of Massachusetts Medical School. Recently, we developed a journal club that convenes interested evaluators quarterly to discuss selected articles on evaluation research principles or methodologies. The journal club also networks parties from across our medical school’s multiple departments, centers, and campuses that share an interest in evaluation. For each one-hour session, a facilitator guides the discussion based on “questions to consider,” which are developed and distributed with the selected readings ahead of time. Some examples of the articles we’ve discussed include:

  • Skolits GJ, Morrow JA and Burr EM. Reconceptualizing Evaluator Roles. American Journal of Evaluation 2009;30(3):275-295.
  • Smith NL. An Analysis of Ethical Challenges in Evaluation. American Journal of Evaluation 2002;23(2):199-206.
  • Morris M. The Good, the Bad and the Evaluator: 25 Years of AJE Ethics. American Journal of Evaluation 2011;32(1):134-151.
  • Cohen DJ and Crabtree BF. Evaluative Criteria for Qualitative Research in Health Care: Controversies and Recommendations. Annals of Family Medicine 2008;6(4):331-339.

Skolits, Morrow, and Burr’s article, one of AJE’s most popular articles of 2010, generated some interesting discussion about the different roles that journal club members have assumed as evaluators, along with the benefits and challenges those roles entailed. We ask participants to consider the relevance and applicability of the selected reading to their work, then end each journal club with a short debriefing session, taking suggestions for articles and recruiting facilitators for future meetings.

Hot Tips:

  • When coordinating with people from different sites, schedule the journal club toward the end of the work day so that participants don’t have to return to their office.
  • Alternate the location of the journal club meetings among participants’ sites/campuses to share the travel burden as well as be able to visit new sites and meet potential new collaborators.
  • Provide light refreshments to maintain an informal atmosphere for lively discussion.
  • Keep the group size fairly small (10-12 people) to ensure active participation among group members.

Rad Resource:

  • Use Doodle (http://doodle.com) to identify the best dates/times for people to meet.

Lesson Learned: While there can be challenges in bringing together individuals from different sites, these are balanced by the benefits of getting to know the work of others and learning about the methodologies and strategies they’ve used with varying projects, clients, stakeholders and funding sources. It can also offer opportunities for identifying new evaluation projects to work on together.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello. I’m John Wedman, an administrator at the University of Missouri and former business owner. One of our business services was evaluating corporate training programs. Not surprisingly, some programs fell short of producing the desired results. Was the training lacking or was something else going on?

Answering that question led us to develop the Performance Pyramid, a framework and process for identifying, analyzing, and addressing the barriers to optimal performance in the workplace. The Pyramid is particularly helpful when conducting Level 4 evaluations where the focus is on results – the effects on the organization resulting from trainee performance.

The Performance Pyramid is based on the premise that, in order to accomplish something of significance, Vision, Resources, and Support System must be in place and aligned. As shown in the model, the organizational culture influences the support system. When the three major elements and the six support system blocks are adequate and aligned, and the change process is managed and monitored, goals can be achieved.

Hot Tips:

When a Level 4 evaluation indicates trainee performance is not having the desired effects, the Pyramid can guide efforts to find out why. Here are a few tips to help with the search; a small diagnostic sketch follows the list. While the tips use a “new distance learning software platform” as an example, they can be applied to any performance improvement context.

  • The relative position of the Pyramid’s six Support System blocks is arbitrary; no one block is necessarily more important than another. For example, Knowledge & Skills related to using a new distance learning software platform are NOT more important than the Tools, Environment & Processes associated with the platform.
  • Each of the six blocks must be in place in order to achieve the desired outcomes. For example, if some instructors are not told that they are using the new platform in a way that is inconsistent with the organization’s adopted instructional approach, Expectations & Feedback are not adequate.
  • The inter-relationship among the blocks is critical. For example, if a training program focuses on learning the new platform, but the new platform is more time and labor intensive than the old platform, there is a misalignment between Knowledge & Skills and Rewards, Recognition & Incentives.
  • The support system blocks are individually and collectively influenced by the organizational culture. To illustrate, if the organization highly values instructor autonomy, the culture will hinder efforts to create a more predictable distance learning experience across all courses. We all know the adage – “Culture eats strategy for breakfast every day.”
  • Finally, if a Level 4 evaluation indicates the effects on the organization are not as desired, it is quite likely Expectations have not been effectively communicated, and Feedback is sporadic and uninformative.
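
One lightweight way to apply these tips during an evaluation, sketched below: score each support system block and flag the weak ones as places to dig. The ratings and threshold are hypothetical, and only the four blocks named in this post are listed; consult the Pyramid Guide for the full set.

```python
# One way to make a Pyramid review concrete: rate each support system
# block (1-5) and flag anything below a threshold for follow-up.
# Block names come from the post; ratings are hypothetical.
support_blocks = {
    "Expectations & Feedback": 2,
    "Tools, Environment & Processes": 4,
    "Rewards, Recognition & Incentives": 2,
    "Knowledge & Skills": 5,
    # ...add the Pyramid's remaining support system blocks here.
}

THRESHOLD = 3
gaps = [block for block, rating in support_blocks.items() if rating < THRESHOLD]
print("Blocks needing attention:", gaps or "none")
```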

For more information about the Performance Pyramid and a free copy of the Pyramid Guide, please visit here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Elizabeth Rupprecht, an Industrial Organizational Psychology graduate student at Saint Louis University. I would like to tell you more about a great resource for collecting national or international evaluation data: Amazon’s Mechanical Turk (mTurk). mTurk is normally used to provide organizations with assistance completing tasks. Typically, an organization will set up a “task,” such as transcribing one minute of audio. Then, mTurk posts this task for any interested mTurk “workers” to complete. After the organization reviews the work done, the worker is paid between one cent and a dollar, depending on the complexity and length of the task. In a recent article, researchers noted that mTurk provides I/O psychologists with a large and diverse sample of working adults from across the country for research on topics such as crowdsourcing, decision-making, and leadership (Buhrmester et al., 2011). mTurk could also be useful for evaluations needing sizable and diverse samples. For example, in the case of policy analysis, mTurk could be used to read the pulse of American voters on specific governmental policies. For consumer-oriented evaluation, mTurk could help researchers obtain a convenient, diverse, and large sample of consumers to assess products or services.

Rad Resource: Even though mTurk may seem too good to be true, research published in Judgment and Decision Making has found that the range of participants on mTurk is representative of the US population of Internet users. In addition, 70-80% of users are from the US (Paolacci et al., 2010).

Cool Trick: mTurk has its own survey tools, but it also allows you to add a link to an external assessment tool, which increases speed and allows for advanced functionality, such as the ability to export directly into third-party statistics programs (SPSS, SAS, Excel, etc.).
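
For readers who script their mTurk tasks, here is a hedged sketch of posting a HIT that frames an external survey. It uses the boto3 MTurk client rather than the web interface described here, and the title, reward, and survey URL are placeholders.

```python
# A minimal sketch of the external-link trick: the HIT is just a
# frame around your own survey URL. All parameters are placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short opinion survey (about 5 minutes)",
    Description="Complete a brief survey hosted on an external site.",
    Reward="0.50",                      # dollars, as a string
    MaxAssignments=200,                 # number of workers
    LifetimeInSeconds=7 * 24 * 3600,    # how long the HIT stays listed
    AssignmentDurationInSeconds=1800,   # time allowed per worker
    Question=external_question,
)
print("HIT created:", hit["HIT"]["HITId"])
```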

Hot Tip: As my colleague Lacie Barber discussed in her aea365 contribution, implementing quality control checks in surveys can help improve data quality. In my experience using mTurk, specifying your target population is necessary both in the mTurk advertisement/recruitment statement for the “workers” and in the actual survey. Weeding out participants who overlook your specifications in the advertisement is vital! If the “workers” do not follow your specifications or do not complete their “task” (i.e., your survey), you do not need to pay them.

Only time will tell if mTurk becomes a highly used engine for social science and evaluation research, but at this moment, it seems like the hot new type of convenience sample!

Buhrmester, M., et al. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3-5.

Paolacci, G., et al. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.

The American Evaluation Association is celebrating Society for Industrial & Organizational Psychology (SIOP) Week with our SIOP colleagues. The contributions to aea365 all this week come from SIOP members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting SIOP resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

My name is Stephen Axelrad. I am a Human Capital consultant at Booz Allen Hamilton. For almost a decade, in both my doctoral training and professional life, I have had one foot in the evaluation world and one foot in the industrial-organizational psychology (I-O) world. Both disciplines use behavioral and social science research to solve real-world problems, improve organizational effectiveness, and enhance quality of life. I am struck by how little collaboration, or even mutual awareness, there is between the I-O and evaluation disciplines. Each has so much to offer the other.

Hot Tip: Evaluating HR initiatives. If you want to make an HR practitioner squirm, ask them to demonstrate the merit or worth of their programs and services. Sophisticated HR professionals could provide dashboards and balanced scorecards linking their initiatives with bottom-line indicators of mission impact or profits. Most of the time, though, the only thing you see that resembles evaluation is training evaluation following the Kirkpatrick Level I-IV framework. Most I-O interventions lack a robust component for evaluating the effectiveness of their implementation. When senior leaders ask “So what?” to justify investments of time and resources, I-O professionals are not doing enough to help HR professionals answer that question. Evaluators can play a key role in helping I-O professionals design and conduct independent, practical, useful, and rigorous evaluations to accompany organizational change and improvement interventions.

Hot Tip: Making individual-level evaluation more robust. The evaluation field spends a lot of time at the organizational and programmatic levels. Many evaluations that assess the effectiveness of programs, policies, and initiatives at the individual level rely heavily on self-reported, subjective measures (e.g., attitude surveys and focus groups). Evaluators can expand their individual-level toolkits to include I-O psychological methods and obtain a more objective and comprehensive perspective on how individual actions contribute to program and organizational effectiveness. Competency models identify the underlying set of knowledge, skills, abilities, and attitudes that organizations can use to understand what makes an effective or talented employee; competencies are a missing element in many logic models. Another missing piece in many evaluation efforts is individual-level performance measures. Many I-O professionals utilize multi-source feedback (MSF), that is, performance feedback from supervisors, peers, direct reports, and customers/constituents, to understand how well employees are executing desired behaviors toward key stakeholders within and outside their organizations. Using MSF in evaluations can provide insight into the quality of relationships among members of a given organization or system and pinpoint best and worst practices (a minimal sketch follows).
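
Here is a minimal sketch of summarizing MSF data: mean competency ratings by rater group, the kind of individual-level measure suggested above. The competency, groups, and ratings are all invented for illustration.

```python
# A minimal sketch of aggregating multi-source feedback (MSF):
# mean competency ratings by rater group. Data are invented.
from statistics import mean

ratings = {  # competency -> rater group -> ratings on a 1-5 scale
    "communication": {
        "supervisor": [4],
        "peers": [3, 4, 5],
        "direct_reports": [2, 3],
    },
}

for competency, groups in ratings.items():
    print(competency)
    for group, scores in groups.items():
        print(f"  {group}: mean {mean(scores):.1f} (n={len(scores)})")
```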

Rad Resource: I encourage you to read the book that got me started thinking about this topic.

Darlene Russ-Eft & Hallie Preskill, Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change.

The American Evaluation Association is celebrating Society for Industrial & Organizational Psychology (SIOP) Week with our SIOP colleagues. The contributions to aea365 all this week come from SIOP members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting SIOP resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.
