AEA365 | A Tip-a-Day by and for Evaluators


Hello! I am Carey Tisdal, Director of Tisdal Consulting in St. Louis, Missouri. I work with people who develop informal learning experiences for museum exhibitions, museum programs, documentary films, and media-based projects. Many of my projects include websites as one element of a learning system. I used the Building Informal Science Education (BISE) project as an opportunity to develop a framework to focus studies involving websites. This experience helped me improve my own practice by analyzing other evaluators’ work as well as connecting to key concepts in the website evaluation literature. I hope you find it useful, too!

I developed my website evaluation framework by analyzing 22 reports from the BISE database that were coded as “website” evaluands (i.e., the entity being evaluated). The overarching method I used to analyze the reports was Glaser and Strauss’s grounded theory. I then connected concepts in the resulting program theory to the literature on website evaluation. The website evaluation framework that emerged uses high-level program theory to guide the identification of focus areas and questions to structure website evaluations. As illustrated in the graphic below, I organized seven of the major areas of consideration as a set of sequential, necessary steps influencing User Impacts and System Effectiveness. Read my whitepaper, “Websites: A guiding framework for focusing website evaluations,” to learn more!

[Graphic: Tisdal’s website evaluation framework, showing seven sequential focus areas leading to User Impacts and System Effectiveness]

Lessons Learned:

  • Some of the evaluations I reviewed focused on appeal (content, visuals, or forms of engagement), which is certainly an important aspect of website evaluation. Yet, in connecting the focus areas, I realized that without testing usability as well as appeal, it is not possible to draw strong conclusions about how audience impact is or is not accomplished.
  • Evaluating the system effectiveness of a website is essential in multiplatform projects. Awareness and access play important roles in whether or not users of other parts of an informal education system (e.g. an exhibition, program, or film) even get to the website, or, in turn, if website viewers see a film or attend an exhibition.
  • In my own work, I’ve found that this website framework helps project teams and website designers to clarify what they really need to know.

Rad Resources:

  • The U.S. Department of Health and Human Services offers an amazing set of resources to get you started in usability testing for websites. This site has been updated since I did my research and is now even better!
  • The BISE database and the informalscience.org website provide access to a wide range of evaluation reports. When I need to see how colleagues have approached evaluation designs, they are my first stops!

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Hello, my name is Michael Lambur.  I am currently the Associate Director for Program Development with Virginia Cooperative Extension.  I have been an Extension evaluator since 1985 and recently spent five years as the Evaluation and Research Leader with the eXtension initiative.

Website usability testing measures the suitability of a website for its users: the effectiveness, efficiency, and satisfaction with which users can achieve specified tasks while using the site.  Basically, a user is given a set of tasks to complete while being videotaped thinking aloud about the experience.  In my time with eXtension, we conducted two usability tests of the initiative’s public website.  The first was done face-to-face by a contractor.  It was well done, informative, and rather expensive.  For the more recent test, we used an online usability testing service.  Again, it was well done and informative, but this time amazingly inexpensive and very timely.

Lessons Learned:

Online usability testing services do work.  We achieved essentially the same results using the online service versus face-to-face at a fraction of the cost.  Our face-to-face usability testing cost $12,000 using 12 participants.  Our online usability testing cost $280 using eight participants.  In addition, the online service provided trained testers based on demographics we provided.  And we received a video of the results in about an hour.
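
In per-participant terms, that works out to $1,000 per person for the face-to-face test ($12,000 ÷ 12) versus $35 per person online ($280 ÷ 8), nearly a 30-fold difference.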

The key to usability testing is a set of tasks that reflects the purpose of the site.  You need to develop a set of very specific tasks the user can move through that truly reflects how you want them to understand and use the site.  For the online service, the tasks needed to be completed in 15 minutes, whereas the face-to-face lasted about 45 minutes.  The quality of feedback received from the online testing was excellent—you can achieve a lot in 15 minutes.

Usability test results can be brutal.  Be prepared to have your bubble burst when viewing the video of people using your website.  What we intend in developing a site and what people experience in using it can be very different.  While it typically isn’t all bad, the results are often eye-opening.  Keep in mind that the end result is an improved website that best serves your users.

Rad Resources:

The two online services we used were:

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Christine Paulsen and I own Concord Evaluation Group.  We evaluate media and technology-based initiatives. We regularly integrate usability testing with program evaluation to provide our clients with a more comprehensive picture of how their technologies and initiatives are performing.

As evaluators, you know that many of the programs we evaluate today are technology-based. It is not uncommon for initiatives to provide information to their target audiences via websites, while other interventions are delivered through software applications on mobile, handheld, or other devices. To properly evaluate such initiatives, the evaluator must consider the usability (user-friendliness and accessibility) of the technology components.

Usability refers to how easily users can learn and use a technology.  It stands to reason that if a drop-out prevention program relies mostly on messages delivered via its website to change student behaviors, that website had better be usable!  So, as evaluators, it’s crucial that we include usability assessment in our evaluations.  Usability testing (UT) methods enable us not only to gather important formative data on technological tools but also to explain outcomes and impact during summative evaluation.

Hot Tip: Keep in mind that problems addressed early are much less expensive to fix than problems found later.

The typical UT is conducted one-on-one, with a researcher guiding the session.  Participants are given a list of tasks, which they complete while thinking aloud.  The researcher records both subjective comments and objective data (errors, time on task).  The test plan documents the methods and procedures, the metrics to be captured, the number and type of participants, and the scenarios to be used.  In developing UT test plans, evaluators should work closely with the client or technology developer to create a list of the top tasks users typically undertake when using the technology.
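
To make the objective side of that data collection concrete, here is a minimal sketch in Python of how one participant’s task results might be recorded and summarized. The task names, field layout, and metrics here are illustrative assumptions, not the output of any particular testing tool.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    """Objective data for one task attempt (hypothetical field layout)."""
    task: str                  # e.g., "Find the contact page"
    completed: bool            # did the participant finish the task?
    seconds: float             # time on task
    errors: int                # wrong clicks, dead ends, etc.
    comments: list = field(default_factory=list)  # think-aloud notes

def summarize(results: list[TaskResult]) -> dict:
    """Roll one participant's task results up into simple usability metrics."""
    n = len(results)
    return {
        "success_rate": sum(r.completed for r in results) / n,
        "mean_seconds": sum(r.seconds for r in results) / n,
        "total_errors": sum(r.errors for r in results),
    }

session = [
    TaskResult("Find the contact page", True, 42.0, 0, ["That was easy"]),
    TaskResult("Download the annual report", False, 180.0, 3,
               ["I expected this under 'About Us'"]),
]
print(summarize(session))
# {'success_rate': 0.5, 'mean_seconds': 111.0, 'total_errors': 3}
```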

Hot Tip: Did you know that UT can be conducted in person or remotely (online)? While in-person testing offers a chance to observe non-verbal cues, remote testing is more affordable and offers the chance to observe test participants in a more “authentic” environment, anywhere in the world.

Hot Tip: During formative testing, 6-8 users per homogeneous subgroup will typically uncover most usability problems.  A larger sample is needed if inferential statistics are required.
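
Sample-size guidance like this traces back to the problem-discovery model often cited in the usability literature (Nielsen and Landauer): if a problem affects a proportion p of users, the chance that at least one of n participants encounters it is 1 - (1 - p)^n. Here is a quick sketch in Python, assuming the commonly quoted average detection rate of p = 0.31; your own per-problem rates will vary.

```python
# Chance that at least one of n participants encounters a problem
# that affects a proportion p of users: 1 - (1 - p)**n.
def discovery_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With the commonly quoted (assumed) average detection rate p = 0.31:
for n in (3, 5, 8):
    print(n, round(discovery_probability(0.31, n), 2))
# 3 0.67
# 5 0.84
# 8 0.95
```

At eight users the model predicts that roughly 95% of such problems will surface at least once, which is why formative work favors several small rounds of testing over one large one.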

Rad Resource: For a great overview of usability testing, including templates and sample documents, visit Usability.gov.

Rad Resource: For a demonstration of how to integrate UT into your evaluation toolbox, please stop by to see my presentation at AEA 2011.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Christine? She’ll be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

Greetings. I’m Joi Moore, Associate Professor in the School of Information Science & Learning Technologies at the University of Missouri.  I also serve as a faculty member for the Information Experience Lab. Teaching an online application development course, such as one in Flash animation, can easily become overwhelming for students and instructors alike.  Students must acquire the core authoring techniques while determining whether their animation projects are easy to use.  Learning occurs through constant trial and error, along with feedback from peers and the instructor. Developing a functioning project that meets the requirements is meaningless if users have difficulty with its interactions.

Hot Tip: By following a systematic evaluation process, students can obtain timely feedback throughout the development lifecycle of their animation project.  The following activities outline the overall process and the value for each activity.

1. Design Plan and Storyboard Review

During this activity, students obtain feedback on a visual prototype of their project.  They can make changes to the design based on feedback from peers and the instructor.  The result is a design plan and storyboard that can be used as a development guide for the project.

2. Version One Peer Evaluations

After students include all of the project requirements, they present their project to class peers for evaluation.  Students post a link to the discussion board, and peers use a usability heuristic checklist to provide feedback (a sketch of one such checklist appears after step 5).  The benefits of this activity are twofold: 1) the feedback improves the project, and 2) students gain ideas from viewing other projects.

3. Version One Revisions

After all of the peer evaluations and instructor feedback, students can make changes to their project.  These revisions eliminate the common usability issues their peers discovered.  The goal is to remove simple issues and errors that could give the target audience a negative attitude toward the project.

4. Usability Testing

Students perform usability testing with three members of the target audience.  All testing is performed with the same version of the animation project, which allows students to compare results.  This phase of testing is very valuable because the target audience might reveal issues that were not discovered during the peer evaluations.  Although students have gained experience in noticing usability and interaction issues, evaluators can easily miss interaction issues when they approach a project from a different perspective than the target audience does.

5. Final Version Revisions

After all of the usability testing, students can make final changes to the project before submission for a grade. The final product represents several iterations of evaluation that inform the student about interaction design.
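
As noted in step 2, peers work from a usability heuristic checklist. Here is a minimal sketch in Python of what such a checklist could look like, assuming Nielsen’s ten usability heuristics as its basis; the checklist actually used in a given course may differ.

```python
# A minimal peer-review form, assuming Nielsen's ten usability
# heuristics as the basis; an actual course checklist may differ.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def blank_review(project_url: str) -> dict:
    """One peer's review: each heuristic gets a pass/fail flag and a note."""
    return {
        "project": project_url,
        "items": {h: {"ok": None, "note": ""} for h in HEURISTICS},
    }

# Hypothetical usage: a peer flags one heuristic on a classmate's project.
review = blank_review("https://example.edu/animations/project1")
review["items"]["Error prevention"] = {"ok": False,
                                       "note": "Replay button is easy to miss"}
```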

By following the systematic evaluation process, students gain informal evaluation learning experiences.  In addition, the process assists the instructor with creating an online learning community of support and feedback.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
