Research, Technology and Development Evaluation
My name is Teri Garstka and I am currently a Research Associate in the Institute for Educational Research and Public Service at the University of Kansas. We have a wide portfolio of research and evaluation projects in early childhood, child welfare and youth programs, and family services.
We work with lots of state, local, and community-based agencies. This means we also must work with the many types of Management Information Systems (MIS) used in those agencies to help us evaluate their programs and the families they serve. As we all know, data systems never “talk” to one another and true cross-systems data collection, analysis, and reporting can be quite challenging.
We needed a wonder tool that would:
- Integrate Data!
  - Bring together existing data from disparate agency management information systems into one handy place
  - Link client-level data across systems and data collection methodologies
  - Build the data system to include everything we need and nothing we don't
- Secure Data!
  - Keep data secure during collection, without storing it on a mobile device or PC
  - Reside on a HIPAA-compliant server to transfer Protected Health Information (PHI) from agencies
- Collect Data!
  - Conduct field interviews in natural environments, such as the home or community, to build rapport and get rid of paper-and-pencil surveys
  - Interface with emerging mobile technologies such as iPads and tablet PCs
  - Offer web-access surveys and multiple response formats
- Report Data!
  - Export any of the data in multiple formats for analysis and reporting
  - Calendar multiple data collection timepoints
  - De-identify information as needed
Rad Resource: We found a great tool to help us integrate existing data from an agency's information system with real-time data collection from families in the field. Created by Vanderbilt University with NIH funding, Research Electronic Data Capture (REDCap) is a web-based application that non-programmers can learn and use at no cost if their institution is a consortium member. After a few tutorials, you can build a REDCap database for your project and customize it to fit the unique data system needs of almost any project. We like REDCap because it is versatile and easy to use; it's perfect for social scientists and evaluators looking for a more robust data collection system without having to hire a computer programmer for every project.
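One reason REDCap is so versatile: it exposes a standard web API, so routine exports can be scripted instead of downloaded by hand. Here is a minimal sketch in Python (using the requests library) of pulling project records for analysis. The host URL and token are placeholders you would get from your own REDCap administrator, and the parameters follow REDCap's documented record-export call.

```python
# Minimal sketch: export REDCap project records via the API.
# The URL and token below are placeholders, not real credentials.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"  # your institution's instance
API_TOKEN = "YOUR_PROJECT_API_TOKEN"                # issued per project by an admin

payload = {
    "token": API_TOKEN,
    "content": "record",  # export project records
    "format": "csv",      # "json" and "xml" are also accepted
    "type": "flat",       # one row per record
    "rawOrLabel": "raw",  # raw coded values, handy for analysis
}

response = requests.post(REDCAP_API_URL, data=payload)
response.raise_for_status()

# Save the export for analysis in R, SPSS, Stata, etc.
with open("project_export.csv", "w", encoding="utf-8") as f:
    f.write(response.text)
```

From here the file can go straight into your statistical package for analysis and reporting.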
This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to email@example.com. Want to learn more from Teri? Teri and her colleagues will be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.
Greetings from beautiful Boise! We are Rakesh Mohan and Bryon Welch from the Idaho legislature’s Office of Performance Evaluations.
Last February, the Idaho legislature asked us to evaluate the state’s new system for processing Medicaid claims. Legislators had received many constituent complaints that claims from Medicaid providers were being denied, delayed, or inaccurately processed. Legislators were beginning to question whether the new system would ever perform as intended.
The $106 million system went live in July 2010 and immediately began experiencing problems. At the time of our review, over 23,000 providers were enrolled in the system, which was processing about 150,000 claims each week.
Lessons Learned: Our review found that problems with processing provider claims were the result of unclear contract requirements, a lack of system readiness, and most importantly, the absence of adequate end user participation. Less than one percent of total providers were selected for a pilot test, but neither the state administrators nor the contractor knew how many claims were actually pilot tested. Further, only about 50 percent of the providers were enrolled when the system went live.
Hot Tip: If you are ever asked to evaluate the implementation of a large IT system that is experiencing problems, make sure you examine the end user involvement in the system’s design and implementation. Too often end user feedback is underappreciated, not used, or completely ignored.
Lessons Not Learned: Nearly ten years ago, Idaho attempted to implement a similar IT system to track student information for K-12 public schools. After spending about $24 million, the project was terminated due to undelivered promises and a lack of buy-in from end users. Unfortunately, lessons identified in our evaluation of the failed student information system were apparently not learned by the people responsible for the new Medicaid claims processing system.
Hot Tip: Because the success of an IT system depends on end-user buy-in, ask the following questions when evaluating the implementation of large IT systems (a simple checklist sketch follows the list):
1. Are end users clearly identified?
2. Are end user needs identified and incorporated into system objectives?
3. Do vendors clearly specify how their solutions/products will address system objectives and end user needs?
4. Is there a clear method for two-way communication between system managers and end users with technical expertise?
5. Is there a clear method for regularly updating end users on changes and progress?
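If it helps to track answers consistently across interviews and document reviews, these questions can be captured as a simple checklist. The sketch below (Python) is purely illustrative; the answer scale and field names are our own invention, not part of any standard instrument.

```python
# A hypothetical checklist structure for the five end-user questions.
# The answer scale and field names are illustrative, not a standard instrument.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    question: str
    answer: str = "unknown"  # "yes", "no", "partial", or "unknown"
    evidence: list = field(default_factory=list)

checklist = [
    ChecklistItem("Are end users clearly identified?"),
    ChecklistItem("Are end-user needs identified and incorporated into system objectives?"),
    ChecklistItem("Do vendors specify how their products address objectives and user needs?"),
    ChecklistItem("Is there two-way communication between system managers and end users?"),
    ChecklistItem("Are end users regularly updated on changes and progress?"),
]

# Example: record a finding like the one from the Medicaid review.
checklist[0].answer = "partial"
checklist[0].evidence.append("Less than 1% of providers included in the pilot test")

for item in checklist:
    print(f"[{item.answer:>7}] {item.question}")
```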
The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hello! We are Xin Wang, Neeley Current, and Gary Westergren. We work at the Information Experience Laboratory (IE Lab) of the School of Information Science & Learning Technologies at the University of Missouri. The IE Lab is a usability laboratory that conducts research and evaluates technology. What is usability? According to Jakob Nielsen's definition, usability assesses how easy user interfaces are to use. With the advancement of Web technology over the past eight years, our lab has successfully applied a dozen usability methods to the evaluation of educational and commercial Web applications. The methods we use most frequently include heuristic evaluation, think-aloud interviews, focus-group interviews, task analysis, and Web analytics. Selecting appropriate usability methods is vital and should be based on the development life cycle of a project; otherwise, the evaluation results will not be useful and informative for the Web development team. In this post, we focus on some fundamental concepts of one of the most commonly adopted usability evaluation methods: the think-aloud protocol.
Hot Tip: Use think-aloud interviewing! Think-aloud interviewing engages participants in activities and asks them to verbalize their thoughts as they perform the tasks. This method is usually applied during the mid or final stage of Website or system design.
Hot Tips: Employing the following procedures is ideal (a simple session-logging sketch follows the list):
- Recruit real or representative users, in keeping with user-centered design principles
- Select tasks based on frequency of use, criticality, new features, user complaints, etc.
- Schedule users for a specific time and location
- Have users operate a computer accompanied by the interviewer
- Ask users to give a running commentary (e.g., what they are clicking on, what kind of difficulty they encounter to complete the task)
- Have the interviewer probe the user about the tasks they are asked to perform.
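To make the running commentary easier to analyze afterward, it helps to timestamp observations against the task being performed. Here is a minimal sketch of a note logger an observer might run alongside the session (Python; the task names and CSV layout are hypothetical, not part of any standard think-aloud toolkit):

```python
# Minimal timestamped note logger for a think-aloud session.
# Task names and the CSV layout below are illustrative assumptions.
import csv
from datetime import datetime

TASKS = ["Find the course catalog", "Register for a workshop"]  # hypothetical tasks

with open("session_notes.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "task", "observation"])
    for task in TASKS:
        print(f"\nTask: {task} (enter a blank line to move to the next task)")
        while True:
            note = input("observation> ").strip()
            if not note:
                break
            writer.writerow([datetime.now().isoformat(timespec="seconds"), task, note])
```

Each row can later be matched to a screen recording, so a quote like "I can't find where to click" stays tied to the task and moment it occurred.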
Lessons Learned: Think-aloud interviewing has several strengths:
- When users verbalize their thoughts, evaluators can identify important design issues that cause user difficulties, such as poor navigation design, ambiguous terminology, and unfriendly visual presentation.
- Evaluators capture users' concurrent thoughts rather than just retrospective ones, avoiding situations where users cannot recall their experiences.
- The protocol gives evaluators a glimpse into the affective side (e.g., excitement, frustration, disappointment) of the users' information-seeking process.
It also has limitations:
- Some users are not used to verbalizing their thoughts while performing a task.
- If the information is non-verbal and complicated to express, the protocol may break down.
- Some users cannot verbalize all of their thoughts, likely because verbalization cannot keep pace with their cognitive processes, making it difficult for evaluators to understand what the users really meant.