CAT | Distance Education and Other Educational Technologies
Hi. My name is A. Rae Clementz and in addition to being the co-chair of the Graduate Student and New Evaluator TIG, I am also a techie. I believe technology is of value when it helps us accomplish our goals in ways that are better, easier, and/or cheaper. I have evaluated several educational technology integration programs. Consistently one of the biggest barriers to successful implementation is teachers’ perceptions of the tool’s cost-benefit ratio. If the cost is too high, it’s a non-starter; the program or cool new toy will never fit in their school’s stretched budget. Even if the tool is free, if it’s too hard to use or doesn’t add some new or improved dimension to student learning, it’s not worth the effort.
I often feel similar time and budget constraints in my evaluations. Below are some cheap, efficient, and effective tools for two common evaluation tasks.
Rad Resource for conducting & recording interviews:
- Google Voice | I’m one of those people who only has a cell phone. To avoid burning minutes during the day, I make my calls with Google Voice, which uses the internet connection on either your computer or cell phone to place calls. Bonus feature: incoming calls can be recorded, and Google Voice automatically creates a transcript and an .mp3 recording of the call in your Google Voice inbox!
- Skype + Evaer or Pretty May | Skype is one of the most common video and voice conferencing tools, and its basic service is free. Evaer and Pretty May are programs that record Skype’s voice and video feeds and save them as .mp3 or .wav files. Pretty May is free, as is the basic version of Evaer; the full version of Evaer is $20 with lifetime support and upgrades.
It is critical when recording anything that you inform everyone that you’re recording the call, for what purposes, and ask them if they agree to be recorded. Many states have laws prohibiting unauthorized recording of phone conversations.
Rad Resource for disseminating evaluation findings:
- Weebly + Scribd | Weebly is a simple, free, drag-and-drop, web-based, website design program. If you can use e-mail and PowerPoint, you can create a website using Weebly. Scribd is a free online publishing site. You can upload documents and either direct people to them on Scribd or embed them in websites or other social media sites.
Lesson Learned 1: Sadly, just because you built it doesn’t mean they’ll come. But having a website for the evaluation is still a good way to provide transparency, encourage comment from stakeholders, and disseminate findings to broader audiences. The process of building the site also promotes more organized communication about the evaluation.
This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to email@example.com. Want to learn more from Rae? She’ll be presenting as part of the Evaluation 2012 Conference Program, October 24-27 in Minneapolis, MN.
Posted by Susan Kistler in Assessment in Higher Education, Distance Education and Other Educational Technologies
My name is Vanora Mitchell and I am a professional independent evaluator working in Washington, DC.
How do we evaluate online learning? About a year ago, a school that I volunteer with asked me to help them evaluate their new online learning initiatives. They were in the process of developing online courses and wanted to know how to identify success and whether the evaluation process differed from that of a traditional classroom. Here are three resources I found particularly useful as I did my background research:
Rad Resource: Evaluating Online Learning: Challenges and Strategies for Success: Written in 2008 by WestEd for the U.S. Department of Education, this 80-page report was my go-to guide. It had concrete examples, came from a reliable source, and was research-based.
Rad Resource: E-Learning Concepts and Techniques: This online collaborative ebook was developed in 2006 by a class at Bloomsburg University of Pennsylvania’s Department of Instructional Technology. The entire book is useful and Chapter 9 is devoted to E-Learning Evaluation.
Rad Resource: eLearn Magazine: Focusing mostly on the online classroom context, this free, web-based magazine is full of articles that helped me understand more about electronic learning. I read a number of background articles, but here are a few that were directly related to evaluation:
- Online Learning Indicators
- Tutorial: Effectively Evaluating Online Learning Programs
- Best Practices: Measuring Success: Raid on Deerfield Revisited
- And here is a link where you can readily search and find all of their articles tagged “Program Evaluation”
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Posted by dgrodzicki in Distance Education and Other Educational Technologies, Mixed Methods Evaluation, Nonprofits and Foundations Evaluation, Organizational Learning and Evaluation Capacity Building, Prek-12 Educational Evaluation
Hello! We are Corinne Singleton, Linda Shear, and Savitha Moorthy. We work at the Center for Technology in Learning (CTL) at SRI International, a nonprofit research organization in Menlo Park, California. Through Innovative Teaching and Learning (ITL) Research, sponsored by Microsoft, we are examining innovative teaching practices across the globe, including the factors that support those practices and their impact on student learning.
The research uses a distributed mixed-methods design to balance the simultaneous needs for global comparability and local relevance. As the global research team, we are responsible for shaping the research design, creating universal instruments, coordinating the research across countries, and synthesizing results. We also work with a network of national partners who carry out the research within each participating country and adapt it as necessary to their local contexts.
One key challenge for ITL Research is to define traditionally ambiguous concepts like “innovative teaching practices” and “21st century skills” in concrete ways, and then to measure those constructs objectively in ways that can be compared across vastly different contexts.
Hot Tip: We use analysis of Learning Activities and Student Work (LASW) to define and measure innovative teaching and learning of 21st century competencies. Learning activities (the assignments that students are asked to do) and student work (the work they submit in response) serve as objective artifacts of actual classroom practice. Careful analysis of these artifacts serves many purposes:
- Provides insight into what’s really happening in the classroom
- Provides specific metrics for measuring instructional change and learning outcomes
- Enables researchers to draw conclusions about teaching practices and the learning opportunities they afford for students
- Gives teachers a common language to discuss teaching practice
The LASW analysis relies on rubrics that elaborate core dimensions of 21st century skills to measure the extent to which learning activities call for, and student work demonstrates, innovative teaching and learning. The global team articulates construct definitions internationally, while individual country teams analyze LASW samples locally, allowing for a deeply contextualized understanding of each artifact.
In each participating country, evaluators recruit and train experienced teachers to serve as coders, who use the rubrics to score samples of learning activities and student work along each dimension. Taken together, these scores represent the extent to which learning activities and student work products in a particular country demonstrate characteristics of innovative teaching and learning.
To learn more about LASW, visit the ITL Research website at www.itlresearch.com.
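The coder-level scoring described above can be sketched in a few lines. This is a hypothetical illustration only: the dimension names, the 1–4 scale, and the simple averaging are assumptions for the sake of example, not the actual ITL rubrics or aggregation method.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical rubric scores: (coder, artifact, dimension, score on a 1-4 scale).
# Dimension names here are illustrative, not the actual ITL rubric dimensions.
scores = [
    ("coder1", "activity_01", "collaboration", 3),
    ("coder2", "activity_01", "collaboration", 4),
    ("coder1", "activity_01", "knowledge_building", 2),
    ("coder2", "activity_01", "knowledge_building", 2),
    ("coder1", "activity_02", "collaboration", 1),
    ("coder2", "activity_02", "collaboration", 2),
]

def dimension_means(scores):
    """Average all coders' ratings within each rubric dimension,
    yielding one summary score per dimension for a country's sample."""
    by_dim = defaultdict(list)
    for _coder, _artifact, dim, value in scores:
        by_dim[dim].append(value)
    return {dim: mean(vals) for dim, vals in by_dim.items()}

print(dimension_means(scores))
# {'collaboration': 2.5, 'knowledge_building': 2}
```

A real analysis would also check inter-rater agreement between coders before averaging, which this sketch omits.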
Posted by Susan Kistler in Assessment in Higher Education, Distance Education and Other Educational Technologies, Prek-12 Educational Evaluation
My name is Andrea Velasquez, and I am a doctoral student at Brigham Young University. For the last four years, I have been an instructor of an undergraduate class that teaches pre-service teachers how to use technology effectively in elementary and secondary education settings. One of the principal frameworks we use to teach pre-service teachers how to distinguish between the facets of designing effective instruction with technology is TPACK, or Technological Pedagogical Content Knowledge (Mishra & Koehler, 2006). This framework holds that in any effective technology-mediated instruction, technology, pedagogy, and content are three components that not only co-exist but also interact and affect each other. The research examining TPACK can be useful to the field of evaluation when applied to evaluations of technology-mediated instruction. Distinguishing between these three components (technology, instructional content, and instructional strategies) can help evaluators identify appropriate questions and reduce the complexity of evaluating e-learning.
Hot Tip: When designing an evaluation of technology-mediated instruction, after determining context and stakeholders, consider technology, pedagogy, and content as evaluands. Then identify criteria and questions for judging each evaluand. Before continuing the evaluation, also identify criteria and questions that take into account how each component affects the others; these questions should address the compatibility between the components. For example, if an online high school uses video technologies to communicate with students, an evaluation of such a program should take into account the video technologies, the strategies the teacher uses to teach the class (e.g., group work, field experiences, presentations), and the content being taught. Beyond addressing each of the three components individually, the evaluation should address the relationships between them at each stage of the evaluation process. This approach ensures a more holistic evaluation of technology use in relation to the context and the needs of students and stakeholders.
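The structure of this Hot Tip (one question set per evaluand, plus one per pairwise interaction) can be sketched as a small planning scaffold. The example criteria below are invented for the online-high-school scenario above and are purely illustrative, not part of the TPACK framework itself.

```python
from itertools import combinations

# The three TPACK components, treated as separate evaluands.
evaluands = ["technology", "pedagogy", "content"]

def build_evaluation_plan(evaluands):
    """Return a skeleton plan: one question list per evaluand,
    plus one per pairwise interaction (compatibility) between evaluands."""
    plan = {e: [] for e in evaluands}
    for a, b in combinations(evaluands, 2):  # every pair of components interacts
        plan[f"{a} x {b}"] = []
    return plan

plan = build_evaluation_plan(evaluands)

# Hypothetical criteria for the online high school scenario described above.
plan["technology"].append("Do the video tools deliver reliable, accessible streams?")
plan["technology x pedagogy"].append(
    "Do the video tools support the group work the teacher plans?")

# 3 components + 3 pairwise interactions = 6 question sets to fill in.
```

With three evaluands this yields six question sets; the interaction entries are what keep the compatibility questions from being forgotten.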
Rad Resource: This site, maintained by the developers of the TPACK framework, has up-to-date research articles and many other resources for understanding the framework’s practical applications: http://www.tpck.org/tpck/index.php?title=Main_Page
This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Want to learn more from Andrea? She’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio.