AEA365 | A Tip-a-Day by and for Evaluators


I’m Andrew Hayman, Research Analyst for Hezel Associates. I’m Project Leader for Southern Illinois University Edwardsville’s National Science Foundation (NSF) Innovative Technology Experiences for Students and Teachers (ITEST) program, Digital East St. Louis.

The ITEST program was established in 2003 to address shortages of technology workers in the United States, supporting projects that “advance understanding of how to foster increased levels of interest and readiness among students for occupations in STEM.” The recent revision of the ITEST solicitation incorporates components of the Common Guidelines for Education Research and Development to clarify expectations for research plans, relating two types of projects to that framework:

  • Strategies projects are for new learning models, and research plans should align with Early-Stage, Exploratory, or Design and Development studies.
  • Successful Project Expansion and Dissemination (SPrEaD) projects should have documented successful outcomes from an intervention requiring further examination and broader implementation, lending SPrEaD projects to Design and Development or Impact studies.

Integration of the Common Guidelines into the NSF agenda presents opportunities for evaluators with research experience because grantees may not possess internal capacities to fulfill research expectations. Our role in a current ITEST Strategies project includes both research and evaluation responsibilities designed to build our partner institution’s research capacity. To accomplish this, our research responsibilities are significant in Year 1 of the grant, including on-site data collections, but decrease annually until the final grant year, when we serve as a research “critical friend” to the grantee.

I presented at a recent ITEST conference about our role in research and evaluation activities for an audience primarily of evaluators. As expected, some questioned whether we can serve in dual roles effectively, while others, including NSF program officers, were supportive of the model. Differences of opinion about research responsibilities among ITEST stakeholders suggest it may take time for evaluators to carve out a significant research role in ITEST. However, NSF’s commitment to rigorous research as framed by the Common Guidelines, coupled with the limited research capacity of some institutions, suggests possibilities for partnerships.

Lesson Learned:

  • Define research responsibilities clearly for both the institution and the evaluators. Separation of research and evaluation activities is critical, with separate study protocols, instruments, and reports mapped out for the entire project. A third party may be required to evaluate the research partnership.


The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings, evaluation professionals! Kirk Knestis, CEO of Hezel Associates, back this time as guest curator of an AEA365 week revisiting challenges associated with untangling purposes and methods between evaluation and research and development (R&D) of education innovations. While this question is being worked out in other substantive areas as well, we deal with it almost exclusively in the context of federally funded science, technology, engineering, and math (STEM) learning projects, particularly those supported by the National Science Foundation (NSF).

In the two years since I shared some initial thoughts in this forum on distinctions between “research” and “evaluation,” the NSF has updated many of its solicitations to specifically reference the then-new Common Guidelines for Education Research and Development. This is, as I understand it, part of a concerted effort to increase emphasis on research—generating findings useful beyond the interests of internal project stakeholders. In response, proposals have been written and reviewed, and some have been funded. We have worked with dozens of clients, refined practices with guidance from our institutional review board (IRB), and even engaged external evaluators ourselves when serving in the role of “research partner” for clients developing education innovations. (That was weird!) While we certainly don’t have all of the answers in the complex and changing context of grant-funded STEM education projects, we think we’ve learned a few things that might be helpful to evaluators working in this area.

Lesson Learned: This evolution is going to take time, particularly given the number of stakeholder groups involved in NSF-funded projects—program officers, researchers, proposing “principal investigators” who are not researchers by training, external evaluators, and, perhaps most importantly, the panelists who score proposals on an ad hoc basis. While the increased emphasis on research is a laudable goal—reflected in the NSF merit criterion of “Intellectual Merit”—these groups are far from consensus about terms, priorities, and appropriate study designs. On reflection, my personal enthusiasm and orthodoxy regarding the Guidelines put us far enough ahead of the implementation curve that we’ve often found ourselves struggling. The NSF education community is making progress toward higher-quality research, but the potential for confusion and proposal disappointment is still very real.

Hot Tip: Read the five blogs that follow. They delve into the nuances of what my colleagues are collectively learning about how we can improve our practices amid evolving operational distinctions between R&D and external program evaluation of STEM education innovations. This week’s posts explore what we *think* we’re learning across three popular NSF education programs, in the context of IRB review of our studies, and with respect to the importance of dissemination. I hope they are useful.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, Talbot Bielefeldt here!  I’m with Clearwater Program Evaluation, based in Eugene, Oregon. I have been doing educational program evaluation since 1995. My clients include all levels of education, from Kindergarten to graduate school, with an emphasis on STEM content and educational technology.

When I started out as an evaluator, I knew I was never going to do assessment. That was a different specialty, with its own steep learning curve. Furthermore, I worked with diverse clients in fields where I could not even understand the language, much less its meaning. I could only take results of measures that clients provided and plug them into my logic model. I was so young.

Today I accept that I have to deal with assessment, even though my original reservations still apply. Here is my advice to other reluctant testers.

Hot Tip: Get the program to tell you what matters. They may not know. The program may have been funded to implement a new learning technology because of the technology, not because of particular outcomes. Stay strong. Insist on the obvious questions (“Demonstrably improved outcomes? What outcomes? What demonstrations?”). Invoke the logic model if you have to (“Why would the input of a two-hour workshop lead to an outcome like changing practices that have been in place for 20 years?”). Most of all, make clear that what the program believes in is what matters.

Get the program to specify the evidence. I can easily convince a science teacher that my STEM problem-solving stops around the level of changing a light bulb. It is harder to get the instructor to articulate observable positive events that indicate advanced problem solving in students. Put the logic model away and ask the instructor to tell you a story about success. Once you have that story, earn your money by helping the program align their vision of success with political realities and the constraints of measurement.

Lesson Learned: Bite the intellectual bullet and learn the basics of item development and analysis. Or be prepared to hire consultants of your own. Or both. Programs get funded for doing new things. New things are unlikely to have off-the-shelf assessments and psychometric norms.
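
To make those basics a little more concrete, here is a minimal sketch of classical item analysis, assuming Python with NumPy and a small invented matrix of scored responses (not data from any actual program): item difficulty as the proportion correct, and discrimination as a rough corrected item-total correlation.

```python
# A minimal sketch of classical item analysis on invented 0/1 scored responses.
# Rows are examinees, columns are items; the data are illustrative only.
import numpy as np

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])

# Item difficulty: proportion answering each item correctly (higher = easier).
difficulty = responses.mean(axis=0)

# Item discrimination: correlation between each item and the total score on the
# remaining items (a rough corrected item-total correlation).
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"Item {j}: difficulty={p:.2f}, discrimination={r:.2f}")
```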

Lesson Learned: Finally, stay in touch with evaluation communities that are dealing with similar programs. If you are lucky, some other reluctant testers will have solved some of your problems for you. Keep in mind that the fair price of luck in this arena is to make contributions of your own.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



Hello from Andres Lazaro Lopez and Mari Kemis from the Research Institute for Studies in Education at Iowa State University. As STEM education becomes more of a national priority, state governments and education professionals are increasingly collaborating with nonprofits and businesses to implement statewide STEM initiatives. Supported by National Science Foundation funding, we have been tasked with conducting a process evaluation of the Iowa statewide STEM initiative, both to assess Iowa’s initiative and to create a logic model that can inform STEM evaluation in other states.

While social network analysis (SNA) has become commonly used to examine STEM challenges and strategies for advancement (particularly for women faculty, racial minorities, young girls, and STEM teacher turnover), to our knowledge we are the first to use SNA specifically to understand a statewide STEM initiative’s collaboration, growth, potential, and bias. Our evaluation focuses specifically on the state’s six regional STEM networks, their growth and density over the initiative’s years (2007–2015), and the professional affiliations of their collaborators. How we translated that into actionable decision points for key stakeholders is the focus of this blog.

Lessons Learned: With interest in both the boundaries of the statewide network and the ego networks of key STEM players, we decided to use both fixed (roster) and free recall approaches. Using data from an extensive document analysis, we identified 391 STEM professionals for our roster approach and asked respondents to indicate which people on this list they knew and worked with. Next, the free recall section allowed respondents to list the professionals they rely on most to accomplish their STEM work and their level of weekly communication, generating 483 additional names not identified through the roster approach. Together, the two strategies allowed us to measure potential and actual collaboration across both the known statewide network of STEM professionals (roster) and individuals’ local networks (free recall).


Lessons Learned: The data offered compelling information for both regional and statewide use. Centrality measurements helped identify regional players that had important network positions but were underutilized. Network diameter and clique score measurements informed the executive council of overall network health and specific areas that require initiative resources.
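
For readers newer to these measures, here is a minimal sketch of how such statistics might be computed, assuming Python with the networkx library and an invented toy edge list rather than the Iowa data:

```python
# A minimal sketch of the SNA measures described above, using networkx.
# The edge list is invented for illustration; it is not the Iowa data.
import networkx as nx

# Hypothetical ties among STEM professionals in one regional network.
edges = [
    ("Alvarez", "Brown"), ("Brown", "Chen"), ("Chen", "Davis"),
    ("Alvarez", "Chen"), ("Davis", "Evans"), ("Evans", "Brown"),
    ("Fisher", "Chen"),
]
G = nx.Graph()
G.add_edges_from(edges)

# Centrality flags well-connected or "bridging" players who may be underutilized.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Diameter and density summarize overall network health and cohesion.
diameter = nx.diameter(G)   # longest shortest path between any two members
density = nx.density(G)     # share of possible ties actually present

# Maximal cliques identify tightly knit clusters within the network.
cliques = list(nx.find_cliques(G))

print("Highest degree centrality:", max(degree, key=degree.get))
print("Highest betweenness:", max(betweenness, key=betweenness.get))
print(f"Diameter: {diameter}, Density: {density:.2f}, Cliques: {len(cliques)}")
```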

Lessons Learned: Most importantly, the SNA data allowed the initiative to see beyond the usual go-to stakeholders. With a variety of SNA measurements and our three variables, we have been successful in identifying a diverse list of stakeholders while offering suggestions for how to trim the networks’ size without creating single points of fracture. SNA has been an invaluable tool for formally classifying and evaluating the positions of key STEM players. We recommend that other STEM initiatives interested in using SNA begin identifying a roster of collaborators early in the development of their initiative.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

I am Anna Douglas, and I conduct evaluation and assessment research with Purdue University’s Institute for Precollege Engineering Research, also known as INSPIRE. This post is about finding and selecting assessments to use in the evaluation of engineering education programs.

Recent years have seen an increase in science, technology, engineering, and mathematics (STEM) education initiatives and emphasis on bringing engineering learning opportunities to students of all ages. However, in my experience, it can be difficult for evaluators to locate assessments related to learning or attitudes about engineering. When STEM assessment instruments are found, oftentimes they do not include anything specifically about engineering. Fortunately, there are some places devoted specifically to engineering education assessment and evaluation.

Rad Resource: INSPIRE has an Assessment Center website, which provides access to engineering education assessment instruments and makes the evidence for validity publicly available. In addition, INSPIRE has links to other assessment resources, such as Assessing Women and Men in Engineering, a program affiliated with Penn State University.

Rad Resource: ASSESS Engineering Education is a search engine for engineering education assessment instruments.

If you don’t find what you are looking for in the INSPIRE, AWE, or ASSESS databases, help may still be available.

Lesson Learned #1: If it is important enough to be measured for our project, someone has probably measured it (or something similar) before. Even though evaluators may not have access to engineering education or other educational journals, one place to search is Google Scholar, using keywords related to what you are looking for. This helps to 1) locate research being conducted in a similar area of engineering education (the researchers may have used some type of assessment) and 2) locate published instruments, which one would expect to have some degree of validity evidence.

Lesson Learned #2: People who develop surveys generally like others to use them. It’s a compliment. It is OK to contact the authors for permission to use the survey and the validity evidence collected, even if you cannot access the article. At INSPIRE, we are constantly involved in the assessment development process. When someone contacts us for use of an instrument, we view that as a “win-win”: the evaluator gets a tool, our instrument gets used, and with the sharing of data and/or results, we gain further information about how the instrument functions in different settings.

Lesson Learned #3: STEM evaluators are in this together. Another great way to locate assessment instruments is to post to the STEM TIG group on LinkedIn, or pose the question to the EvalTalk listserv. This goes back to Lesson Learned #1: most of the important outcomes are being measured by others.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Kim Kelly, PhD, from the Psychology Department at the University of Southern California, where I teach undergraduate courses in statistics, research methods, psychobiology, and human development. I have been involved in the evaluation of STEM (Science, Technology, Engineering, and Mathematics) curriculum and professional development programs since 2002. These courses and projects are focused on improved student learning and span informal science settings, elementary, secondary, and post-secondary levels. I have come to appreciate, as I’m sure many of you do, the enormous influence of national curriculum efforts such as the Common Core Standards and Next Generation Science Standards as well as policy efforts to streamline and consolidate the funded STEM education portfolio across federal funding agencies.

Rad Resources: I really recommend the National Research Council publication A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas to understand what is motivating the design of the Next Generation Science Standards.

Like the Common Core Standards for Mathematics and English Language Arts, the Next Generation Science Standards serve as a blueprint for states to follow in aligning their STEM education standards in the coming decade.

The National Science and Technology Council Committee on STEM Education was initially charged with inventorying Federal STEM education activities and developing a 5-year strategic Federal STEM education plan. In its most recent progress report, the Committee discusses activities focused on evaluation guidance and common metrics and evidence standards for inclusion in the strategic plan.

The report also states that “an evaluation interagency working group will be created to support agency efforts to develop and carry out evaluation strategies.” One such group has already formed among evaluators of three climate change education programs funded by the National Science Foundation, the National Aeronautics and Space Administration, and the National Oceanic and Atmospheric Administration. This tri-agency evaluation working group has formulated a common logic model for the collective portfolio of climate education projects and is currently seeking feedback from the AEA membership as well as program officers of the agencies in identifying next steps in evolving a common evaluation framework consistent with the emerging federal strategic plan. Contact Committee Chair Ann Martin at ann.m.martin@nasa.gov to learn more and get involved in this timely effort.

Hot Tip: The Potent Presentations Initiative (p2i) is an AEA-sponsored effort to help evaluators improve their presentation skills. As you get ready to prepare a presentation for Evaluation 2013, visit the p2i website for ideas.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Disa Cornish, PhD. I am the Program Evaluation Manager at the Center for Social & Behavioral Research (CSBR) at the University of Northern Iowa (UNI). I coordinate the Iowa STEM Monitoring Project. The purpose of the Monitoring Project is to systematically observe a series of defined metrics and sources to examine changes regarding STEM education and economic development in Iowa. In particular, I work alongside the Governor’s STEM Advisory Council, a group of stakeholders in STEM education and economic development from across the state. The STEM Monitoring Project includes four primary components:

1) The Iowa STEM Indicators System (ISIS), to track publicly available data related to K-16 STEM education and the STEM workforce pipeline;

2) A statewide survey of public attitudes toward STEM, to be conducted annually;

3) The statewide STEM student interest inventory added to the annual Iowa Assessments; and

4) Regional/Scale-Up Program process and outcomes data collection and analysis.

Lessons Learned: Collaboration is key. Evaluation of large-scale projects involves a lot of (rapidly) moving parts. When conducting evaluation of a statewide initiative, there are many strands to keep track of in terms of methods, sources of data, analysis, and dissemination strategies. The Iowa STEM Monitoring Project is a collaborative effort between partners at three different universities. We are all responsible for portions of the Monitoring Project and we are successful because of frequent, high-quality communication. In addition, I reached out to evaluators of other state STEM initiatives about their work. Having a network of supportive colleagues who were grappling with some of the same issues was very helpful.

Know the field. In order to know what indicators would be helpful in a statewide STEM monitoring project, we needed to know what national indicators were already being measured and tracked. What were other states doing?

Rad Resource: With the rapid evolution of STEM evaluation (and STEM education programming), it’s important to stay current. STEMConnector.org is a fantastic resource. Their tagline is “the one stop shop for STEM information” and it’s quite true. There are state-by-state guides to STEM initiatives and programs, and news from the world of STEM education.

Change the Equation is an organization that works with the business community to improve STEM education. Their site has a wealth of information related to STEM education, including state-specific data. I especially like their Design Principles for Effective STEM Philanthropy and their Design Principles Rubric.

Hot Tip:

At Evaluation 2013, a new type of session will be offered. Birds of a Feather Gatherings (aka idea exchanges or networking tables) are a chance for attendees to share ideas and learn from one another. There is no formal presentation, but there is a designated facilitator to get the conversation started.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Cindy Tananis, founder and Director of the Collaborative for Evaluation and Assessment Capacity (CEAC) at the University of Pittsburgh School of Education. CEAC has a long history of STEM education-related evaluation work in P-12 and higher education. Stop by and visit our website to learn more about CEAC.

STEM education reform focuses on LEARNING.  Some of the beginning logic models that theorize about this impact look deceptively simple:

Resources to change (teacher knowledge and skill, context for learning, curriculum, parental support, or any number of other intervening variables) lead to change in teaching, environmental support, learning materials, parental involvement, etc., and result in increased learning as measured by some form of achievement.

These “simple” logic models get very complex in the reality of schools and other learning organizations. There are MANY reasons for the complexity and plenty of literature about how complex relationships, processes, and structures of education really are!

Lessons Learned:

Partnerships

  • Are fluid, and depend on the ebb and flow of commitment and needs of the members.
  • Are unique people-driven relationships.
  • Committed partners = greater learning and greater change

Student Learning

  • Increased learning in math and science can be documented.
  • Attributing learning to specific interventions is challenging.
  • Engaged learning is accomplished through pedagogical change including participation plus buy-in.

Instructional/Institutional Change

  • Content understanding and knowledge are essential to pedagogical change, but alone are not enough.
  • Requires a willingness to take risks and be supportive of experimentation among professionals.
  • Collaboration across teachers and administrators is necessary to extend and sustain change.
  • Information must be accessible, relevant, meaningful, and applicable to the educational process.
  • Advocacy begins with individual change and grows across peer networks.
  • Dynamic continuous support systems create sustainable programs.

Educational Change

  • Existing P-16 structures and culture are resistant to sustainable change; therefore, continuous, focused effort is needed to create systemic change.
  • Institutional culture and individual beliefs and behaviors characterize a propensity for reform and are co-dependent for sustainable change.
  • Individuals and systems with the most need are the hardest to serve, often reflecting both a reform potpourri in schools and reform fatigue among educators.

Rad Resources: A more extended report explores these “lessons learned” in detail. We have also published some additional thinking about these issues at our website.

Hot Tip:

There will be many different types of sessions at Evaluation 2013. Presenters are not limited to choosing between an oral presentation or a poster presentation. There are panels, roundtables, think tanks, ignite presentations, paper/multipaper sessions, and demonstrations, to name a few. Find out more about the types of sessions and what might be best for your proposal at the AEA website.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Alyssa Na’im, Senior Research Associate at the Education Development Center, Inc., and I work on program evaluations of STEM (science, technology, engineering, and mathematics) education programs.  The goal of many STEM education programs is to diversify the STEM workforce by engaging populations who are traditionally underrepresented in these fields – racial/ethnic minorities, females, and individuals with disabilities. Just as there should be special attention given to the design and implementation of STEM learning activities to motivate and engage these targeted populations, the same level of care should be shown to the evaluations of such programs.

Lesson Learned: Being responsive to various cultures and cultural experiences requires the evaluator to understand the context of the program under consideration and use appropriate methods and tools in the evaluation.

There is well-established evidence that points to the value of using culturally and contextually responsive evaluation practices in STEM education programs. Designing and implementing an evaluation that is not sensitive to the culture and context of the STEM program will likely yield information that is limited in its value to the program staff and other stakeholders. For example, using an assessment that references dominant-culture ideals may alienate certain groups that participate in STEM education programs. Similarly, out-of-school-time STEM programs have a particular culture in which traditional assessments may be inappropriate because they are too formal or resemble the high-stakes testing setting of the school day. The quality of the evaluation is directly related to the evaluator’s depth of understanding of the nature of the program and its participants. Engaging participants with a commitment to understanding their individual and collective identities, as well as the environment in which the program operates, better informs all phases of the evaluation, from design and implementation to analysis, reporting, and use.


Hot Tip:

The Evaluation 2013 conference is open to evaluators from all over the world. International submissions are most welcome to share knowledge on practice, research, and theory. This year’s conference theme is Evaluation Practice in the Early 21st Century.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Veronica Smith, principal of data2insight, an evaluation and research firm specializing in STEM program evaluation. We love working with organizations to design, develop and improve curricula for STEM teaching and learning. Examples of curricula we have evaluated include the NWABR Science and Ethics of Humans in Research, TechStart, and BioQuest Global Health Curriculum.

Lesson Learned: When answering questions aimed at gathering evidence of student learning, it is essential to carefully design the evaluation to meet time and budget constraints, and to set stakeholder expectations regarding method limitations. Refer to the What Works Clearinghouse standards as an aid for communicating the differing levels of rigor of different evaluation designs.

Rad Resource: Understanding by Design is a framework for improving student achievement that emphasizes the teacher’s critical role as a designer of student learning. We have found the UbD text and professional development workbook to be assets in the design and development of standards-driven curricula. The UbD approach helps teachers clarify learning goals and devise revealing formative and summative assessments.


Hot Tip: Engaging STEM organizations early in conversations about evaluation, alongside program and/or grant proposal development, improves evaluation quality as well as stakeholder satisfaction and use. We offer evaluation plan development prior to submission of grant applications as a free business development service. We craft a letter of understanding indicating that if the grant is funded, our firm will be hired as the evaluator for that project. This upfront work saves time and money.

Lesson Learned: Whether a curriculum gets into the hands of teachers who can put it to work to improve STEM teaching and learning depends largely on where the digital version of the curriculum lives once it is published. Program leaders are wise to develop a 1-3 year strategy for sustaining access to and updating curriculum products past the end of grant funding in order to extend their work’s reach and impact. Organizations like the American Chemical Society track and monitor curricular resource use in order to increase and broaden the use of those resources.

Hot Tip: The Northwest Association for Biomedical Research (NWABR) recently conducted a teacher survey asking about the best places to post and/or present curricula. The National Science Teachers Association (NSTA) was one of the faves. NSTA’s website has a Freebies for Science Teachers page that might be a great location for your STEM curriculum.


Hot Tip: Just a few more days until the March 15 proposal submission deadline for Evaluation 2013. If you are having a hard time deciding between two TIGs for your submission, you can suggest that the TIGs co-sponsor your session. Choose one primary TIG and add a comment in the “other information” box suggesting that the second TIG may also be interested.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

