AEA365 | A Tip-a-Day by and for Evaluators

EEE TIG Week: Tom Archibald on Whose Extension Counts?

Hello, I’m Tom Archibald, Assistant Professor and Extension Specialist in the Department of Agricultural, Leadership, and Community Education at Virginia Tech.

Debates about what counts as credible evidence in program evaluation and applied social science research have been ongoing for at least 20 years.

Rad Resource: Those debates are well summarized in What Counts as Credible Evidence in Applied Research and Evaluation Practice?, a very helpful book edited by Stewart Donaldson, Tina Christie, and Mel Mark. In particular, the book balances viewpoints from proponents and detractors of the position that experimental approaches are the “gold standard,” the best route to credible evidence.

Even long before that, questions of how to generate valid knowledge of the world around us, and specifically the role of experimentation, animated the scientific and aristocratic classes alike. In Leviathan and the Air-Pump, Steven Shapin and Simon Schaffer examined the debate between Robert Boyle and Thomas Hobbes over Boyle’s air-pump experiments in the 1660s, exploring what counted as an acceptable method of knowledge production and the societal factors bound up with different knowledge systems.

The point of this post is this: seemingly esoteric methodological debates about credible evidence are in fact fundamentally important political questions, with real stakes for people’s lives. This point is summed up by Bill Trochim and Michael Scriven, who said, respectively:

“The gold standard debate is one of the most important controversies in contemporary evaluation and applied social sciences. It’s at the heart of how we go about trying to understand the world around us. It is integrally related to what we think science is and how it relates to practice. There is a lot at stake.” (W. Trochim, unpublished speech transcript, September 10, 2007)

“This issue is not a mere academic dispute, and should be treated as one involving the welfare of very many people, not just the egos of a few.” (Scriven, 2008, p. 24)

Hot Tip: Epistemological politics (the ways in which power and privilege position some ways of knowing as ‘better’ than, and hierarchically ‘above,’ other ways of knowing) are inextricably linked with ontological politics (whose reality counts, and how some reals are made to be more or less real, in practice, through various tacit or explicit power plays).

In the context of Cooperative Extension, and more specifically in the search for credible evidence about Extension, this nexus of epistemological and ontological politics raises the question: What is Extension?

For some (according to my research described here), it is a vehicle for dissemination of scientific information. For others, it is a site for grassroots knowledge sharing and deliberative democracy.

And, given that there appears to be (at least) a plurality of metanarratives about what Extension is, or (perhaps) an actual plurality of Extensions, a question follows (playing on Robert Chambers’ influential title, Whose Reality Counts?): Whose Extension counts?

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE AEA Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

·

I am Teresa McCoy, Assistant Director at the University of Maryland Extension (UME). I was hired 10 years ago with responsibility for evaluation and assessment across all program areas of agriculture, family and consumer sciences, environment and natural resources, and 4-H youth development. There was not a lot of evaluation knowledge and practice in the organization at that time, but there were high expectations for what I could achieve.

In a situation of n=1 (me), I had to get started on evaluation capacity building (ECB). The critical decision was how to go about that work. Fortunately for me, four circumstances came together to point the way. First, the publication of a special winter issue of New Directions for Evaluation (NDE), “Program Evaluation in a Complex Organizational System: Lessons from Cooperative Extension,” provided much guidance.

Rad Resources:

  • Program Evaluation in a Complex Organizational System: Lessons from Cooperative Extension, New Directions for Evaluation, No. 120 (Winter 2008)

Second, I am fortunate to be married to a software developer with deep expertise in interaction design and in agile and lean software development. Over the years, I peripherally absorbed some of that knowledge and listened to stories from his industry. I realized my strategy for building evaluation capacity in UME had to include one of the principles of agile management: “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done” (Agile Alliance, n.d.).

Rad Resources:

  • Read the Agile Manifesto Principles at https://www.agilealliance.org/agile101/
  • Want more in-depth information? Read Learning Agile: Understanding Scrum, XP, Lean, and Kanban by Andrew Stellman & Jennifer Greene

Third, soon after starting in the position, I began working with an agricultural extension educator who had a natural inclination for evaluation work. (In full disclosure, her area of expertise is agricultural economics.) A change in some of her position responsibilities gave us the opportunity to develop an official agreement under which she receives a small administrative stipend for taking on evaluation leadership in agriculture. With her guidance, many of our new Extension Educators now enter the job with enthusiasm for evaluation as part of their work.

Hot Tip:

  • The fourth and final circumstance, and the most critical, was that our Program Leaders/Assistant Directors supported me and the effort to build evaluation capacity. Without their support and leadership, building capacity would have been difficult, if not impossible.

About four years ago, UME added a Coordinator of Program Development and Evaluation (PD&E) to my evaluation department. Now, n=2, plus the rest of the capacity built within the program areas. I like to call my approach “Agile and Lean Evaluation Capacity Building” (AL-ECB).


·

My name is Chris Okafor and I am a Research for Development Coordinator at the International Institute of Tropical Agriculture (IITA) in Bukavu, Democratic Republic of the Congo (DRC). I would like to briefly share a tool we are using to assess farmers’ knowledge gain during training. I am currently coordinating an integrated crops/livestock development project in the DRC, a position which brings me into contact with smallholder farmers. In addition to my doctoral specialization in program evaluation, I have a range of professional experience in capacity building, participatory extension, and project management.

During my work with farmers, especially farmer training using the farmer field school (FFS) approach developed by the Food and Agriculture Organization (FAO), a tool called the Ballot Box Test was used to assess and discuss farmers’ knowledge of good agricultural practices before and after training and to determine areas needing priority attention. We adapted the tool to assess farmers’ ability to use a mobile phone to seek agricultural advisory services and market information from extension workers and clients.

In South Kivu Province of the DRC, we took 107 farmers (including 49 women), all beneficiaries of the Crops Livestock Integration Project (CLiP) invited for training in Information and Communication Technology (ICT), through the ballot process. The test consisted of 10 questions, each with three answer options (boxes). Each participant received a unique number written on 10 voting cards (one per question). The ballot papers could carry participants’ names, but this is discouraged to avoid turning the exercise into an examination; we have generally observed that adults tend to shy away from examinations. Participants went through the questions in an orderly manner. At the end of the “balloting,” the results were tabulated and discussed in a participatory, interactive manner that helped identify areas needing priority attention. Some participants were co-opted as facilitators because of their level of knowledge. The exercise was conducted again at the end of the training to determine knowledge gain.
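For readers who want to see what the tabulation step might look like, here is a minimal sketch in Python. The answer key, ballot records, and values are hypothetical illustrations, not the actual CLiP instrument: each ballot is recorded as (participant number, question, chosen box), and the share of correct answers is computed per question, before and after training.

    # Minimal sketch of tabulating Ballot Box Test results (hypothetical data).
    # Each ballot is recorded as (participant_number, question, chosen_box).
    from collections import defaultdict

    ANSWER_KEY = {1: "B", 2: "A"}  # correct box per question (illustrative)

    def tabulate(ballots):
        """Return the share of correct answers for each question."""
        correct, total = defaultdict(int), defaultdict(int)
        for _participant, question, box in ballots:
            total[question] += 1
            if box == ANSWER_KEY[question]:
                correct[question] += 1
        return {q: correct[q] / total[q] for q in sorted(total)}

    pre = tabulate([(1, 1, "B"), (2, 1, "A"), (1, 2, "A"), (2, 2, "C")])
    post = tabulate([(1, 1, "B"), (2, 1, "B"), (1, 2, "A"), (2, 2, "A")])
    for q in pre:
        print(f"Question {q}: {pre[q]:.0%} correct before, {post[q]:.0%} after")

Listing the per-question shares side by side, as in the last loop, is what supports the participatory discussion of which topics need priority attention.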

The Ballot Box method helped us document knowledge change. The pre-training exercise revealed that approximately 74% of farmers knew and understood basic features of mobile phones, including saving contacts, placing a call, and sending SMS. The post-training ballot exercise recorded an increase of 9 percentage points, to 83%. Before training, fewer than 10% of the farmers knew they could use mobile phones to seek agricultural advisory services, including veterinary services, or to pass on or receive market information; that percentage increased to 85% after training.

Hot Tip: Besides identifying gaps in farmers’ knowledge, the Ballot Box Test is a farmer-friendly tool that can stimulate farmers’ interest in learning, increase what they know, and encourage knowledge sharing.

Rad Resource: Here is another example of how the Ballot Box has been used by other groups: http://ffs.ipm-info.org/ballot-box-test/


·

My name is Chelsea Hetherington and I’m an Evaluation Specialist with Michigan State University Extension. One of the many benefits of Extension work is that we reach people in local communities all around the country while also having the connections and resources of a big university. Several MSU Extension youth programs specifically focus on equipping youth with the skills and knowledge they need to be successful in college. In some programs, like 4-H Exploration Days, youth come to MSU’s campus for several days and stay in the dorms, attending sessions on different topics of interest and experiencing college life. In others, like 4-H Great Lakes and Natural Resources Camp, youth spend a week learning about environmental science topics such as fisheries and wildlife, ecology, and forestry, while also exploring careers and connecting with experts in those fields.

Post-event evaluations show that these programs equip youth with important college readiness skills, like time management, independence, and collaboration, along with increased knowledge of career fields and college majors. Still, because these are pre-college programs, a primary goal is for participation to increase youth enrollment in college and subsequent degree attainment.

Rad Resource: The National Student Clearinghouse has been an incredible resource for tracking past pre-college program participants. For a small fee, we submit records of our past program participants to the National Student Clearinghouse annually, and it returns reports telling us how many program alums are enrolled in college, as well as which schools they attend. We also re-submit records six years after youth graduated from high school to learn whether they have earned a college degree in that time. These data have been instrumental in demonstrating the value of our programs: we can show that alumni of Michigan 4-H pre-college programs are more likely to enroll in college on time and earn college degrees.

Hot Tip: Many states publish statewide education data reporting residents’ rates of college enrollment and degree attainment, and these data can be used as a comparison for pre-college program alumni. In Michigan, these data are maintained by the Michigan Department of Education and published online at MISchoolData.org. The data can be parsed in a number of ways, which has allowed us to report comparisons by high school graduation year, county, school district, and more.
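As an illustration of that comparison step, here is a minimal sketch in Python using pandas. The column names, years, and rates are hypothetical placeholders, not the actual National Student Clearinghouse or MISchoolData layouts:

    # Hypothetical sketch: compare program alumni on-time college enrollment
    # to a published statewide benchmark, by high school graduation year.
    import pandas as pd

    alumni = pd.DataFrame({
        "hs_grad_year": [2015, 2015, 2016, 2016, 2016],
        "enrolled_on_time": [True, True, False, True, True],
    })
    statewide_rate = {2015: 0.62, 2016: 0.64}  # assumed published rates

    program_rate = alumni.groupby("hs_grad_year")["enrolled_on_time"].mean()
    for year, rate in program_rate.items():
        print(f"{year}: program {rate:.0%} vs. statewide {statewide_rate[year]:.0%}")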

Lesson Learned: Leveraging existing data sources can be a great way to present powerful data without needing to track down individual participants. Good record keeping is key in this process! Make sure you keep detailed, well-organized records of past program participants, along with important accompanying information (for us, this includes date of birth and county of residence).


·

Hello! My name is Mary Arnold, and I am a professor and 4-H Youth Development Specialist at Oregon State University. In my Extension specialist role, I get to work in that liminal space between research and practice. In this space I spend a great deal of time translating research into effective youth development practice in 4-H programs, which lately has focused on the importance of developing programs that provide a nourishing developmental context for young people. By developmental context I mean the settings, experiences, and activities that 4-H programs provide to help youth grow and thrive. This focus is important for two reasons: first, because high-quality programs lead to better outcomes, and second, because program quality is something that youth practitioners have a great deal of control over. If practitioners develop and implement high-quality programs, my job as an evaluator is much easier.

As the field of youth development took hold in the 1990s, there was an immediate corresponding interest in measuring program outcomes. Unfortunately, outcomes were often measured without consideration of what really happened in the program. Fortunately, research about youth development programs is now increasingly focused on what happens in programs and how that leads to outcomes. For an evaluator, improving programs and articulating the theory of action that connects program experiences to outcomes increases the rigor of evaluation findings. Program quality matters! And in the case of youth programs, the pieces that make up a developmental context are increasingly well understood.

Some of these pieces include:

  • Helping youth find and enhance their “spark”
  • Surrounding youth with developmental relationships that express care, challenge growth, and share power
  • Paying attention to youth program quality principles such as belonging, safety, mattering, efficacy, and skill development
  • Engaging youth beyond mere attendance in programs to provide an experience that has depth, breadth, and duration.

These ingredients are not unique to 4-H programs; any youth program focused on positive youth development can emphasize the same ingredients.

Hot Tip

Consider evaluating your program’s developmental context and using the results of the evaluation to guide continuous program improvement. High-quality programs lead to better outcomes for youth.


·

Greetings! We are Alda Norris, Evaluation Specialist at University of Alaska Fairbanks School of Natural Resources and Extension and Kendra Lewis, 4-H Evaluation Coordinator at University of California, Agriculture and Natural Resources. We are the Chair and Secretary, respectively, of the Extension Education Evaluation Topical Interest Group (EEE TIG). This TIG offers professional development for Extension professionals interested in and engaged in program evaluation. Our TIG is entering its 37th year!

This week, some of our members will be sharing their work with you, including tips on program quality from Mary Arnold; tracking college enrollment and degree attainment from Chelsea Hetherington; gathering farmer feedback from Chris Okafor; agile and lean evaluation capacity building from Teresa McCoy; and credible evidence from Tom Archibald.

If you’ve never visited a Cooperative Extension System (CES) office, know they exist across the U.S. and its territories. CES works to “extend” the research knowledge gained at land-grant universities to the public through fact sheets, workshops, consultations, maker spaces and beyond. Learn more from our federal partners at the National Institute of Food and Agriculture (NIFA). Our TIG also has members involved in the agricultural extension systems of other countries; you’ll read more about projects in the Democratic Republic of Congo in Chris’ post.

Lesson Learned: When you have been in an organization for a while, it’s easy to forget not everyone knows, or understands, its mission. CES is often called a “best kept secret” of community resources. Practice your elevator speech!

Hot Tip: Consider joining a TIG for networking. Who does similar or complementary work? Consult a list of members whose work has been highlighted in blogs or awards. Read about some of the past EEE TIG award winners and seek those folks out for a conversation at the annual conference and business meeting.

Hot Tip: Search for AEA affiliates and TIG pages on social media. Veteran evaluators often post helpful resources like evaluator Twitter handle lists. Make yourself a calendar invite to start spending time each week reading posts on discussion lists like EVALTALK.

Hot Tip: Look for other clearinghouses for sharing information and professional development in your discipline. Evaluators can browse the AEA library for past conference presentations, Coffee Break presentations, and other resources. In Extension, a great resource we have is eXtension.org.

Rad Resource: A great way to learn about the breadth of topics and current trends in any arena, including Extension, is to look at what folks are publishing about their work. Relevant journals for outreach include the Journal of Extension and the Journal of Human Sciences and Extension.

Rad Resource: Think beyond print. Listen to podcast interviews with authors from the recently published We’ve Tried That Before: 500 Years of Extension Wisdom.


 

·

My name is Cheryl Peters and I am the Evaluation Specialist for Michigan State University Extension, working across all program areas.

Measuring the collective impact of agricultural programs in a state with diverse commodities is challenging. Many states have an abundance of natural resources, such as fresh water, minerals, and woodlands. Air, water, and soil quality must be sustained while fruit, vegetable, crop, livestock, and ornamental industries remain efficient in yields, quality, and input costs.

Extension’s outreach and educational programs operate on different scales in each state: individual efforts, issue-focused work teams, and work groups based on commodity types. Program evaluation efforts contribute to statewide assessment reports demonstrating the value of Extension agricultural programs, including their public value. Having different program scales allows applied researchers to align to the same outcome indicators as program staff.

Hot Tip: Just as Extension education has multiple pieces (e.g., visits, meetings, factsheets, articles, demonstrations), program evaluation has multiple pieces (e.g., individual program evaluation about participant adoption practices, changes in a benchmark documented from a secondary source, and impact assessment from modeling or extrapolating estimates based on data collected from clientele).

Hot Tip: All programs should generate evaluation data related to identified, standardized outcomes. What differs in the evaluation of agriculture programs is the evaluation design, including the sample and the calculation of values. Impact reports may be directed at commodity groups, legislatures, farming groups, and constituents. State Extension agriculture outcomes can use the USDA impact metrics. Additionally, 2014 federal requirements for competitive funds now state that projects must demonstrate impact within the project period. Writing meaningful outcome and impact statements continues to be a focus of the USDA National Institute of Food and Agriculture (NIFA).

Hot Tip: Standardizing indicators into measurable units has made aggregation of statewide outcomes possible. Examples include pounds or tons of an agricultural commodity, dollars, acres, number of farms, and number of animal units. Units are then reported by the practice adopted. Dollar values estimated by growers/farmers are extrapolated from research values or secondary data sources.
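To make that arithmetic concrete, here is a minimal sketch of one such calculation in Python. Every name and figure is a hypothetical placeholder, not an actual MSU Extension value or formula:

    # Hypothetical sketch: convert standardized indicator units (acres where a
    # practice was adopted) into an estimated dollar impact using a per-acre
    # value drawn from research or a secondary data source.
    respondents_adopting = 42      # survey respondents reporting adoption
    acres_per_respondent = 110     # average acres per adopting respondent
    value_per_acre = 18.50         # research-based estimate, $/acre (assumed)
    response_rate = 0.35           # share of participants who answered the survey

    acres_adopted = respondents_adopting * acres_per_respondent
    documented_impact = acres_adopted * value_per_acre
    # Extrapolating to non-respondents assumes they adopted at the same rate;
    # reporting only the documented figure is the more conservative choice.
    extrapolated_impact = documented_impact / response_rate

    print(f"Documented impact: ${documented_impact:,.0f}")
    print(f"Extrapolated statewide estimate: ${extrapolated_impact:,.0f}")

Keeping such formulas in shared spreadsheets, as described below, is what makes the statewide aggregation repeatable from year to year.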

Hot Tip: Peer learning, using panels to demonstrate scales and types of evaluation with examples, has been very successful. There are common issues and evaluation decisions across programming areas. Setting up formulas and spreadsheets for future data collection and sharing extrapolation values has helped keep program evaluation efforts going. Surveying similar audiences with both outcome questions and program needs assessments has also been valuable.

Rad Resource: NIFA provides answers to frequently asked questions, such as when to use program logic models, how to report outcomes, and how logic models are part of evaluability assessments.


·

Hi! This is Laura Downey with Mississippi State University Extension Service. In my job as an evaluation specialist, I commonly receive requests to help colleagues develop a program logic model. I am always thankful when I receive such a request early in the program development process. So, I was delighted a few weeks ago when academic and community colleagues asked me to facilitate the development of a logic model for a grant proposing to use a community-based participatory research (CBPR) approach to evaluate a statewide health policy. For those of you who are not familiar with CBPR, it is a collaborative research approach designed to ensure participation by communities throughout the research process.

As I began to assemble resources to inform this group’s CBPR logic model, I discovered a Conceptual Logic Model for CBPR available on the University of New Mexico’s School of Medicine, Center for Participatory Research, website.


[Image: Conceptual Logic Model for CBPR, clipped from http://fcm.unm.edu/cpr/cbpr_model.html]

Rad Resource:

What looked at first glance like a simple conceptual logic model was actually a web-based tool complete with metrics and measures (instruments) to assess CBPR processes and outcomes. Over 50 instruments related to the most common concepts in CBPR, such as organizational capacity, group relational dynamics, empowerment, and community capacity, are profiled and available through this tool. Each profile includes the instrument name, a link to the original source, the number of items in the instrument, the concept(s) originally assessed, reliability, validity, and the population with which the instrument was created.

With great ease, I was able to download surveys to measure the CBPR concepts in the logic model that were relevant to the group I was assisting. Given the policy focus of that specific project, I explored the measures related to policy impact.

Hot Tip:

Even if you do not typically take a CBPR approach to program development, implementation, and/or evaluation, the CBPR Conceptual Logic Model website might have a resource relevant to your current or future evaluation work.


·

Hi, I’m Siri Scott, and I work as a Graduate Assistant with the University of Minnesota Extension.

Conducting interviews with youth is one way to gather information for program improvement, especially when you want to bring youth voice into your improvement process. While more time-intensive than a survey, interviews will provide you with rich contextual information. Below is a brief overview of the planning process, as well as tips for conducting interviews.

Hot Tips: Planning Phase

One main decision is whether you will need IRB approval to conduct the interviews. Even when interviews are done for program improvement purposes, it is a good idea to comply with IRB regulations for data practices and the protection of youth. Other major decision points in the planning process include how many individuals you will interview, how you will choose your participants, and how you will collect and analyze your data. In addition, you must decide what type of interview you want to conduct and its purpose, and then create an interview protocol (if appropriate).

Hot Tips: Conducting interviews

Here are some tips for conducting interviews:

  • Practice: Test the use of the protocol with a colleague (or a young person who you know well) and ask for feedback about the questions themselves and how they fit together.
  • Space: Find a quiet, secluded space for the interview in a public setting (or in a private home if the young person’s guardian can be present). You don’t want other people overhearing your conversation or your participant being distracted by anything.
  • Warm up: Start the interview with some informal chitchat. This will build your rapport with the participant and ease the participant’s (and your) nerves.
  • Probe: If you are not doing a structured interview, make sure to ask participants to clarify or elaborate on their responses. This will provide you with much better data.
  • Notes: If you are using an audio recorder, don’t trust it (fully). Jot down some quick notes during the interview. You can elaborate on these later if the audio recorder malfunctions.
  • Relax! If you mess up, that’s okay. Also, if you’re nervous and tense, the participant will sense that. Do whatever you can to put the participant (and yourself) at ease.
  • Learn More: A good resource for learning how to conduct interviews is this newly released, comprehensive overview of the interviewing process: InterViews: Learning the Craft of Qualitative Research Interviewing (3rd Edition) by Brinkmann and Kvale (2014).


·

Howdy! I am Kevin Andrews, a program specialist at Texas A&M AgriLife Extension Service. In addition to my Extension duties, I co-teach a graduate evaluation course at Texas A&M University.

I came across a post from March about students partnering with community agencies to apply their evaluation skills. I’d like to build upon Dr. Brun’s idea for evaluators who have ties to a university, especially those in Extension.

Many of our students have no idea what Extension (or any other agency) is. An engaged university seeks to tie together the scholarships of teaching, research, and service, and hands-on evaluations are a perfect way to accomplish this.

Lessons Learned: When students partner with us on evaluations, they not only receive practical experience and make an impact, they also get to learn who we are. This can aid in recruiting talented students to work for the agency; we’ve had several ask about careers in Extension.

Hot Tip: Students are going to ask a lot of questions. We can get pretty set in our ways and think we know our agency well. When you have to pause to explain in basic terms why we do what we do, you are forced to reflect on exactly why you have been doing things a certain way all these years!

Hot Tip: Our employees just want their voices heard. With students conducting interviews, we get far more coverage than a single evaluator using a sample, and employees feel their opinions matter. Our staff are also much more likely to be open with a student than with a peer.

Lessons Learned: I like to be in total control of my projects, but part of delegating work is letting others do their own thing. By developing goals together early in the project, I can ensure the outcome is as intended while allowing students to experiment and develop their own processes.

Hot Tip: Often, when a class is over, the student-teacher relationship ends. Keep contact information and follow up with students a year later to let them know the impact of their work. No matter where life takes them, they are your stakeholders and you want them to hold you in high esteem.

Lessons Learned: I’m lucky to get to straddle teaching and Extension. For those who don’t, simply reach out and ask! I’ve been approached by others with projects for students, and I’ve approached others with projects of my own. Everyone has something they need done!

Two years ago, I was the student participating in a class evaluation. Three people from my class, including me, now work for Extension, and our report generated $200,000 in funding. The model works!

