RTD TIG Week: Linking Scientific Discoveries to Societal Needs by Shannon Griswold, Alexandra Medina-Borja, and Kostas Triantis

This week is sponsored by our colleagues in the Research, Technology, and Development Evaluation (RTD) TIG. The contributions this week are evergreen posts contributed by RTD TIG members about topics so important, they’re worth a second read. -Liz DiLuzio, Lead Curator We are Shannon L. Griswold, Ph.D., a scientific research evaluator and member of AEA’s …

Best of AEA365: Approaching Document Review in a Systematic Way by Linda Cabral

Greetings AEA365 readers. I’m Linda Cabral from the University of Massachusetts Medical School’s Center for Health Policy and Research. Many evaluations that I’ve been a part of in my 15+ year career have required a review of existing program documents. This has involved a range of documents such as program descriptions, meeting minutes, proposals, and grantee reports. A document review can serve many purposes. Often, it provides the background necessary to formulate your primary data collection tools. Other times, document review can be your sole data collection method, when your evaluation only requires descriptive information such as the number and type of sites or a description of participants and program costs. Funders appreciate this data collection method because it does not pose a burden on program staff, as the data already exist. Regardless of the main purpose of your document review, I’ve found it helpful to approach this type of review in a systematic way.

Using Tracer Methodology to Understand the Lived Experience of Diverse Individuals by Michael Valenti and April Wall-Parker

Hello! We are Michael Valenti, PhD, and April Wall-Parker, MS, from Pressley Ridge, a nonprofit social services agency. Our organization is committed to providing equitable services for all and has made it a strategic priority to ensure that our programs are safe and nurturing spaces for every individual. As evaluators, we analyze performance data for our program leaders to ensure that we are having the right impact on our communities. A few years ago, we began disaggregating our performance indicators by race and gender so our leadership teams can see how successful our programs are for diverse people.

Lessons Learned in Planning a Student-Led Evaluation Conference: Insights from the EViE Conference Planning Committee by Gabriel Keney

Hello AEA365 community! I’m Gabriel Keney, a Ph.D. student in the UNC Greensboro Educational Research Methodology Department (UNCG ERM) program evaluation track. Today, I would like to share some lessons I learned while collaboratively working with fellow students and faculty to plan a student-led evaluation conference.

Making the Most of Conference Opportunities: Insights from Emerging Evaluators by Stacy Huff and Tyler Clark

Hello! We are Stacy Huff and Tyler Clark, doctoral students in UNC Greensboro’s Educational Research Methodology (ERM) Department program evaluation track. We are also emerging evaluators who have learned a lot about the challenges of navigating the conference-presenting process. Today, we want to share what we’ve learned over the years, from our experiences with submitting proposals, presenting at conferences, and planning our own graduate-student-led conference (EViE), and emphasize the importance of creating spaces for emerging evaluators.

Engaged and Empowering Evaluation: Leveraging the Expertise of Stakeholders within Non-Profit Evaluations by Tom Summerfelt

My name is Dr. Tom Summerfelt and I serve as the Chief Research Officer at Feeding America. This blog presents the participatory, engaged approach that Feeding America is currently implementing to respond to this obstacle: shifting its evaluation staff from direct service to empowering, coaching, and mentoring program staff to integrate evaluation with their programming. As background, Feeding America is a two-tiered, federated network with a National Organization serving 200 food banks that partner with 60,000 community agencies to serve our neighbors dealing with food insecurity. In 2021, the Network distributed over 6 billion meals and served over 53 million individuals.

Centering Equity in a University-Based Evaluation Center by Liz Litzler, Erin Carll, and Emily Knaphus-Soran

Hi! We are Liz Litzler, Erin Carll, and Emily Knaphus-Soran from the University of Washington Center for Evaluation & Research for STEM Equity. In honor of next week being University-Based Center TIG Week, today we would like to share a little about how we center equity in our work as a University-Based Center. As a center focused on conducting high-quality program evaluation and research to improve equity and broaden representation in Science, Technology, Engineering, and Mathematics fields, we feel like we are always thinking about equity: both in how we do our work and in how we operate the center.

Picking the Best Tool for the Job: Qualtrics vs. REDCap by Madeleine deBlois and Rachel Leih

Hello! We’re Madeleine deBlois and Rachel Leih, from the Community Research, Evaluation & Development (CRED) Team at the University of Arizona (UA). CRED is a fun & funky group of folks at UA who come from a variety of backgrounds and work on a broad range of projects for UA departments, UA’s Cooperative Extension, and community partners.