AEA365 | A Tip-a-Day by and for Evaluators

Category: Nonprofits and Foundations Evaluation

Hello, everyone!  I’m Claire Sterling, Program Co-Chair of the AEA Nonprofit & Foundations TIG and Director, Grant Strategies at the American Society for the Prevention of Cruelty to Animals (ASPCA), the oldest American animal welfare organization.  For nearly 150 years, the ASPCA has been saving and improving the lives of companion animals through a wide variety of approaches, with grants being officially added to our toolkit in 2008.  Although the ASPCA has a long history in New York City, its impact is also national, leveraged in part by grants to animal shelters, rescue groups, humane law enforcement agencies, sanctuaries, spay/neuter providers, and other organizations all across the country.  Last year alone, the ASPCA gave close to $17.5 million.

Hot Tip: One of the many perks of my job (apart from having foster cats as co-workers) is having the opportunity to see things from the perspective of both a nonprofit and a foundation since, as a grantmaking public charity, the ASPCA is a hybrid of both. But even if you work at an organization that is purely one or the other, this week’s AEA365 posts offer a glimpse of both perspectives as well. On behalf of the Nonprofit & Foundations TIG’s leadership, many thanks to this week’s contributors for their pearls of wisdom!

Lesson Learned: As evaluators at nonprofits and foundations, we often find ourselves at the crossroads where the biggest challenges in direct-service and philanthropic work converge in overwhelming ways: urgent community needs that must be addressed with limited resources, mandates to operate with incomplete information, rapid shifts in priorities, and disconnects between theory and practice. But as this week’s posts so succinctly demonstrate, where there’s a will, there’s always a way forward.

Rad Resource: We hope these posts inspire conversations with your TIG peers at Evaluation 2014, October 15-18 in Denver. Session information for our TIG’s track is now live. There’s simply no substitute for face-to-face connection!

Rad Resource: And speaking of good opportunities for connection, while you’re at Evaluation 2014, we hope you’ll attend the Nonprofit & Foundations TIG’s business meeting on Thursday, October 16 from 3:00-4:30pm, which will include a panel discussion of the second edition of Empowerment Evaluation by David M. Fetterman, Shakeh J. Kaftarian, and Abraham Wandersman. The book presents assessments by notable evaluators from academia, government, nonprofits, and foundations on how empowerment evaluation has evolved since the previous edition’s publication in 1996.

See you in Denver!

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hello from Patrick Germain! I am an internal evaluator, professor, blog writer, and the President of the New York Consortium of Evaluators. Working as a nonprofit internal evaluator teaches you a few things about evaluating with very few resources. Even as our sector gets better at using validated evidence for accountability and learning, the resources to support evaluative activities remain elusive. I have written elsewhere about how nonprofits should be honest with funders about the true costs of meeting their evaluation requirements, but here I want to share some tips and resources for evaluators who are trying to meet evaluation expectations that exceed the funding they receive.

Hot Tip #1: Don’t reinvent the wheel.

  1. Use existing data collection tools: ask your funder for tools that they might use or check out sites like PerformWell, OERL, The Urban Institute, or others that compile existing measurement instruments.
  2. The internet is your friend. Websites like SurveyMonkey, D3.js (for fancy data viz), Chandoo.org (for Excel tips), and countless others have valuable tools and information that evaluators might find useful, and places like Twitter or AEA365 help you stay on top of emerging resources and ideas (see the sketch after this list for one low-cost example).
  3. Modify existing forms or processes to collect data; this can be much more efficient than creating entirely new data collection processes.
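
For instance, here is a minimal sketch of my own (not from any particular toolkit; the file name and question wording are hypothetical placeholders) showing how a few lines of free, standard-library Python can stand in for a paid analysis tool when all you need is a quick tally of responses exported from a survey site:

    # Minimal sketch: tally answers to one question from a survey tool's CSV
    # export using only Python's standard library. The file name and question
    # header below are hypothetical placeholders.
    import csv
    from collections import Counter

    def tally_responses(path, question):
        """Count how often each answer to `question` appears in the export."""
        with open(path, newline="", encoding="utf-8") as f:
            return Counter(
                row[question] for row in csv.DictReader(f) if row.get(question)
            )

    if __name__ == "__main__":
        counts = tally_responses("survey_export.csv", "Overall, how satisfied were you?")
        total = sum(counts.values()) or 1  # avoid dividing by zero on an empty file
        for answer, n in counts.most_common():
            print(f"{answer}: {n} ({n / total:.0%})")

The same idea extends to simple cross-tabs or charts with other free tools, so scarce evaluation dollars can go toward collecting better data rather than buying software.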

Hot Tip #2: Use cheap or free labor.

  1. Look into colleges and universities to find student interns, classes that need team projects, or professors looking for research partners.
  2. Programs like ReServe and your local RSVP group place older adults who are looking to apply their professional skills to part-time or volunteer opportunities.
  3. Crowdsourcing or outsourcing through websites like Skillsforchange, HelpFromHome, or Mechanical Turk can be a cheap way of accomplishing some of the more mundane and time-consuming aspects of your projects.
  4. Organize or join a local hackathon, or find data analysts willing to volunteer their time.

Hot Tip #3: Maximize the value of your efforts.

  1. Use resources allocated for evaluation as an opportunity to build the evaluation capacity of your organization – leverage your investment to help the organization improve its ability to conduct, participate in, and use evaluations.
  2. Focus your efforts on what is needed, be deliberate about eliminating as much unnecessary work as you can, and be very efficient with your time.

What other tools or resources do you use when you have limited resources?


 


My name is Trina Willard and I am the Principal of Knowledge Advisory Group, a small consulting firm that provides research and evaluation services to nonprofits, government agencies and small businesses. I’ve worked with a variety of nonprofit organizations over the years, many of which have limited staff and financial resources.

Such organizations sometimes have the opportunity to secure a small grant from a funder, awarded with good intentions to “nudge” their evaluation capacity in the right direction. These dollars may be adequate to create a measurement strategy or evaluation plan, but support is rarely provided for implementation. Consequently, many recipients leave these efforts feeling that they’ve accomplished little. So how do we effectively guide these organizations without leaving them in the frustrating position of being unable to take next steps? These three strategies have worked well for me in my consulting practice.

Hot Tip #1: Discuss implementation capacity at the outset of measurement planning. Get leadership engaged and put the organization on notice early that the evaluation plan won’t implement itself. Help them identify an internal evaluation champion who will drive the process, provide oversight, and monitor progress.

Hot Tip #2: Leave behind a process guide. Provide clear written guidance on how the organization should move forward with data collection. The guide should answer these questions, at a minimum:

  • Who is responsible for collecting the data?
  • What are the timelines for data collection?
  • How and where will the data be stored?
  • What does accountability for data collection look like?

Hot Tip #3: Create an analysis plan. Great data is useless if it sits in a drawer or languishes in a computer file, unanalyzed. Spend a few hours coaching your client on the key considerations for analysis, including assigning responsibilities, recommended procedures, and where to find no/low-cost analysis resources.

Below are a few of our favorite go-to resources for small nonprofits that need support implementing evaluation strategies.

Rad Resources: Creating and Implementing a Data Collection Plan by Strengthening Nonprofits. Try this if you need a quick overview to share with staff.

Analyzing Outcome Information by The Urban Institute. This resource, referenced in the above-noted overview, digs into more details. Share it with the organization’s evaluation champion as a starting point to build analysis capacity.

Building Evaluation Capacity by Hallie Preskill and Darlene Russ-Eft. I’ve recommended this book before for nonprofits and it bears repeating. The tools, templates and exercises in the Collecting Evaluation Data and Analyzing Evaluation Data sections are particularly valuable for those that need implementation support.

What tips and resources do you use to prepare small nonprofits for implementing measurement strategies with limited resources?



Hi, I’m Kamilah Henderson, Evaluation Fellow at the Skillman Foundation in Detroit. I work with Foundation staff and partners to create learning opportunities that inform the work of improving conditions for Detroit kids.

Skillman provided a social innovation grant to the Detroit Bus Company to develop the Youth Transit Alliance (YTA), creating a long-term transportation solution for youth in Southwest Detroit. YTA’s work has required nimbleness and creative agility to respond to shifts in the volatile ecosystem in which the project is embedded. As an internal evaluator, I used rapid learning to complement the spirit and energy of the YTA’s work to 1) highlight and track tangible changes in program strategy, 2) develop a rigorous data collection system, and 3) surface solutions in a way that fosters continued mutual responsiveness and collaboration.

Lesson Learned:

Social innovators work fast to solve seemingly intractable problems. Rapid learning allows foundations to match the pace of social innovators, who need data to inform their swift responses to systems-level changes.

Hot Tip #1: Demonstrate Values of Collaboration through Action. Developing evaluation relationships early in project planning ensures that rapid learning addresses the concerns of the grantee and Foundation. Starting with this value has made for stronger learning questions. As implementers of the work, YTA learned from the rapid learning cycles about moving key levers in systems change for kids, and Skillman’s Social Innovation team learned about providing technical assistance resources for core grantees.

Hot Tip #2: Use Tried and True Tools. Beverly Parsons developed a framework to assess program development as it moves toward sustainability and scaling. The framework helped me identify strategy changes the YTA employed during their pilot year. Parsons’ tool was especially valuable because the project lacked a logic model, as is sometimes the case with social innovation projects compared with traditional nonprofit programs.

Hot Tip #3: Faster is Better. Instead of year-end reports, YTA has appreciated getting the results of data analyses within months so that they could more quickly shift the direction of their work toward better outcomes for kids. Skillman has valued learning as the work progresses rather than after a grant cycle has ended. Melanie Hwalek’s memo format is a helpful tool for presenting critical analyses without the long wait.

Rad Resource: Evaluating Social Innovation, by Preskill and Beer.

Rad Resource: The Real-Time Evaluation Memo, by Melanie Hwalek.

Rad Resource: Developing a Framework for Systems-Oriented Evaluation, by Beverly Parsons.

Get Involved: I would love to hear from others who are doing similar work. I will be presenting with a panel of colleagues at the AEA Conference. Please join Marie Colombo, Sara Plachta Elliott, Nancy Latham and me at Learning about Rapid Learning: Identifying Approaches that Increase Evaluation Use in System-Building.


 


I am Patti Patrizi, an evaluation consultant working primarily with foundations, helping them develop evaluation and learning systems. After working at The Pew Charitable Trusts, I founded The Evaluation Roundtable. My tip is an approach I used to help a large foundation develop a learning system that fosters internal learning about its strategies, as an antidote to years of producing reports about results and outcomes.

Hot Tips:

  • Assessing the current reporting system: We used a modified “post action review” (http://www.fireleadership.gov/documents/Learning_AAR.pdf) with a 16-person representative staff group, asking them to describe their experience with the current system (this included asking about audience, process, questions, actual use and by whom, gaps, and positives) and to describe their hopes. The process took two meetings of 1.5 hours each.
  • Providing quick feedback: We quickly compiled their comments onto a single Excel sheet and sent it back to them for review.
  • Plotting out the new system: Using this information, we generated a rough outline of the major elements of a new reporting system, which they reviewed in one group meeting and then via email. We then selected four members of the larger group to help detail the mechanics, rules, and flows of the new system.
  • The core of the process: The system builds an exchange between officers and their directors on each strategy. The exchange is teed up by responses to a set of questions developed to stimulate thinking and discussion. Each officer writes a note; their director reads it, convenes the group of officers working on the strategy, and then writes his or her own note. Each note represents that person’s own perspective; there are no “corrections” in the process. The group then meets with their program VP to discuss implications.
  • Developing good learning questions: The old system focused on listing accomplishments. The new system centers on questions that challenge officers to think critically about the strategy and about why something happened or not. Using data of some kind (qualitative or quantitative) is a requirement. As an example:

“Are you finding that you need to rethink the assumptions behind your theory of change, including:

  • Time needed to achieve outcomes envisioned
  • The extent of partnership and interest delivered by key stakeholders
  • Availability or nature of resources needed to make a difference
  • Levels of interest from external stakeholders—such as policy makers, NGOs etc.
  • Unanticipated changes in policy
  • The level of capacity that exists within the relevant field(s) to carry out the work, or as it relates to the key approaches
  • Other assumptions that have not materialized as you hoped?”

Last thought: This process will be only as good as the thinking it produces in the organization.



Hi, I’m Gretchen Shanks with the Bill & Melinda Gates Foundation’s Strategy, Measurement and Evaluation (SM&E) team. Our team works to ensure the foundation’s leadership, program teams, and partners have the necessary capacity, tools, and support to measure progress, make decisions, and learn what works best to achieve our goals.

Before joining the foundation, I supported teams at nonprofits that were eager to evaluate their projects in the field; however, financial resources were inevitably scarce. Now that I work for a grantmaker that prioritizes generating evidence and lessons about what works, what doesn’t, and why, I think about resourcing measurement and evaluation a bit differently.

In particular, I think less about whether we have enough financial resources in our budget for M&E and more about whether we have “enough” technical resources for measurement available (both to our internal teams and to our partners), or “enough” appropriately targeted and utilized evaluations. Some of the questions I ask about grantee evaluation include:

  • Are we investing sufficient resources, both time and technical support, in our work with partners to articulate the logical framework of measurable results for a project?
  • Have we adequately planned for measurement of those results and any possible evaluation(s)?
  • Do we know if we really need an evaluation, and if so, towards what end?
  • Does the design of the evaluation appropriately match the purpose and audience?
  • Do we know how (and by whom) the evaluation results will be used?

Planning results and measurement up front, supporting M&E implementation, and facilitating the use of data and lessons learned from evaluation all require resourcing: some financial, some technical, and (perhaps most importantly) temporal, since the time needed from the relevant stakeholders, internal and external, is critical. As you likely know well from your own work, there are no magic solutions to these challenges. Here at the foundation we’re working on getting smarter about how to use scarce resources to support actionable measurement and evaluation.

Hot Tips: Here are a few examples of ways we’re tackling these challenges:

  • Check out this blog post by SM&E’s Director, Jodi Nelson. She introduces the foundation’s evaluation policy, which aims to “help staff and partners align on expectations and focus scarce evaluation resources where they are most likely to produce actionable evidence.”
  • Read this PowerPoint deck, which describes the foundation’s approach to designing grants with a focus on measurable results.
  • Listen to an Inside the Gates podcast to hear from NPR’s Kinsey Wilson and Dan Green, BMGF’s deputy director of strategic partnerships, as they discuss measurement in the field of media communications and some of the related challenges. (The segment runs from 8:55 to 15:38.)



Hello! I am Laura Beals and I am an internal evaluator at Jewish Family and Children’s Service (JF&CS). JF&CS is a large nonprofit social service agency located just outside of Boston, MA. As we are surrounded by 58 institutions of higher learning, we have many opportunities to leverage students and faculty in order to increase our evaluation capacity.

At Evaluation 2013, I presented with Jennifer Lowe, Director of Research at Crittenton Women’s Union, about how our organizations handle collaborations with students and faculty.


In this post, I am focusing on the process we have at JF&CS for evaluating (pun intended!) requests that come into the agency from students and faculty.

Lessons Learned:

  • Requests for research can come in many forms; examples of requests include:
    • Students or faculty requesting staff assistance to distribute flyers or notify agency clients of the availability of a research project.
    • Students requesting access to agency staff or clients to observe a program session, distribute a survey, or take part in an interview for a class project.
    • Students or researchers asking for access to agency administrative or managerial staff to study nonprofit management practices.
    • Academic researchers wishing to gain access to client data or to current clients for scholarly research governed by an Institutional Review Board (IRB).
    • Practitioners or researchers wishing to gain access to clients to validate a clinical evaluative tool.
  • Nonprofit collaborations with students or faculty can have many benefits; for example:
    • Capacity: Tackle a project that might have been on the back burner
    • Knowledge: Learn more about the nonprofit and the field in which it operates
    • Inspiration: Be inspired by the methods or tools used
    • Reputation: Improve the nonprofit’s reputation through collaborations with well-known researchers or universities
  • And they can also have many risks; for example:
    • Ethics: Clients may not be treated in an ethical manner
    • Privacy: Client data privacy may be breached (e.g., HIPAA)
    • Resources: Staff or facilities may be overburdened
    • Reputation: The nonprofit’s reputation may be damaged through bad collaborations

Hot Tips:

  • Research requests from students or faculty can be a great way to increase nonprofit evaluation capacity, but there should be a process in place for reviewing and approving them to ensure that the benefits outweigh the risks.
  • At JF&CS, students or faculty with a research request must complete an application; all requests for research are then reviewed by our internal evaluation and research department. Depending on the level of risk, varying levels of senior leadership need to approve.

Rad Resource: The handout from this session, which describes this process in more detail, is available here in the AEA Public eLibrary.



Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012, we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)(3) nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.


4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.


6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples here by Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!



Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. I’m Will Fenn from Innovation Network, a Washington, D.C.-based monitoring and evaluation consulting firm specializing in advocacy and policy change evaluation.

There is an all too common situation that arises around evaluation — I’ll call it the evaluation merry-go-round. I saw this situation many times as a foundation program officer. Now that I work in a role fully focused on evaluation, my goal for the “State of Evaluation Practice” is to help organizations avoid the merry-go-round and promote evaluation that embraces data-based decisions and learning.

Lesson Learned—Let me explain how the evaluation merry-go-round often starts: A funder recognizes the importance of evaluation at a board meeting and approaches its grantees requesting data for an evaluation in the coming year. If the grantee has good data, evaluation moves along happily for both parties. But resources are often tight for grantees, and they may not have been able to capture good-quality data even if they are doing great work. The grantee offers what they have; the funder may then question the data, accept the incomplete data, or perform its own data collection, with the evaluation conclusions sent to the board.

The process is often uncomfortable for both sides and too often leaves grantees in no better position to improve operations through data-informed decision-making. In other words, funder and grantee go up and down through the evaluation process, but the ride ends with the organization in the same place.

Hot Tip: My experience is that the same scenario can play out successfully when funders and grantees cooperate, plan, and invest from the earliest stage to build capacity before the evaluation. A high level of engagement and planning from both parties is essential, and additional resources in terms of funding and expertise are highly recommended. Remember, data is not king; it only helps one ask the right questions. There is no substitute for investing time to understand the context around the data and to know which data to consider.

Rad Resources:

The Stanford Social Innovation Review article, Counting Aloud Together, shows an example of how to build evaluation capacity.

The “Collaborative Evaluation” section in the Learning from Silicon Valley article also offers great tips from the Omidyar Network’s experience on collaborative evaluation.

Also check out Innovation Network’s guide to evaluation capacity building.

Hot Tip—Insider’s advice for Evaluation 2013 in DC: For a quiet place to reflect on the day’s events, visit the Saloon at 1205 U Street, NW. The bar’s mission is to promote conversation, so it is free of TV screens and offers large communal tables upstairs.

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


Greetings! I’m Ann Emery from Innovation Network in Washington, DC.

Rad Resource – Emery Evaluation: Like many evaluators, I wear several hats – full-time evaluation consultant, part-time research methods graduate student, and 24/7 data nerd. My blog weaves these roles together:

  • I blog about my adventures as a nonprofit and foundations evaluator.
  • I share data files and preliminary results from research projects, like this evaluation use survey.
  • I’ve collected guest posts from more than 15 colleagues.
  • I’ve created more than 30 video tutorials in my Excel for Evaluators series.

Hot Tips – favorite posts: My most popular posts share resources, start conversations, and tell stories.

Lessons Learned – what I’ve learned: Want to write a great blog post? The best posts are short and sweet (not a full manifesto); contain photos, graphs, links, or embedded videos; and end with discussion questions so readers can respond with their own ideas.

Lessons Learned – why I blog: My reasons have evolved over time. I was initially inspired by Chris Lysy’s Ignite presentation about why evaluators should blog and the 2011 Bloggers Series on aea365. And, I simply needed more space – I couldn’t fully express myself in 140-character tweets @AnnKEmery any longer! Now, I blog to educate other evaluators (through my tutorials) and to educate myself (by collecting guest posts from different viewpoints).

Lessons Learned – why you should blog: Blogging makes you a better communicator (and, therefore, a better evaluator). I’ve also talked to evaluators whose blogs have led to invitations to write magazine articles, join committees, participate in podcasts, speak on panels, and turn their blog posts into a published book. Who knew that 400 words could open so many doors?

Lessons Learned – hesitant to start blogging? Most evaluators are concerned that blogging will be time-consuming. So, I conducted some super-serious research to test this hypothesis. Results indicate that, yes, it takes one million hours to write your first blog post. But, with practice, you’ll be writing blog posts in an hour or less. Stick with it!


This winter, we’re continuing our series highlighting evaluators who blog.
