AEA365 | A Tip-a-Day by and for Evaluators


Hi, I am Paula Egelson, research director at the Southern Regional Education Board in Atlanta. For this week of the aea365 blog, board members from the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) will be sharing posts associated with the tenets of the CREATE organization: assessment, teacher and principal effectiveness, program evaluation, and accountability.

CREATE has a long history with the Joint Committee on Standards for Educational Evaluation (JCSEE), which has been housed over the years at Western Michigan University’s Evaluation Center, the University of Iowa, and Appalachian State University. Since its founding in 1974, JCSEE members representing research and practitioner organizations, both nationally and internationally, have created and revised the Program Evaluation Standards, the Personnel Evaluation Standards, and the Classroom Assessment Standards.

Hot Tips:

Our focus today is on the Program Evaluation Standards and some of their uses. The Program Evaluation Standards apply to a wide variety of settings in which learning takes place, from schools and universities to nonprofits and the military. The 30 standards are organized around five key attributes: utility, feasibility, propriety, accuracy, and accountability. Each attribute includes the key concepts related to it, standards statements, implementation suggestions, hazards to avoid, case narratives, and references for further reading.

The Program Evaluation Standards provide guidance and support reflective practice associated with:

  • Whether and when to evaluate,
  • How to select evaluators and other experts,
  • The impact of cultures, contexts and politics,
  • Communication and stakeholder engagement,
  • Technical issues in planning, designing and managing evaluations,
  • Uses and misuses of evaluations,
  • Issues related to evaluation quality, improvement and accountability.

Among other things, the Program Evaluation Standards can help evaluators resolve common evaluation issues such as the following:

  • Stakeholder over-involvement in the evaluation,
  • Agency disagreement over the evaluation recommendations,
  • A contractor desiring an evaluation report with predetermined outcomes,
  • Stakeholders “sitting on” an evaluation report, and
  • A lack of data collection integrity (untimely data collection, supervisor review of employees’ survey responses, teachers reviewing an online test or survey before administration begins, or failure to follow random sampling guidelines; a small sampling sketch follows this list).
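On that last point, one simple safeguard is a documented, reproducible draw. Here is a minimal Python sketch; the roster, sample size, and seed are hypothetical placeholders for illustration, not part of the Standards themselves:

```python
import random

# Hypothetical roster of eligible respondents; in practice this would
# come from program records.
roster = [f"participant_{i:03d}" for i in range(1, 201)]

SAMPLE_SIZE = 30  # illustrative target sample size
SEED = 2018       # fixed, documented seed so the draw can be reproduced

rng = random.Random(SEED)                 # local RNG; leaves global state alone
sample = rng.sample(roster, SAMPLE_SIZE)  # simple random sample without replacement

print(sorted(sample))
```

Recording the seed and the roster version in the evaluation plan lets anyone verify later that the sample was drawn as described.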

I encourage you to take the opportunity to access The Program Evaluation Standards and determine how these standards can best be of use to you and your colleagues. I look forward to hearing from you about your uses of the standards and to receiving your feedback on them.

Rad Resources:

Detailed information about the Program Evaluation Standards

Information about the work of the Joint Committee on Standards for Educational Evaluation

For more information about CREATE, please go to www.createconference.org. CREATE’s annual research and evaluation conference will take place at the College of William & Mary in Williamsburg, Virginia, on October 11 and 12, 2018. We hope to see you there!

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Tom Chapel. I’m a current AEA Board member and also serve in a senior evaluation position with a Federal agency. I’ve been consulting on evaluation and on how to incentivize evaluation capacity in all types of organizations for several decades. I’m not sure that setting evaluation policy in general is significantly different from setting evaluation policy in any large organization, and I hope the hints below will resonate across sectors. Here are some tips for devising, implementing, and putting to use an effective evaluation policy, based on my primarily public sector experience.

Hot Tips:

Be flexible about designs. Unless all evaluations in the organization are done to demonstrate impact using a control group, be clear in the policy that evaluation designs can vary with the situation. A process evaluation does not require the rigor of an impact evaluation. High-level evaluations for management feedback may benefit as much from a dashboard of performance measures as from a full-on research design. Regardless, all of it is evaluation.

Commit yourself to continuous program improvement. A useful policy makes clear that evaluation plays a role at all stages of program development, even if the key evaluation questions and methods vary over the life of the program.

Expect pushback. There are many reasons programs resist evaluation—policy or not—besides the fear of exposing program failure. Diverting resources from the program, the long timeframe for results, and, sometimes, the lack of external validity are but a few.  But if you’re faithful to the first few steps in designing and implementing your policy—designs that match the situation and commitment to an evaluation that yields useful results—then most of the reasons to resist or “game” the policy disappear.

Be “high touch”—the usefulness of technical assistance. No one likes unfunded or neglected mandates. An evaluation policy that comes without resources or technical assistance is unlikely to take hold. Evaluation is not nearly as hard as people make it, and the purpose of technical assistance is to keep people from both overkill (too much attention) and underkill (not enough attention). Coaching helps them get on track from the start.

Look for “process use” wins. Policy gets evaluation in the door. But an evaluation process that provides clarity or uncovers inconsistencies or logical gaps in the program design or theory of change is often the “aha” needed to sell evaluation to the skeptical. It should be no surprise that the biggest added value of evaluation may come before the data are collected.

Standardize your terms. Nothing undermines evaluation, performance measurement, and even strategic planning more than inconsistency in the definition of terms. Whether or not you require that people adopt your definitions, establish how key terms—input, output, outcome, impact, indicator, and measure—will be used in the policy.

Setting organizational policy of any kind is hard.  When that policy requires more resources or a shift in traditional practices, it becomes all the harder.  But paying attention to these six tips might pave the road to success.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Nick Hart, the current chair of the EPTF and Director of the Bipartisan Policy Center’s Evidence-Based Policymaking Initiative. In 2016, Congress and the president established the federal Commission on Evidence-Based Policymaking, which studied how to improve government’s infrastructure for evidence building and offered a series of recommendations about strengthening government’s evidence capacity, among other topics.

While the Commission deliberated in 2016 and 2017, AEA provided direct input about evaluation policies the Commission could consider, including testimony from former EPTF Chair George Grob. When the Commission issued its final report in 2017, it included several notable recommendations related to evaluation, such as establishing chief evaluation officers in federal agencies and creating learning agendas, which are strategic plans for research and evaluation.

In the months following the Commission’s report, AEA co-hosted a forum with other professional associations to discuss implementation of the recommendations and applauded the commission’s goal to institutionalize the evaluation function in government, joining more than 100 organizations in backing aspects of the Commission recommendations.

While it’s one thing for a policy Commission to issue recommendations, it’s another to see those recommendations become reality. Here’s a quick snapshot of what has happened with implementation so far:

  • Foundations for Evidence-Based Policymaking Act. In October 2017, House Speaker Paul Ryan (R) and Senator Patty Murray (D), who also championed the creation of the Commission, co-filed HR 4174, which would require major federal agencies to identify and designate chief evaluation officers and to establish learning agendas. The legislation also includes a number of provisions on data availability and privacy protections that would affect evaluators. The legislation moved unanimously through the House of Representatives in November 2017 and is currently awaiting Senate action, expected sometime in 2018.
  • President’s Management Agenda. In May 2018, the White House Office of Management and Budget announced a new plan for improving how government operates. The plan includes a priority goal of improving how government uses data, including for evaluation activities. In the coming months the White House will provide additional details about how the plans will affect decision-making and accountability, including through the use of learning agendas recommended by the commission.

The Commission’s recommendations could lead to changes in how government handles evaluation policy moving forward. If the legislation becomes law or the President’s Management Agenda includes directives for all agencies, numerous opportunities will emerge for ongoing engagement with the Federal government to help shape, improve, or in some cases establish the evaluation function within government.

Rad Resource: Read and learn what the Commission said about evaluation in its final report.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Federal evaluation policies have the potential to affect the direction of the evaluation field, the implementation of our craft, and ultimately the interpretation of how well policies and programs are implemented and achieve their intended goals. Recognizing the growing dialogue about the evaluation function within the federal government, the American Evaluation Association established the Evaluation Policy Task Force (EPTF) in 2007.

For more than a decade, an all-volunteer task force has made recommendations to the AEA Board to positively influence and provide strategic advice to policymakers about how to most effectively shape evaluation policies. I’m Nick Hart, the current chair of the EPTF. I am joined by Katrina Bledsoe, Tom Chapel, Katherine Dawes, Diana Epstein, George Julnes, Mel Mark, Kathryn Newcomer, Demetra Nightingale, and Stephanie Shipman.

The EPTF’s charge focuses on evaluation policy. Consistent with that charge, the Task Force’s efforts cover policy issues related to evaluation definitions, requirements, methods, resources, implementation, and ethics. This focus has led to a number of accomplishments over the past decade.

  • Evaluation Roadmap. In 2013, AEA members approved the EPTF-developed document An Evaluation Roadmap for a More Effective Government. The document has been instrumental in shaping AEA’s approach to improving federal evaluation policy and implementing the evaluation function throughout government, and it is widely cited in government policy publications.
  • Federal Evaluation Policies. Recently, numerous federal departments and agencies have developed evaluation policies. EPTF members contributed to the development of policies at the State Department and the United States Agency for International Development to encourage alignment with AEA’s policies and the Evaluation Roadmap.
  • Capitol Hill Visits. In 2013, the EPTF launched a partnership with Washington Evaluators to encourage AEA members to visit Capitol Hill to discuss evaluation with Members of Congress and their staff. During the 2013 event, 69 AEA members from 31 states participated. When the conference returned to Washington, DC in 2017, 80 members from 35 states participated.
  • Evidence Commission. In 2016 and 2017, the Task Force provided input on the deliberative activities of the U.S. Commission on Evidence-Based Policymaking, including testimony that helped shape the final recommendations. AEA applauded the commission’s recommendations to institutionalize the evaluation function in government.

There’s much for AEA members to be proud of in the organization’s ability to help shape evaluation policy. During EPTF week on AEA365, we’ll highlight opportunities for the future. On Tuesday, Stephanie Shipman discusses how members can comment on revisions to AEA’s Evaluation Roadmap. On Wednesday, Demetra Nightingale discusses how members can provide feedback directly to federal agencies on evaluation policies. On Thursday, I offer some insights about the recent Commission on Evidence-Based Policymaking and the implications for the evaluation field. On Friday, Tom Chapel offers an overview of important evaluation policy themes in the public sector. Finally, on Saturday, Katrina Bledsoe highlights the intersection of policy with philanthropy.

Moving forward, I encourage you to let the Task Force and AEA Board know if there are evaluation policy concerns or issues you would like AEA to focus on through AEA’s Issues and Ideas Portal.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello aea365 readers! I’m Sheila B Robinson, aea365 Lead Curator and sometimes Saturday contributor. This past week, I taught courses at AEA’s Summer Evaluation Institute in Atlanta, GA. One of my courses, It’s Not the Plan, It’s the Planning: Strategies for Evaluation Plans and Planning, fills up every year. In this course, participants learn:

  • How to identify types of evaluation activities (e.g. questions to ask of potential clients) that comprise evaluation planning
  • Potential components of a comprehensive evaluation plan
  • How to identify key considerations for evaluation planning (e.g. client needs, collaboration, procedures, agreements, etc.)

What isn’t in the curriculum for this course is the answer to a few questions participants frequently ask at the end:

  • How do I engage stakeholders in the evaluation?
  • How do I get buy-in from stakeholders?
  • How can I get stakeholders to value the evaluation?

Lesson Learned:

My best advice for stakeholder engagement and obtaining buy-in for evaluation is to engage early and often. Meet with people, and share information about the evaluation and how it’s going. Offer interim reports, even if you have only a little to report. Most importantly, meet people where they are: if stakeholders are concerned with bottom-line dollars and cents, talk with them about that. If they’re concerned about the impact on the target population, share what beneficiaries are doing in the program and how they are faring. In other words, tailor your interactions with, and presentations to, stakeholders to align with their specific areas of concern and interest, and connect on both an emotional and an intellectual level. Evaluation is not just about data. It’s about people. Successful stakeholder engagement cannot be achieved with a one-size-fits-all approach! For more specifics, here are a few helpful resources.

Rad Resources:

Get Involved:

I’m certain our readers would appreciate more on this topic. What are your best strategies for stakeholder engagement? Please offer them in the comments, or better yet, contribute a blog article on the topic!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello AEA members! My name is Miki Tsukamoto and I am a Monitoring and Evaluation Coordinator at the International Federation of Red Cross and Red Crescent Societies (IFRC).

Video has proved to be a useful data collection tool for engaging communities in sharing their feedback on the impact of IFRC projects and programmes.[1] In an effort to develop more efficient and inclusive approaches to monitoring projects, IFRC’s Planning, Monitoring, Evaluation and Reporting (PMER) Unit in Geneva, in cooperation with Newcastle University’s Open Lab and in coordination with the Indonesian Red Cross Society (Palang Merah Indonesia, or PMI) and IFRC Jakarta, piloted an initiative in the community of Tumbit Melayu in 2017, using the Most Significant Change approach facilitated by a mobile video application (app) called “Our Story,” adapted from the Bootlegger app. Stories were planned, collected, directed, and edited by women, men, youth, and elderly of the community through this “one stop shop” mobile application. The aim was to gather feedback on a water, sanitation and hygiene promotion (WASH) project being implemented by PMI with the support of IFRC in the district of Berau, East Kalimantan province. Costs of this pilot project were minimal, as the app allows video data collection to be done without continuous reliance on external expertise or expensive equipment.

Our Story: Women’s feedback on a WASH project in Berau, Indonesia


Our Story: Elderly’s feedback on a WASH project in Berau, Indonesia

Our Story: Youth’s feedback on a WASH project in Berau

Our Story: Men’s feedback on a WASH project in Berau

Our Story: Community’s feedback on a WASH project in Berau, Indonesia

Lessons Learned:

  • Data collection: When collecting disaggregated data, it is important that facilitators be flexible and respect the rhythm of each community group, including their schedules and availability. (For a small illustration of what disaggregation means in practice, see the sketch just after this list.)
  • Community needs: Collecting stories from representative groups in the community gives organizations an opportunity to dive deeper into the wishes of the community, and therefore to better understand and address their varying specific needs.
  • Our Story app: The community welcomed this new tool as it was an app that facilitated the planning, capturing and creation of their story on a mobile device. This process can be empowering for an individual and/or group, and serve to increase their interest and future participation in IFRC and/or National Society-led projects.
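To make “disaggregated data” concrete: the idea is simply to keep each community group’s feedback separate rather than pooling it. Below is a minimal Python/pandas sketch; the records and themes are hypothetical illustrations, not data from the actual pilot.

```python
import pandas as pd

# Hypothetical feedback records from story-collection sessions;
# real records would come from the app's exported data.
records = pd.DataFrame({
    "group": ["women", "men", "youth", "elderly", "women", "youth"],
    "theme": ["water access", "latrines", "water access",
              "hygiene promotion", "hygiene promotion", "water access"],
})

# Disaggregate: tally feedback themes separately for each community group
# instead of pooling all respondents together.
by_group = records.groupby(["group", "theme"]).size().unstack(fill_value=0)
print(by_group)
```

Reading the tallies row by row shows where, say, the elderly’s priorities diverge from the youth’s—exactly the kind of difference that pooled data would hide.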

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

[1] Recent participatory video initiatives produced by communities receiving assistance from IFRC and/or National Society projects can be found at: https://www.youtube.com/playlist?list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2


Beverly Peters

Greetings! I am Beverly Peters, an assistant professor of Measurement and Evaluation at American University. I have over 25 years of experience teaching, researching, and designing, implementing, and evaluating community development and governance projects, mainly in southern Africa.

This year’s AEA Conference theme, Speaking Truth to Power, addresses the heart of a concern that I have considered for years. I have found from my work in Africa that, as an evaluator, I inherently carry a certain amount of unwelcome power into my interactions with stakeholders. I have spent more than two decades asking how I can better understand that power and mitigate it, so that I can hear the truth from stakeholders.

I first realized this power of the evaluator when I was conducting my PhD dissertation research in two villages in South Africa, and later as I continued microcredit work in the region. Issues of racial and economic privilege permeated my work in an environment emerging from more than four decades of apartheid. How could I ensure that stakeholders would not be silenced by that power? How could I ensure that the messages stakeholders gave me were not distorted? While working on microcredit projects, I used ethnographic research methods and intercultural communication skills to break down power relationships. Although it was time consuming, ethnographic storytelling helped give my work perspective, and rural villagers a voice.

The position of power and privilege has a host of facets to consider, some of which are not easily addressed. Many of these are related to the nature of the evaluator/stakeholder relationship, as I saw in my early work in South Africa. In the years since, I have also recognized that who I am as a person and an evaluator—my gender, age, nationality, and race, just to name a few attributes—impacts the data that I collect and the data to which I have access. This position of privilege, together with the attributes named above, can prevent evaluators from speaking truth to power.

Hot Tips:

How can I begin to break down this unwelcome position of privilege and address these inherent challenges, so that I can find ways to speak truth to power?

  • Keep a personal journal during every project. This will help you be self-reflective about who you are as a person and an evaluator, and help you identify how the data might be affected.
  • Craft a strong Evaluation Statement of Work that guides the evaluation and anticipates power relationships in the evaluand.
  • Secure a diverse evaluation team that includes local experts who will contribute to data collection, dialogue, and understanding.
  • Develop intercultural communication skills and use qualitative data collection techniques to uncover the emic, or insider, values of the stakeholder population.

My experiences have shown that being self-reflective, having a strong evaluation plan and a diverse evaluation team, and collecting emic data can go a long way toward identifying, understanding, and presenting insider values that can challenge the bonds of power over time.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Martha Brown, president of RJAE Consulting. This blog sheds light on the need to Speak Truth to Power (STTP) in AEA face-to-face and virtual spaces when racism, male supremacy, and other oppressive forces act to silence others. How do AEA members silence others? Here are two examples.

First, soon after subscribing to EVALTalk in 2016, I noticed sexism, misogyny, and racism frequently present in the discussion threads. For instance, an African evaluator commented that requests for assistance and information made by African evaluators are often ignored. Many people were upset and sought to remedy the situation in various ways. A few men entered the conversation, exercising white male privilege in full force. First, they denied that racism was the problem. Worse yet, one man blamed the African evaluator for not doing more to be heard. According to Jones and Okun, a symptom of white supremacy culture is “to blame the person for raising the issue rather than to look at the issue which is actually causing the problem.” Yet so many of us stood by and said nothing.

At Evaluation 2017, I attended what was supposed to be a panel presentation by three women. However, for the first 10 minutes, all we heard was the lone voice of a man in the front row who seemed to think that what he had to say was far more important than what the three female panelists had to say. Privilege normalizes silencing tactics, as “those with power assume they have the best interests of the organization at heart and assume those wanting change are ill-informed (stupid), emotional, inexperienced” (Jones & Okun, 2001). Yet not one person – not even the session moderator – intervened and returned the session to the presenters.

If others have similar stories, please share in the comments. No longer can we permit anyone to degrade, diminish or dismiss someone else’s work in AEA spaces. When it happens, we must lean into the discomfort and shine light onto the dark veil of sexism, racism, elitism, etc. right then and there. If we don’t, then we are complicit in allowing the abuse of power to continue.

Personally, I can no longer carry the burden of guilt and shame for allowing myself or my fellow evaluators to be silenced while I say nothing. Enough is enough. A new day is dawning, and it is time to speak truth to power in the moment when power is attempting to silence someone. Will you join me?

Rad Resources:

Virginia Stead’s RIP Jim Crow: Fighting racism through higher education policy, curriculum, and cultural interventions

Jones & Okun’s White supremacy culture, from Dismantling racism: A workbook for social change groups

Gary Howard’s We can’t teach what we don’t know

Ali Michael’s How Can I Have a Positive Racial Identity? I’m White!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Scott Chaplowe and I currently work as the Director of Evidence, Measurement and Evaluation for climate at the Children’s Investment Fund Foundation (CIFF). As an evaluation professional, much of my work is not simply doing evaluation, but building the capacity of others to practice, manage, support, and/or use evaluation. I’ve discovered I am not alone, as other evaluation colleagues have echoed similar experiences with evaluation capacity building (ECB).

Hot Tips: Based on an expert lecture I gave on this topic at AEA2017, here are five considerations for building evaluation capacity:

  1. Adopt a systemic (systems) approach to organizational evaluation capacity building (ECB). ECB does not happen in isolation, but is embedded in complex social systems. Each organization will be distinct in time and place, and ECB interventions should be tailored to the unique configuration of factors and actors that shape the supply of and demand for evaluation. Supply refers to the presence of evaluation capacity (human and material), and demand refers to the incentives and motivations for evaluation use. The conceptual diagram below illustrates key considerations in an organizational ECB system.

  2. Plan, deliver, and follow up ECB with attention to transfer. If organizational ECB is to make a difference, it is not enough to ensure learning occurs; targeted learners need to apply their learning. As Hallie Preskill and Shanelle Boyle aptly express, “Unless people are willing and able to apply their evaluation knowledge, skills, and attitudes [“KSA”] toward effective evaluation practice, there is little chance for evaluation practice to be sustained.”

  3. Meaningfully engage stakeholders in the ECB process. ECB will be more effective when it is done with rather than to organizational stakeholders. Meaningful engagement helps build ownership to sustain ECB implementation and use. It is especially important to identify and capitalize on ECB champions, and to mitigate ECB adversaries who can block ECB and its uptake.

  4. Systematically approach organizational ECB, but remain flexible and adaptable to changing needs. ECB is intentional, and therefore it is best planned in an orderly way: gather information and analyze demand, needs, and resources; identify objectives; and design a realistic strategy to achieve (and evaluate) ECB objectives. However, a systematic approach does not mean a rigid blueprint that is blindly followed, which can inhibit experimentation in response to changing capacity needs. ECB should remain flexible to adapt to the dynamic nature of the ECB system, which will vary and change over time and place.

  5. Align and pursue ECB with other organizational objectives. ECB should not be siloed, but ideally planned with careful attention to other organizational objectives and capacity building interventions. Consider how ECB activities complement, duplicate, or compete with other capacity building activities.

Rad Resources: Read more about the full top-10 list here, and view the AEA365 presentation. Also, check out the book Monitoring and Evaluation Training: A Systematic Approach; this webpage has an assortment of resources to support evaluation learning and capacity building.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Jayne Corso and I am the community manager for the American Evaluation Association.

Social media offers a great way to have conversations with like-minded individuals. But what if those like-minded individuals don’t know you have a Facebook, Twitter, or LinkedIn page? I am sharing a few easy tips for getting the word out about your social media channels.

Hot Tip: Have Social Media Prominently Displayed on Your Website

A great way to show that you are on social media channels is to display social media icons at the top of your website. Some organizations put these at the bottom of their website where they usually get lost—when was the last time you scrolled all the way to the bottom of a website?

Moving your icons to the top of your website is also helpful for mobile devices. More and more people are using their cell phones instead of desktops to browse websites. With the icons above the “fold,” or at the top of your page, they are easy to find no matter what device you are using.

Hot Tip: Reference Social Media in Emails

You are already sending emails to your followers or database, so why not tell them about your social media channels? You can do this in a very simple way: add the icons to your email template, or call out your social channels in your emails. Try a dedicated email promoting your social channels. Social media is the most direct way to communicate with your followers or database, so showcase this benefit to your fans!

Hot Tip: Continue the Conversation on Social Media

Moving conversations to your social media pages can add longevity to your discussion and invite more people to participate. If you have written an email about an interesting topic, invite your database to continue the conversation on Twitter. You can create a hashtag for your topic so all posts can be easily searched. You can also do this on Facebook and encourage a conversation in the comments of a post.

I hope these tips were helpful. Follow AEA on Facebook and Twitter!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

