AEA365 | A Tip-a-Day by and for Evaluators

TAG | leadership

Hello! I am Jennifer Obinna, Ph.D., M.S.S.W., and I lead evaluation teams at The Improve Group, a research and evaluation consulting firm in St. Paul, Minnesota. Whether I am leading an internal team that reports to me or a collaborative team with clients and community partners, I look for ways to share power so that everyone can contribute their best.

My approach to leading teams has three dimensions: facilitative leadership, building on strengths, and supporting collaboration.

“Facilitative leadership” is my preferred style because it supports a collaborative environment. Sometimes people compare and contrast the characteristics of facilitative and hierarchical leaders this way:

  • Hierarchical leaders assume top-down authority, know what to do, seek the “right” decision, and rely on individuals.
  • Facilitative leaders assume the power of the team as a unit, know the various “how to” methods, seek team ownership, and rely on the team’s collective ability to take action.

Facilitative leadership is important when you want buy-in, adaptability, and creativity from the team. In our work, we are seeing a shift toward facilitative leadership because it is dynamic and empowering.

Lesson Learned: Build on strengths and passions

In our team discussions, we often think about how some tasks come easily to us, while others take more time and effort to master. We can liken this to using one’s dominant hand compared to the non-dominant one. Knowing what comes easily to each team member and which tasks will take longer helps us think through task assignments and what kind of support team members will need to succeed.

Similarly, we pay attention to which tasks make us feel energized and which leave us feeling cold and depleted. A team member may be competent at tasks that leave them uninspired. Others may have a passion to learn something new, but need more time and coaching as they master new skills. Paying attention to strengths and passions helps develop team members individually, shows the team how to support each other with a balanced workload, and allows teams to thrive.

Lesson Learned: Support collaboration

At its best, collaboration is a form of co-creation. Early in a project is a fertile time to collaborate. We start each workplan with a sketch. Instead of having one team member responsible for the initial sketch, we encourage all team members to brainstorm ideas to define and refine the actions together. This connects team members to the work and creates better products and outcomes.

The American Evaluation Association is celebrating Evaluation Teams Week. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings, I am June Gothberg, Ph.D., from Western Michigan University, Chair of the Disabilities and Underrepresented Populations TIG and co-author of the Universal Design for Evaluation Checklist (4th ed.). Historically, ours has been a ‘working’ TIG, collaborating with AEA and the field to build capacity for accessible and inclusive evaluation. Several terms tend to describe our philosophy – inclusive, accessible, perceptible, voice, empowered, equitable, representative, to name a few. As we end our week, I’d like to share major themes that have emerged over my three terms in TIG leadership.

Lessons Learned

  • Representation in evaluation should mirror representation in the program. Oftentimes this is overlooked in evaluation reports. The example below, from a community housing evaluation, shows data that overrepresented some groups and underrepresented others.

[Figure: HUD Participant Data Comparison]

  • Avoid using TDMs:
    • T = tokenism: giving participants a voice in evaluation efforts but little to no choice about the subject or style of communication, and no say in how the effort is organized.
    • D = decoration: asking participants to take part in evaluation efforts with little to no explanation of the reason for their involvement or how it will be used.
    • M = manipulation: pressuring participants into evaluation efforts. One example, presented in 2010, involved food stamp recipients who were required to answer surveys, which included identifying information, or lose their eligibility for continued assistance.
  • Don’t assume you know the backgrounds, cultures, abilities, and experiences of your stakeholders and participants. If you plan for all, all will benefit.
    • Embed the principles of Universal Design whenever and wherever possible.
    • Utilize trauma-informed practice.
  • Increase authentic participation, voice, recommendations, and decision-making by engaging all types and levels of stakeholders in evaluation planning efforts. The IDEA Partnership depth-of-engagement framework for program planning and evaluation has been adopted in state government planning efforts across the United States.

[Figure: IDEA Partnership Leading by Convening Framework]

  • Disaggregating data helps uncover and eliminate inequities. The example below uses data from Detroit Public Schools (DPS). DPS is often in the news, cited as having dismal outcomes. If we compare state data with DPS data, does it really look dismal?

[Figure: 2015-16 Graduation and Dropout Rates]

Disaggregating by one level would uncover some inequities, but disaggregating by two levels shows areas that can and should be addressed.

[Figure: 2015-16_Grad_DO_rate_DTW_M_F]
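To make the mechanics concrete, here is a minimal sketch in pandas of one-level versus two-level disaggregation. The data values are invented for illustration; they are not the actual Michigan or DPS figures.

```python
import pandas as pd

# Illustrative student-level records: invented values, not real data.
students = pd.DataFrame({
    "district":  ["State", "State", "State", "State", "DPS", "DPS", "DPS", "DPS"],
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "M"],
    "graduated": [1, 1, 1, 0, 1, 1, 0, 0],
})

# One level of disaggregation: graduation rate by district.
print(students.groupby("district")["graduated"].mean())

# Two levels: district by gender. Gaps hidden inside the district
# averages become visible here.
print(students.groupby(["district", "gender"])["graduated"].mean())
```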

We hope you’ve enjoyed this week of aea365 hosted by the DUP TIG.  We’d love to have you join us at AEA 2017 and throughout the year.

The American Evaluation Association is hosting the Disabilities and Underrepresented Populations TIG (DUP) Week. The contributions all week are focused on engaging DUP in your evaluation efforts. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi all! My name is Sheena Horton, President-Elect and Board Member for the Southeast Evaluation Association (SEA). As I have been learning more about the traits of great leaders and how leaders mobilize others, I have found one element that is frequently mentioned: a leader’s influence.

Influence may seem like an obvious determinant of a leader’s success; you’re not a leader if no one will follow you. Think about a colleague for whom you would work hard without hesitation, and then think about a colleague for whom you would not. Why do you want to help the first colleague but avoid the second? What makes some leaders more effective than others? How do leaders influence others?

Hot Tips:

  • Ask. Show interest in your colleagues. Ask about their day, goals, and challenges. Build rapport and be people-focused instead of task-focused. Understanding their needs will help you convey to them the benefits of listening to you.
  • Listen. Effective leaders take the time to listen. There is a difference between leading and simply managing: managers command action, while leaders inspire it. Leading means focusing on others, not yourself.
  • Visualize the other side. Try to understand the other person’s perspective and motivations. By doing so, you will be in a better position to address their concerns, tap into their motivations, and utilize their strengths and interests to build a more effective and mutually beneficial working relationship.
  • Be proactive. Identify, monitor, and manage risks to your team’s success. Ask your team what they need to complete their tasks, and make sure they have what they need to get things done. Address issues quickly and directly.
  • Build credibility through your actions. Consistency is key; unpredictability weakens your ability to influence and lead. Build trust and credibility by following through on what you say. Be the person that others seek out for solutions. Provide reasons for the actions you want taken.
  • Show appreciation. A simple “thank you” or “good job” can go a long way. Express your interest and investment in your team’s growth and success by providing constructive feedback. This feedback provides valuable insight, builds trust, and is an opportunity to motivate. Be supportive by mentoring or providing training or assistance.

Remember: Leadership is not about you. It’s about them. Leadership is about influencing others so they will want to help you.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Jennifer V. Miller. For my entire career, I’ve been in some sort of consultative role – either internally as a human resource generalist and training manager in corporate America, or for my consulting company SkillSource.

When you are a consultant, your primary role is to assess and then make recommendations for improvement. It’s my observation that people will not take action on your recommendations if they don’t trust you. What follows is my take on trust-building with your customers. “Customers” in this context means anyone who asks for your professional recommendation. For evaluators, this affects the entire process, from the initial consultation to the customer’s utilization of your final recommendations.

Lesson Learned:

Customers use several measuring sticks to gauge whether or not they trust the advice they’re getting from their consultant. For one, they’re checking out which direction your moral compass points. They’re watching to see if you act with integrity.

Here’s something I learned a long time ago: in your customer’s eyes, integrity is only the start of building a trusting customer-consultant relationship. You see, it’s not enough to behave ethically to be seen as trustworthy.  You also need to understand your customers’ unique trust filters, which they apply in addition to their perceptions of your moral compass.

Hot Tip:

A customer’s personality is reflected in their trust filters.  For example, some folks are naturally more people-focused; others are more detail-oriented. Some people are hard-charging “get it done” types. Your customers are viewing all of your actions through the filter of these personality preferences. If, as a consultant, your actions don’t match up with their natural priorities, then your recommendations may not be fully trusted. Four typical trust filters are:

  • Quality – does your work meet your customer’s standard?
  • Getting Results – do you deliver results in the timeframe the customer expects?
  • Sociability – are interpersonal considerations as important to you as task-related issues?
  • Dependability – can the customer depend on you to deliver what you promise?

Your customers are using all four of these filters, but most likely they are relying most heavily on one of them, based on their personality. Pay attention and respond accordingly.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Kelly Hannum. I’ve been evaluating leadership development programs for almost two decades. I am convinced that effective leaders and effective evaluators have similar mindsets and employ similar skills. I encourage leaders to think like evaluators, and via this post I’m encouraging evaluators to think of, and develop, themselves as leaders.

At the Center for Creative Leadership, we say “effective leadership results in shared direction, alignment, and commitment.” Leaders help focus people on defining and achieving something of shared value, but effective leadership is often a collective act. How often have you worked with diverse stakeholders to create shared direction, alignment, and commitment related to an evaluation? Stakeholders often have different values and perspectives. Our role as evaluators is to lead effectively and respectfully in these complex situations, in a manner that reflects our Guiding Principles. What does “value” look like from different perspectives? What types of evidence of “value” are appropriate? Our training and experience are powerful assets, but left unchecked our assumptions can be a liability. Thinking of, and developing, ourselves as leaders can help us improve our evaluation practice.

Lessons Learned:

Be curious about yourself. Self-awareness is the foundation for being a good leader and for being a good evaluator. Understand your assets and limitations, plan accordingly, and continue to develop yourself. Challenge assumptions that may get in the way of understanding value from different perspectives.  Seek, consider, and apply feedback about yourself.

Be curious about others. Pay attention to other perspectives; that is the foundation for respect and for understanding complex situations. Examine and reexamine your perceptions, beliefs, assumptions, and stereotypes about individuals, groups, and even programs and processes. Seek different perspectives and listen with curiosity and openness.

Hot Tip:

Reflect on how you create shared direction, alignment, and commitment. Think about keeping a journal or having informal conversations or debriefs after key meetings.

Most successful development experiences contain elements of assessment, challenge, and support. Are yours balanced? What do you need to add or reduce?

  • Assess yourself from different perspectives to uncover areas of excellence as well as areas for growth
  • Challenge yourself by learning about and trying new things
  • Get the support you need to be effective

Rad Resources:

Track your reflections using a free online journal like Penzu.

The Leadership Learning Community offers a collection of free leadership development resources including evaluation of leadership development.

The Center for Creative Leadership offers free articles and podcasts. The white papers are particularly helpful.

AEA’s Statement on Cultural Competence in Evaluation provides an overview of cultural competence, why it is important, and how to develop it.

Flipping the Script: White Privilege and Community Building is useful to my work.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Rob Fischer, and I have just completed six years as board president of the Ohio affiliate of AEA, the Ohio Program Evaluators’ Group (OPEG). Founded in 1980, OPEG has a longstanding commitment to evaluation professionals in the state. Our annual paid membership fluctuates between 120 and 150.

In my total of eight years on the board I have learned much about how to keep an affiliate strong. As much as I would like to claim that top leadership is the driver behind an affiliate’s success, I know too well that this is only part of the story. It has been said that evaluation is a team sport, and so it is no surprise that the same is true of the work of AEA affiliates. A key task is maintaining a solid and active board and committee structure. I would like to offer three recommendations and three challenges to those working to keep their affiliate strong.

Hot Tip: Mix it up. The volunteer board needs to reflect the diverse reality of evaluators working in the field and integrate the strengths of those working in different settings. It is wise to include individuals from the public sector and philanthropy, independent consultants, and academics. Having multiple sectors represented increases the likelihood that programs and services will stay relevant to a broad constituency, and it also helps with dissemination and recruitment. Practically speaking, individuals in these different positions have varying levels of flexibility and types of resources. Plus, we all know that having too many Ph.D.s involved means less will get done!

Hot Tip: Avoid burn-out. Volunteer boards are great, but we run the risk of exhausting individuals who end up with disproportionate amounts of work. In OPEG, the VP position is also the Program Chair, in charge of planning two annual state-wide conferences. It’s no surprise that few VPs have held on to become President! This is something our board is working on. Areas that lend themselves to paid support, relieving the burden on volunteers, include the website, administrative services, and mailings to members.

Hot Tip: Be predictable. What is poison in a relationship is nectar in a professional association. Members need to know that programs and events are held regularly so that, even if they haven’t seen the flyer yet, they know it is coming and can plan accordingly.

Challenges (if you have the solution, please let me know!)

1) Convincing professionals to join the association even if they do not plan on attending the conference.

2) Balancing broad-scale accessibility of events with a need for regional/local events that are more frequent.

3) Constantly recruiting new leadership.

Hot Tip: Registration for OPEG’s interactive and experiential October 13th Fall Workshop, Embracing Change through Evaluation, is now open at www.OPEG.org.

The American Evaluation Association is celebrating the Ohio Program Evaluators’ Group (OPEG) Affiliate Week with our colleagues in the OPEG AEA Affiliate. The contributions all this week to aea365 come from our OPEG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! My name is Cathy Bear, and I am an assistant professor in the School of Education at Maryville University in St. Louis. I serve as one of four full-time faculty members in the educational leadership programs at both the master and doctoral levels.

Rad Resource: The Master of Arts and EdD in Educational Leadership are cohort-based programs designed to provide practical, experiential learning built upon the Educational Leadership Constituent Council (ELCC) Standards for School Leaders (NPBEA, 2002).  In 2007, we embarked on an action research project focused on assessment of our candidates with respect to the ELCC Standards. We developed a self-assessment tool that asks candidates to assess their level of proficiency along the following continuum:

  • 0 = “I have little or no experience or demonstrated success with this standard element.”
  • 1 = “I have attained some experience with this standard element; however, I have little or no demonstrated success applying the standard element to my daily practice.”
  • 2 = “I have developed competence with respect to this standard element, and I am able to apply these skills effectively in my daily practice.”
  • 3 = “I have consistently demonstrated success in implementing this standard element, and I have a strong track record of success with attaining student achievement goals related to this standard element.”

Hot Tip: The self-assessment is used in different ways at three benchmarks in the program:

  • At the beginning of the program, candidates are asked to review their results and develop a Personal Leadership Growth Plan. The candidate then reviews this plan with the advisor and internship mentor to design experiences tailored to the candidate’s learning needs.
  • At the mid-point of the program, the candidate completes the assessment and notes areas in which he or she can demonstrate growth. The candidate writes a rationale to justify the decision to move forward on the proficiency continuum. The candidate also provides evidence of this growth in the form of documentation that becomes part of the program portfolio.
  • At the end of the program, the candidate completes a final self-assessment. The candidate writes another rationale to justify movement along the continuum and provides evidence of this growth. The candidate then writes a reflection on growth over the course of the entire program.

Hot Tip: In addition to providing our candidates with a mechanism for monitoring growth, we use the results to identify strengths and weaknesses in program curricula and to determine areas of emphasis needed for a given cohort. Our research has revealed a need to focus more attention on Standard 4.0, dealing with communication and stakeholder involvement, and on Standard 6.0, dealing with leadership within the larger context.
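To illustrate how such results can be rolled up, here is a minimal hypothetical sketch in pandas; the candidates, standards, and scores below are invented for illustration, not our actual instrument or data.

```python
import pandas as pd

# Hypothetical self-assessment ratings on the 0-3 continuum above:
# one row per candidate per ELCC standard element. Invented data.
ratings = pd.DataFrame({
    "candidate": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "standard":  ["2.0", "4.0", "6.0"] * 3,
    "score":     [2, 1, 1, 3, 1, 0, 2, 2, 1],
})

# Mean cohort rating per standard; the lowest means flag areas of
# emphasis for the cohort (here, Standards 6.0 and 4.0).
print(ratings.groupby("standard")["score"].mean().sort_values())
```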

The research has provided us with information about each candidate’s growth and proficiency relative to the ELCC Standards. We have found this information useful in making recommendations regarding each candidate’s potential to be an effective school and/or district-level leader.

References:

National Policy Board for Educational Administration. (2002). Educational leadership constituent council standards for school leaders.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

My name is Aimee Sickels, and I am the Principal Evaluator and owner of Custom Evaluation Services, an independent evaluation consulting business. I am also currently working on a Ph.D. in Social Work at the University of South Carolina, studying systems approaches to solving some of our most complex social problems. What I’ve learned in my studies and in my work as an evaluator is that collaborative leadership is key for systems work. I find, however, that my clients often struggle with what collaborative leadership is and exactly how it should function when working with multiple systems of service providers. My helpful hints are related to collaborative leadership.

Hot Tip #1: Help leadership define what “they” mean by collaboration for the purposes of their project. These resources have been helpful for me:

www.Collaborative-Leaders.org

http://www.collaborativeleadership.org/

http://en.wikipedia.org/wiki/Collaborative_leadership

Hot Tip #2: Once you have an operational definition of collaboration down for the working group, spend time going “Back to the Basics.” I have found that we as evaluators assume too much when entering these complex systems projects, and this is a tremendous detriment.

Hot Tip #3: I have often worked in the field of Child Abuse and Neglect Prevention and find that, when I have various system representatives at a collaborative leadership table, I have several different definitions of Child Abuse and Neglect Prevention. I am sure this is the case across many service sectors. So, before I begin any collaborative work, I insist on shared definitions of the “issue” we are working to reduce or eliminate. It takes less than 15 minutes to put up a piece of chart paper, title it “Common Definition of the Problem,” and have everyone come to consensus on a working definition. Remind everyone that this definition is for that specific working group; it may differ in another group. Write it large for everyone to see and reference as the meetings continue, and make sure it is present at EVERY meeting. Keep it simple!

Once these core pieces are in place the group is ready to work as a system to solve the problems at hand!

The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.

My name is Susan Kistler, and I am AEA’s Executive Director. I contribute each Saturday’s post to aea365.

I have been reading and thinking about learning from failure. For those of us who are data-lovers, who find security in information, it can be a challenge to overcome the tendency to collect all possible information and explore all feasible outcomes before moving in a new direction. While I’ll never be one to dive in without testing the waters, I do want to go swimming a bit more often.

Rad Resource: Cannon and Edmondson’s 2004 paper Failing to Learn and Learning to Fail (Intelligently): How great organizations put failure to work to improve and innovate outlines three processes needed to fail intelligently – and positively:

  1. Identifying Failure: “The key organizational barrier to identifying failure has mostly to do with overcoming the inaccessibility of data that would be necessary to identify failures.” (p. 10)
  2. Analyzing and Discussing Failure: Create an open environment for discussing failures and overcome negative emotions associated with examining one’s own failures.
  3. Experimentation: Put new ideas to the test, gather comparative data, and embrace both those that succeed and those that fail as contributing to the ultimate success of an endeavor.

Key takeaways include:

  • “Most managers underestimate the power of both technical and social barriers to organizational failure.” (p. 3) Technical barriers include stakeholders lacking the know-how to use and analyze data in order to learn; social barriers include rewarding only success.
  • We must pay attention to and learn from small failures to prevent larger failures.
  • “Creating an environment in which people have an incentive, or at least do not have a disincentive, to identify and reveal failures is the job of leadership.” (p. 13)
  • “Conducting an analysis of failure requires a spirit of inquiry and openness, patience, and a tolerance of ambiguity. However, most people admire and are rewarded for decisiveness, efficiency and action rather than for deep reflection and painstaking analysis.” (p. 14)
  • “It is not necessary to make all [stakeholders] experts in experimental methodology, it is more important to know when help is needed from experts with sophisticated skills.” (p. 26)

For me, Cannon and Edmondson reaffirmed the value of formal and informal evaluation and its role in innovation. They made it clear that data-lovers are uniquely positioned to fail intelligently.

Rad Resource: Want to think further about learning from failure? Try Giloth and Gewirz’s (from the Annie E. Casey Foundation) Philanthropy and Mistakes: An Untapped Resource in the Foundation Review (free online), Blumenthal’s blog post on four steps to failing well (Fail Small, Fail Publicly, Fail to Win, Fail Proudly), or Pond’s Embracing Micro-failure (Sarajoy is an aea365 contributor, AEA member, and social entrepreneur!).

Note that the above reflects my own opinions and not necessarily that of my employer, AEA.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.
