AEA365 | A Tip-a-Day by and for Evaluators

Hi! We are Shelly Engelman, Kristin Patterson, Brandon Campitelli, and Keely Finkelstein of the Texas Institute for Discovery Education in Science (TIDES) at The University of Texas at Austin. The mission of TIDES is to promote, support, and assess innovative, evidence-based undergraduate science education. Much of our work involves partnering with STEM faculty to evaluate the efficacy and impact of education programs on students.

At the beginning of every project, our tendency as evaluators is to generate a logic model to visually represent how a program is intended to work and bring about change. Recently, however, faculty and staff’s reactions to logic models forced us to change directions and create a new, more useful and palatable format.

Here are a few quotes from faculty highlighting some of the impediments to using logic models with STEM faculty:

This [logic model] is really hard to digest and full of jargon. What’s the difference between an output and an outcome?

Wow…this is over my head. I’d like to see this summarized in a table format.

Cool…but, I don’t know how useful this will be. It would be helpful if I saw a timeline with clearly delineated roles and responsibilities.

How does this logic model relate to the evaluation plan? I’d like to see the logic model and evaluation plan on one page…in one figure.

Where are my program’s goals in this model? Can we emphasize them more?

Hot Tips:

Using a ‘who, what, when, why, and how’ approach, we re-designed our logic model to use terms, concepts, and a format that is more familiar to STEM faculty and staff. This approach not only captures the program essentials, but also assigns roles and responsibilities to staff members, establishes a timeline of events, integrates the evaluation component, and emphasizes the program’s goals. Instead of nebulous concepts framing the logic model (e.g., outputs), our re-designed approach is organized by the following questions; note how they are aligned to components of a logic model/evaluation plan:

Question → aligned to…

  • Why (are we doing this?) → Impact/Outcomes
  • What (are we doing?) → Activities
  • Who (is responsible?) → Inputs
  • When (will it be done?) → Timeline
  • How (will we evaluate it to know if it was done effectively?) → Evaluation methodology

Cool Tricks: Use a table format to rearticulate a classic logic model

Guided by these questions, it is fairly straightforward to rearticulate a logic model into a table format. We found that non-evaluators gravitate to a table because they can clearly see the alignment between their program’s goals, activities, timeline, and the evaluation methodology. Here is an example below:
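
Since the format is easiest to grasp from an example, here is what one row of such a table might contain (the program content below is hypothetical, for illustration only):

  • Why (goal): Increase persistence of students in introductory biology
  • What (activities): Weekly peer-led study groups
  • Who (responsible): Course instructor, supported by trained peer mentors
  • When (timeline): Weeks 2-14 of the fall semester
  • How (evaluation): Pre/post concept inventory and end-of-semester student focus groups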

Contribute Your Own Best Practices

We appreciate that the evaluation community has more to learn about effectively communicating logic models to non-evaluators. Previous aea365 blog posts by Corey Smith and Matt Keene suggest that there is a need to explore alternative approaches to logic models. We invite you to share your best practices. For those interested, we could put together a panel presentation at a future AEA conference!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Rick Davies, Evaluation consultant, from Cambridge, UK.

Predictive analytics is the use of algorithms to find patterns in data (e.g., clusters and association rules) by inductive means, rather than by theory-led hypothesis testing. I can recommend three free programs: RapidMiner Studio, BigML, and EvalC3. My main use of these has been to develop prediction models, i.e., to find sets of attributes that are associated with an outcome of interest.
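
To make the idea concrete, here is a minimal sketch of a prediction-model search in Python (a toy stand-in, not EvalC3 or the other tools; the case data and attribute names are invented): exhaustively test combinations of case attributes and keep the combination whose presence best predicts the outcome.

    # Toy prediction-model search: which combination of binary case
    # attributes best predicts a binary outcome? (Invented data.)
    from itertools import combinations

    cases = [
        {"training": 1, "mentoring": 1, "rural": 0, "outcome": 1},
        {"training": 1, "mentoring": 0, "rural": 0, "outcome": 1},
        {"training": 0, "mentoring": 1, "rural": 1, "outcome": 0},
        {"training": 0, "mentoring": 0, "rural": 1, "outcome": 0},
        {"training": 1, "mentoring": 1, "rural": 1, "outcome": 1},
        {"training": 0, "mentoring": 1, "rural": 0, "outcome": 0},
    ]
    attributes = ["training", "mentoring", "rural"]

    def accuracy(model, cases):
        # A model "predicts success" for a case when all of its
        # attributes are present in that case.
        hits = sum(all(c[a] == 1 for a in model) == bool(c["outcome"])
                   for c in cases)
        return hits / len(cases)

    # Exhaustively search all 1- and 2-attribute models and keep the best.
    models = [m for k in (1, 2) for m in combinations(attributes, k)]
    best = max(models, key=lambda m: accuracy(m, cases))
    print(best, accuracy(best, cases))   # -> ('training',) 1.0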

Here are some situations where I think prediction modelling can be useful, when looking at international development aid programs:

  1. During project selection:
    • To identify what attributes of project proposals are the best predictors of whether a project will be chosen for funding, or not
    • To identify how well a project proposal appraisal and screening process predicts the subsequent success of projects in achieving their objectives
  2. During project implementation:
    • Participants’ specific and overall experiences with workshops and training events
    • Donors’ and grantees’ specific and overall experiences of their working relationships with each other
  3. During a project evaluation:
    • “Causes of effects” analysis: To identify what combination(s) of project activities (and their contexts) were associated with a significant improvement in beneficiaries’ lives.
    • “Effects of causes” analysis: To identify what combinations of improvements in beneficiaries’ lives were associated with a specific project activity (or combination of activities)
    • To identify “positive deviants” – cases where success is being achieved when failure is the most common outcome (illustrated below).
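
A minimal sketch of that last use in Python (invented cases; EvalC3 supports this kind of query through its own interface):

    # Flag "positive deviants": cases that succeed within a subgroup
    # where failure is the most common outcome. (Invented data.)
    cases = [
        {"id": "A", "rural": 1, "outcome": 0},
        {"id": "B", "rural": 1, "outcome": 0},
        {"id": "C", "rural": 1, "outcome": 1},  # succeeds where most fail
        {"id": "D", "rural": 0, "outcome": 1},
        {"id": "E", "rural": 1, "outcome": 0},
    ]

    subgroup = [c for c in cases if c["rural"] == 1]
    failure_rate = sum(c["outcome"] == 0 for c in subgroup) / len(subgroup)

    if failure_rate > 0.5:
        deviants = [c["id"] for c in subgroup if c["outcome"] == 1]
        # These are the cases worth selecting for within-case follow-up.
        print("Positive deviants:", deviants)   # -> ['C']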

BigML and RapidMiner have more capacities than I needed. So, I developed EvalC3, an Excel app available here, where a set of tools is organised into a workflow:

In the Input and Select stages, choices are made about what case attributes and outcomes are to be analysed. In the Design and Evaluate stage, users can manually test prediction models of their own design, or they can use four different algorithms to find the best performing models. Different measures are available to evaluate model performance. All models can be saved, and the case coverage of any two or more models can be compared. The case membership of any one model can also be examined in more detail. This last step is important because it enables the transition from cross-case analysis to within-case analysis. The latter is necessary to identify whether there is any causal mechanism underlying the association described by the prediction model.
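
As a rough sketch of the arithmetic behind that Design and Evaluate stage (again a toy in Python with invented data and attribute names, not EvalC3’s actual code): score a candidate model with several performance measures, then compare the case coverage of two models.

    # Evaluate a candidate model (e.g., "training present predicts success")
    # with several measures, then compare two models' case coverage.
    cases = [
        {"id": 1, "training": 1, "mentoring": 1, "outcome": 1},
        {"id": 2, "training": 1, "mentoring": 0, "outcome": 1},
        {"id": 3, "training": 0, "mentoring": 1, "outcome": 1},
        {"id": 4, "training": 0, "mentoring": 0, "outcome": 0},
        {"id": 5, "training": 1, "mentoring": 1, "outcome": 0},
    ]

    def confusion(model):
        # Count true/false positives and negatives for the model.
        tp = fp = tn = fn = 0
        for c in cases:
            predicted = all(c[a] == 1 for a in model)
            actual = bool(c["outcome"])
            if predicted and actual: tp += 1
            elif predicted: fp += 1
            elif actual: fn += 1
            else: tn += 1
        return tp, fp, tn, fn

    tp, fp, tn, fn = confusion(("training",))
    print("accuracy :", (tp + tn) / len(cases))  # -> 0.6
    print("precision:", tp / (tp + fp))          # how often predicted success is real
    print("recall   :", tp / (tp + fn))          # how much of the success it covers

    def covered(model):
        # IDs of the cases a model picks out - the basis for comparing models.
        return {c["id"] for c in cases if all(c[a] == 1 for a in model)}

    print("covered by both:", covered(("training",)) & covered(("mentoring",)))  # -> {1, 5}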

The workflow design assumes that “Association is a necessary but insufficient basis for a causal claim,” which is more useful than simply saying “Correlation does not equal causation.”


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Bronwyn Mauldin, Director of Research and Evaluation at the Los Angeles County Arts Commission. I’m going to share the informal peer review process we use to improve the quality of our work.

Even if you’re not writing for an academic journal, you want to make sure your methods are rigorous, your findings watertight, and your final report lucid. How can you get an objective assessment before publication if your report doesn’t go through formal peer review? Ask an external colleague who works in the same field or uses similar methods to read it and give you feedback. In fact, ask two or three of them. Here at the LA County Arts Commission we’ve established a practice of doing this for every research or evaluation report we publish. It’s a simple idea we’ve found to be remarkably beneficial.

This practice is especially useful for those of us who produce what some call “gray literature”: research published by nonprofits, foundations, governments, or other non-academic institutions. While we may have the advantage of working closely with practitioners and subject-matter experts, we have less access to the kind of meticulous critique available in the academy.

Rad Resource: Your colleagues. Identify three or four experts outside of your organization, then ask them to review your report and comment on it. Provide guiding questions so they’ll pay attention to your key issues, but be open to whatever else they find. Be sure to credit your reviewers in the final report.

Lesson Learned: People can be remarkably generous with their time and expertise. We’ve sent reviewers reports that run to 70 pages or more, and others that were loaded with charts and graphs. Most people we’ve asked delivered thoughtful, thorough feedback.

Lesson Learned: Timing and communication are critical. Reach out to potential reviewers to get their commitment early in the writing phase. Send them the finished report when the text and charts are complete (but before the design phase). Give reviewers enough time for their review based on the length and complexity of the report, and a clear deadline. It might take a reminder or two, but most people eventually come through.

Cool Trick: Don’t limit yourself to colleagues you know. Contact the top experts in your field – both academics and others. This is also a great way to raise your profile with experts you’d like to get to know.

Independent evaluators who want to use informal peer review will probably need to let the institution they’re working for know in advance what they’re planning. Invite the institution to recommend experts to serve as reviewers.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello, my name is Jayne Corso and I am the community manager for the American Evaluation Association. Looking for an easier way to post on all of your social media channels? Try Hootsuite! I use Hootsuite to manage AEA’s Twitter and Facebook pages. Hootsuite is a social media management tool that helps you monitor your social channels and track what people are saying in the field. Here are a few tips for using the tool!

Rad Resource: Monitor multiple channels

The best feature of Hootsuite is that it allows you to manage multiple social media streams on one dashboard.  Through this tool, you can manage:

  • Twitter accounts
  • Facebook Profiles, Events, Groups, and Pages
  • LinkedIn Profiles, Pages, and Groups
  • Google+ Pages
  • Foursquare

Managing everything in one dashboard makes it easy to post to all of your channels from a single location. You can even post the same content across multiple platforms. However, be careful here: your Facebook fans and Twitter followers may have different needs.

Rad Resource: Schedule Posts

The scheduling feature on Hootsuite is very beneficial, especially for the busy professional who still wants to have a presence in social conversations. Hootsuite allows you to determine the time, date, and channel for your post. I recommend not posting too far in advance in order to stay relevant with your followers.

Rad Resource: Create Custom Dashboard

Hootsuite allows you to customize the information you see about each of your social media channels. For example, if you add your Twitter account to Hootsuite, you can customize the dashboard to view your newsfeed, mentions from other Twitter users, your tweets, new followers, retweets, scheduled tweets, and more. This allows you to see all the pieces of information that are truly relevant to your needs.

Rad Resource: Monitor keywords and hashtags

In addition to creating streams for your social media channels, you can create streams for keywords and hashtags, which allow you to follow conversations in the field. By choosing “add stream” and then selecting “search” or “keywords,” you can enter keywords, phrases, or popular hashtags. Follow words such as evaluation, #eval, data visualization, or #dataviz. Hootsuite will show you all of the tweets and posts related to that theme or topic. This is a great way to stay on top of the latest conversations in the field.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Sheena Horton

Do you consider meetings to be where time and productivity go to die? You’re not alone! We have all been in ineffective, mind-numbing meetings, but both leaders and attendees are responsible for a meeting’s success. I’m Sheena Horton, Consultant/Project Manager for MGT Consulting Group and President of the Southeast Evaluation Association, and I have some tips for how we can all contribute to having more mindful, productive meetings.

 

Hot Tips and Rad Resources – As a Leader:

  • Before scheduling a meeting, consider whether you need one. Is an email more appropriate for the topic or audience? Have a clear purpose and expectations for what is to be gained from the meeting.
  • Determine who needs to attend to accomplish the desired goals. The fewer, the better! Small groups encourage a more conversational tone and increased engagement from all attendees versus a large group.
  • Avoid scheduling meetings at the last minute. Allow time for yourself and attendees to prepare for the meeting, and be considerate of your attendees’ other commitments. Keep meetings short; 35-45 minutes is ideal. People become restless, tired, and disengaged during longer meetings. Your meeting is too long if you need breaks!
  • Always provide an agenda to attendees at least 1-2 days before the meeting. Do not delegate the meeting planning; it’s your meeting – own it! Create a simple bulleted agenda or download a free template from Microsoft (https://templates.office.com/en-us/Agendas) or TidyForm (https://www.tidyform.com/agenda-template.html), or utilize planning tools like Agreedo (https://www.agreedo.com). List agenda items as questions to encourage brainstorming, and avoid too many agenda items. After sharing the agenda, ask attendees if they have any questions or items to add. This will help you prepare answers ahead of the meeting.
  • Designate a notetaker. A record is vital for ensuring what was accomplished during a meeting is not forgotten or lost and is useful for upholding accountability regarding any assigned tasks. Minutes should be distributed to attendees promptly after the meeting.
  • Be mindful of meeting start/stop times and moderate as appropriate. Stick to your agenda. Set expectations for the meeting’s purpose and for how the meeting will be conducted. Robert’s Rules of Order (http://www.robertsrules.org) is a common resource used to govern meeting procedures. Allow for creativity and interaction among attendees, such as mind mapping using post-it notes or online tools like MindMeister (https://www.mindmeister.com).
  • Determine outcomes for agenda items. Resolve one agenda item before moving to the next. Determine the who, what, and when for assigned tasks. Follow up on action items after the meeting and track progress. Smartsheet offers great templates (https://www.smartsheet.com/15-free-task-list-templates) for tracking tasks.

Hot Tips and Rad Resources – As an Attendee:

  • Respond promptly to meeting invitations, review the agenda, and arrive on time. Consider agenda items carefully, brainstorm what you can contribute, and send questions to your meeting leader at least a day before the meeting.
  • Avoid distractions during meetings and be engaged! Make the time spent worthwhile for everyone.
  • Follow through on assigned action items. Adhere to deadlines and keep your meeting leader informed of progress.

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP)  TIG week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


John Murphy

Greetings, fellow evaluation enthusiasts! My name is John Murphy and I am an Evaluation Associate on the Evaluation Research and Measurement team at Cincinnati Children’s Hospital Medical Center. Our team, historically focused on program evaluation, was placed into the driver’s seat of the employee engagement surveys after a departmental reorganization. Our strengths in problem-solving, data literacy, and survey design generated this opportunity for us, but we lacked what The Power of People calls “HR sixth sense”, the ability to intuit the most relevant variables out of the myriad available to human resources (HR) professionals. That, along with the need to understand the different functions and relationships within the human resources department of a large organization, made for a challenging period of growth for our team.


After six months of struggle and success, we are sharing a few discoveries that might help others prioritize how they use their valuable resources of time and energy.

Lesson Learned 1: Find “thought leaders” with institutional and HR experience who can open up your mind to the intricacies of understanding employee experience.

This can be simple, like having lunch with a veteran manager, or more complicated, like a series of roundtables complete with PowerPoint or Prezi presentations and flipcharts. As long as it helps you find out what drives engagement, consider it time well spent.

Lesson Learned 2: Resist the desire to change everything and “make it your own.” Instead, focus on understanding the reasoning behind decisions that have been made.


We inherited the employee engagement survey process from knowledgeable staff members, and while we briefly felt the temptation to make widespread changes, we resisted. Our predecessors had many great processes in place. Understanding those processes and making incremental changes saved us the time of vetting new processes and introducing them to a large organization. The thought leaders we cultivated gave us the perspective needed to see how past decisions affected the organization as a whole. For example, before we decided to revise what seemed to be an extraneous question, we talked to organization leaders and found that the results from that item were being used in decision-making for one part of the hospital.


Heather Esper and Yaquta Fatehi

Hello, this is Heather Esper and Yaquta Fatehi of the Performance Measurement Institute at the University of Michigan (http://wdi.umich.edu/about). Our team specializes in performance measurement that helps organizations in low- and middle-income countries improve their effectiveness, scalability, and sustainability and create more value for their stakeholders.

The William Davidson Institute (WDI) uses social well-being indicators to address business challenges and improve organizational learning. We believe assessing multidimensional outcomes of well-being helps inform better internal decision making within businesses. These multidimensional outcomes move beyond economic indicators such as income and savings to include capability and relationship well-being indicators. Capability refers to constructs such as the individual’s health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual’s role in the family and community as well as the quality of the local physical environment.


For example, we conducted an impact assessment of a last mile distribution venture and focused on understanding the relationship between business and social outcomes. Through a mixed methods design, we found a relationship between employee self-efficacy (one’s belief in one’s ability to do certain tasks) and two major challenges the venture was facing: turnover and sales. We recommended the venture augment its current employee trainings to increase self-efficacy, which we hoped would in turn increase retention and improve sales. Based on this finding, we also recommended that certain high-priority social well-being indicators such as self-efficacy be monitored on a quarterly basis along with key business performance indicators. In another engagement, we relied heavily on the organization’s proposed theory of change to offer examples and solutions for how to track and link socio-economic and business impacts. For example, in one enterprise, we found that the ability to retain current micro-distributors and recruit new ones could be influenced by sharing the improvement in standard of living experienced by the micro-distributors’ children.
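
The quantitative core of that turnover analysis can be quite simple. Here is a minimal sketch in Python with invented numbers (not our client’s data), comparing the self-efficacy scores of employees who stayed with those who left:

    # Compare self-efficacy (1-5 scale) for retained vs. departed employees.
    # A persistent gap like this is what prompts monitoring the indicator
    # quarterly alongside business KPIs. (Invented data.)
    import statistics

    records = [(4.2, True), (3.1, False), (4.5, True), (2.8, False),
               (3.9, True), (3.0, False), (4.1, True), (3.4, True)]

    stayed = [score for score, retained in records if retained]
    left = [score for score, retained in records if not retained]

    print("mean self-efficacy, retained:", round(statistics.mean(stayed), 2))  # -> 4.02
    print("mean self-efficacy, departed:", round(statistics.mean(left), 2))    # -> 2.97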

Hot Tip: Social well-being data can provide insights for organizational learning. Businesses can use ‘pause and reflect’ sessions to examine such data across different teams to draw new insights, as well as to discuss challenges and identify lessons learned related to collecting such data, enhancing efficiency and rigor.

Lesson Learned: Embedding social metrics into existing processes often requires a monitoring and evaluation champion (i.e., a senior staff member) at the organization to help facilitate social metric data collection and utilization.

Rad Resources:

  • Webinar with guest Grameen Foundation (http://wdi.umich.edu/knowledge/multi-dimensional-impacts-enhancing-poverty-alleviation-performance-the-importance-of-implementing-multidimensional-metrics) on the value of capturing multi-dimensional poverty outcomes
  • Webinar with guest Solar Aid (http://wdi.umich.edu/knowledge/enhancing-poverty-alleviation-performance-amplifying-the-voice-of-local-stakeholders) on qualitative methods to capture multi-dimensional poverty outcomes
  • Webinar with guest Danone Ecosystem Fund (wdi.umich.edu/knowledge/quantitative-methodology-enhancing-poverty-alleviation-performance-quantifying-changes-experienced-by-local-stakeholders) on quantitative methods to capture multi-dimensional poverty outcomes
  • Written for UNICEF, this guide (http://devinfolive.info/impact_evaluation/img/downloads/Theory_of_Change_ENG.pdf) explains the Theory of Change tool and its use; and Better Evaluation shares this guide (http://www.betterevaluation.org/en/resources/guide/facilitators_sourcebook_theory_of_change) on how to conduct a Theory of Change workshop with a section on how to use the tool to select indicators

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP)  TIG week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello, my name is Erika Cooksey. I am an internal evaluator at Cincinnati Children’s. I’ve been a member of AEA since 2011, and I co-chair the Social Work TIG.

A change in leadership can be both a positive and a stressful experience for the systems it affects. A leader who is new to an organization brings new energy, ideas, and priorities, including philosophies about how data should be used and presented. Internal evaluators with organizational history often have established and vetted methods of data collection and reporting. In a perfect world these methods would align with the needs of the new leader, and they would move forward together in harmony. However, in most work environments a change in leadership requires change in the way evaluators do business. Navigating the waters of change can be complicated, but establishing a good working relationship with new leadership early on makes working through change more manageable.

Hot Tip: Learn what they value

Meet with the new leader to gain an understanding of what’s important to them. Assess where they are on the spectrum of understanding evaluation. Ask specific questions about their background; results and reports that were useful at their last job; how they used data to make decisions in the past; and what they need to know in the next 60-90 days to understand the work ahead. Learning more about their relationship with data will help you assess the current state of your work and whether you should consider other methods of data collection and reporting.


Hot Tip: Approach change with an open mind

It’s important that evaluators critically assess the need for change before acting on it. This process requires time, focus, and input from others. Create an environment that fosters open and honest discussion about your work. Positive feedback and accolades are great, yet it’s often the feedback that wasn’t easy to receive that is the most valuable.

Some behavioral tips for receiving feedback: encourage others to voice concerns and suggestions, support a questioning attitude, and be a gracious recipient. If making a change is the right way to go, consider providing prototypes to the new leader and other stakeholders to get feedback.

Rad Resource: Maintain professional standards

Refer to AEA’s Guiding Principles as you review your methods of evaluation. Maintain high standards and ensure that your data collection practices preserve credibility and integrity. Think critically about how changes in the evaluative model will impact the business. Make decisions regarding needed changes based on your analysis. AEA reminds us to “continually seek to maintain and improve their (our) competencies, in order to provide the highest level of performance in their evaluations.” While change is sometimes difficult, it can lead to a better and more efficient approach.

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP)  TIG week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Sarah Stawiski

My name is Sarah Stawiski, and I am a Senior Researcher and Program Evaluator at the Center for Creative Leadership (CCL). At CCL, we provide leadership solutions to individuals, organizations, and communities through programs, coaching, and other services. We make a promise to our clients – we will help them achieve results. The results that matter to our clients range from becoming a more effective communicator to building stronger leadership networks in communities to transforming an organizational culture. When evaluating these programs, results certainly matter, but to know how best to achieve those results, we have to remember that context also matters.


When we work with individual leaders, what they do after they complete a program or coaching engagement is critical. Will they only remember the friends they made and the fun they had in the program, or will they actually go back and apply what they’ve learned to make meaningful changes in their leadership practices? There are many factors related to individual differences and program design that will determine how much of the learning experience “sticks.” However, the work environment they return to is very important and should not be left out of the equation. There is extensive literature on the importance of context when it comes to learning transfer in general. A review suggests there are multiple dimensions of work context that have been empirically connected to the extent to which learners can apply what they learn and actually make lasting changes in their behavior (e.g., psychological safety, development climate, learning transfer climate). At CCL, one aspect of context that can influence the extent to which a program “works” and leaders actually make positive changes to their behavior is supervisor support for development.


Rad Resource: My colleagues Steve Young, Heather Champion, Michael Raper and Phillip Braddy recently published a white paper called Adding More Fuel to the Fire: How Bosses Can Make or Break Leadership Development Programs (https://www.ccl.org/wp-content/uploads/2017/03/how-bosses-can-make-or-break-leadership-development.pdf) showing that in some of our leadership programs, the more participants felt their development as a leader was supported by their supervisors back at work, the more they were able to apply what they learned and develop as leaders.

 

More recently, we have focused on other aspects of context that we know or suspect to be important to promoting the “stickiness” of learning, such as the extent to which individuals support and challenge one another and the extent to which senior leaders are perceived as making leadership development a priority. Collecting this additional data allows us to have a more holistic conversation about what our clients can do to get the most benefit from their investment in leadership development.

 

Lesson Learned: Collecting data about context opens the doors to better conversations with clients about how to strengthen the effectiveness of leadership programs.

 

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP)  TIG week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Jennifer Dewey, and I chair the Business, Leadership, and Performance TIG, working closely with Kristy Moster, the TIG’s Program Chair. We’re highlighting key issues important to our members this week.

These issues – leadership, business priorities, knowledge management, and workforce engagement – address categories in the Baldrige Excellence Framework (http://www.nist.gov/baldrige/publications/baldrige-excellence-framework), whose purpose is to help all organizations determine: 1) Is my organization doing as well as it could? 2) How do I know? and 3) What and how should my organization improve or change?

The Framework includes the Criteria for Performance Excellence and Core Values and Concepts. The Framework promotes a systems perspective, i.e., managing all the components of an organization as a unified whole to achieve its mission, ongoing success and performance excellence.

Baldrige Excellence Framework

The Criteria for Performance Excellence includes an Organizational Profile that describes an organization’s background and sets the context for the methods used to accomplish work and the resulting outcomes. The leadership process triad (Leadership, Strategy, and Customers) emphasizes a leadership focus on strategy and customers. The results triad (Workforce, Operations, and Results) includes workforce-focused processes, key operational processes, and the performance results they generate. “Integration” at the center of the figure indicates the system elements are interrelated. The system foundation (Measurement, Analysis, and Knowledge Management) is critical to a fact-based, knowledge-driven, agile system to improve performance and competitiveness. All actions lead to Results related to products and processes, customers, workforce, leadership and governance, and financial and market outcomes.

Hot Tip: The Criteria does not prescribe how users should structure their organization or its operations. Through the Organizational Profile, users describe what is important, such as their mission, vision, and values; customer, supplier, and partner relationships; and regulatory requirements, competitive environment, and strategic context.

A set of Core Values and Concepts, starting with a systems perspective supported by visionary leadership, underpins the Criteria. The next seven values are the hows of an effective system. The final two values, ethics and transparency and delivering value and results, are the outcome of using the Baldrige Excellence Framework.


Created by Congress in 1987, the Baldrige Performance Excellence Program is managed by the National Institute of Standards and Technology (NIST), an agency of the Department of Commerce. The Criteria has expanded to include Business/Non-Profit, Education, and Healthcare sectors.


The American Evaluation Association is celebrating Business, Leadership and Performance (BLP)  TIG week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
