AEA365 | A Tip-a-Day by and for Evaluators

TAG | performance measurement

Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the last of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This last post discusses the creation and analysis of comprehensive and integrated databases for ongoing performance measurement (i.e., monitoring), periodic evaluation and reporting purposes.

Step 7 – Collecting and Analysing Data

Program interventions are usually designed to be delivered in standardized ways to target populations. However, standardization often does not take into account variations in circumstances that may affect the results of interventions, such as:

  • Individual differences (e.g., demographic and psychological factors);
  • Contextual variables (e.g., social, economic and geo-political factors/risks);
  • Program and institutional variables (e.g., type and level of services, delivery method, accessibility).

Focusing exclusively on program delivery (i.e., economy and efficiency) through the assessment of the achievement of delivery targets, or of compliance with delivery standards, may be quite appropriate when programs are mature and the causal relationships between outputs and outcomes are well understood and well established. But this is not always the case, and definitely not so when programs are new or in a demonstration phase (e.g., a pilot project) and rely on uncertain or unverified underlying assumptions. In those situations, more robust and adapted analytical techniques should be used to measure the extent to which program interventions actually contribute to observed results while taking external factors into account. This is essential to the reliable assessment of program impacts/outcomes.

It is well known in econometrics that incomplete explanatory models lead to biased estimators (omitted-variable bias), because the variance that should have been explained by the missing variables is automatically redistributed among the retained explanatory variables. Translated for evaluation, this means that excluding external factors from the analysis creates a risk of incorrectly crediting the program with some level of impact that should instead have been attributed to the missing variables (i.e., having the program claim undue responsibility for observed results).

Dealing with this issue would require collecting appropriate microdata and creating complete data sets, holding information on all explanatory variables for each member of target populations, which can then be used to:

  • Conduct robust multivariate analysis to isolate the influence of program variables (i.e., reliably assessing program effectiveness and cost-effectiveness) while taking all other factors into account (see the regression sketch following this list);
  • Explore in a limited way (using the resulting regression model to extrapolate) how adjustments or tailoring of program delivery to specific circumstances could improve program outcomes;
  • Empirically assess delivery standards as predictive indicators of program outcomes (rather than relying exclusively on benchmarking) to determine requisite adjustments to existing program delivery processes.
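As an illustration of the first bullet, here is a minimal sketch in R of a multivariate (multiple regression) analysis that estimates the program's contribution to an outcome while controlling for individual, contextual and delivery variables. The data file, variable names and model form are hypothetical placeholders rather than anything prescribed in these posts; an actual analysis would use the microdata and indicators defined in the earlier steps.

    # Hypothetical microdata: one row per member of the target population.
    # outcome     : measured result for each client (e.g., earnings gain)
    # service_hrs : intensity of program services received (program variable)
    # age, income : individual and contextual control variables (external factors)
    # delivery    : delivery method (program/institutional variable)
    clients <- read.csv("program_microdata.csv")   # placeholder file name

    # Multivariate model: the coefficient on service_hrs estimates the program's
    # contribution to the outcome net of the other explanatory variables.
    model <- lm(outcome ~ service_hrs + age + income + delivery, data = clients)
    summary(model)

    # Limited exploration (second bullet): use the fitted model to extrapolate
    # expected outcomes if service intensity were increased by 10 hours.
    scenario <- transform(clients, service_hrs = service_hrs + 10)
    mean(predict(model, newdata = scenario)) - mean(fitted(model))

Dropping the control variables from the model above would reproduce the omitted-variable bias described in the previous paragraph.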

Developing successful program interventions will require the evaluation function to successfully deal with the above challenges and more effectively support management decision-making processes.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the sixth of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post discusses how to identify comprehensive sets of indicators supporting ongoing performance measurement (i.e. monitoring) and periodic evaluations, from which subsets of indicators can be selected for reporting purposes.

Step 6 – Defining Performance Indicators

When a logic model is developed using the structured approach presented in previous posts, and gets validated by management (and stakeholders), it can be deemed to be an adequate description of the Program Theory of Intervention (PTI). The task of identifying performance indicators then requires determining a comprehensive set of indicators that includes some reliable measure(s) of performance for each output, outcome and external factor covered by the logic model. In some cases, as discussed in the previous post, the set of indicators may extend to cover management issues as well.

Most performance measurement strategies (PMS) and scorecards also require the identification, for each output and outcome, of success criteria such as performance targets and/or standards, which are usually based on some form of benchmarking. This is consistent with a program design mode (i.e., a top-down approach to logic models) based on inductive logic, where each result is assumed to be a necessary and/or sufficient condition (as discussed in the TOC literature) for achieving the next level of results. This is, however, very limiting, as it reduces the discussion of program improvement and/or success to the exclusive examination of performance in program delivery (as proposed in Deliverology).

Additional useful information that may be required for each indicator includes the following (a minimal sketch of such an indicator record follows the list):

  • Data type (quantitative or qualitative);
  • Data source (source of information for data collection);
  • Frequency of data collection (e.g. ongoing, tied with specific events, or at fixed intervals);
  • Data owner (organization responsible for data collection);
  • Methodology (any additional information about measurement techniques, transformative calculations, baselines and variable definitions that must be taken into consideration in selecting analytical techniques);
  • Scales (and thresholds) used for assessing and visually presenting performance;
  • Follow-up or corrective actions that should be undertaken based on performance assessments.
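To make these attributes concrete, the sketch below (in R) records one indicator and its metadata as a simple named list; the indicator, field values and threshold numbers are hypothetical illustrations, not fields prescribed by the post.

    # Hypothetical metadata record for a single performance indicator,
    # mirroring the attributes listed above.
    indicator <- list(
      name        = "Share of participants employed 6 months after exit",
      result      = "Intermediate outcome: sustained employment",   # logic-model link
      data_type   = "quantitative",
      data_source = "administrative follow-up survey",
      frequency   = "fixed interval (quarterly)",
      data_owner  = "program delivery branch",
      methodology = "baseline = pre-program employment rate; excludes early exits",
      scale       = c(red = 0.40, yellow = 0.55, green = 0.70),     # assessment thresholds
      follow_up   = "two quarters below 'yellow' triggers a delivery review"
    )
    str(indicator)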

Many organizations further require that performance measurement be designed with a view to addressing evaluation needs and adequately supporting the periodic evaluation of the relevance, efficiency and effectiveness of program interventions. However, evaluation and performance measurement strategies are most often designed separately, with evaluation strategies usually being settled only just before the actual conduct of evaluation studies. Evaluations are then constrained by the data collected and made available through performance measurement. In order for evaluation and performance measurement strategies to be coordinated and properly integrated, they would need to be developed concomitantly at an early stage of program implementation.



Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the fifth of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post discusses how the relation of performance measurement to results-based management should be articulated and incorporated into logic models.

Step 5 – Including the Management Cycle

Some logic models try to include management as a program activity leading to corporate results (e.g., ‘financial/operational sustainability’ and ‘protection of organization’) that are presented as program outcomes. Indeed, good management can help improve program delivery and thus contribute to program performance. However, that contribution is indirect: it is normally achieved through the ongoing oversight and control of program delivery (and the occasional revision of program design), with the requisite adjustments to operational or strategic plans being informed by ongoing measurement (or monitoring) and periodic assessments of program performance (see Figure 5a).

Results-based management (RBM) then depends on the identification of relevant indicators and the availability of valid and reliable data to correctly inform players/stakeholders and adequately support management reporting and decision-making processes. The quality and use of performance measurement systems for governance is actually one of many elements of the Management Accountability Framework (MAF) in the Canadian Federal Government, with other elements covering expectations regarding stewardship, policy and program development, risk management, citizen-focused service, accountability and people management. However, the MAF is developed and assessed through a process that is entirely separate from the one used for Performance Measurement Frameworks (PMFs), which are based on delivery process models and/or logic models.

Indeed, the management cycle is relatively independent from actual program operations, with management standing in a relation of authority above program staff to provide oversight and control at each step of the delivery process (see Figure 4b in yesterday’s AEA365 post).

Trying to build the management cycle as a chain of results (or as part of one) in a logic model is therefore entirely inappropriate, as it creates unnecessary confusion between management and program performance issues. Presenting the results of good management as program outcomes also blurs the distinction between efficiency (i.e., the internal capacity to deliver) and effectiveness (i.e., program impacts on target populations). Figure 5b below shows how to properly situate the management cycle in the logic model, essentially as an authoritative or facilitative process without direct causal links to specific program results.

This does not mean that management issues should be excluded from PMFs. Relevant indicators of management performance should also be included for monitoring purposes whenever management itself identifies internal factors or risks that do (or may) influence program delivery.

The next AEA365 post will discuss ways of addressing indicators and actual measures of performance.



Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the fourth of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post articulates the approach to developing delivery process models that unpack the individual activity-output sequences of a logic model in order to allow for management oversight and control of program implementation.

Step 4 – Including Delivery Processes

Delivery process modelling can easily be accomplished by adapting the computer-assisted Integrated Definition (IDEF) methods standardized in the 1990s by the National Institute of Standards and Technology (NIST). IDEF0 function modelling was initially based on authority links (controls) ensuring that the activities of multiple players are coordinated and undertaken only when the proper authorities have been issued. In the IDEF0 model, inputs and support mechanisms are used to produce outputs through operations or transformation processes that are subject to management controls and oversight (see Figure 4a). These nodes can be further detailed (by digging into the steps of each function) and/or sequenced to provide an exhaustive view of all program operations.

The delivery process model presented in Figure 4b is an adaptation of the IDEF0 approach, achieved by redefining operations and mechanisms as successive sub-activities supported by various players and stakeholders, with distinct products that are delivered at each step of production and that are subject to direct management authority (i.e., oversight and control). The final step is then the one that actually generates the product (or service) that is identified as the output of a specific activity in the logic model.

In this modelling approach, inputs are not used to define consumed resources (e.g. funds and human resources), but rather to identify the mechanisms and sources of support for program delivery. Further, since delivery is done in a strict stepwise manner, the strong conditionality between sub-activities can actually be redefined as dependencies. The model also makes it possible to take into account internal and external factors/risks that can have an influence on delivery at each step of production.
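As a rough illustration of this adaptation, one delivery step can be encoded as a record holding its sub-activity, players, supports, controls, product, dependency and risks; the field names and example values below are hypothetical, and a complete model would chain one such record per step up to the final output of the activity.

    # Hypothetical representation of one step in a delivery process model
    # adapted from IDEF0: supports (mechanisms), management controls, a
    # distinct product, a dependency on the previous step, and factors/risks.
    delivery_step <- list(
      activity     = "Process benefit applications",        # logic-model activity
      sub_activity = "Verify applicant eligibility",
      players      = c("intake officers", "regional partner agency"),
      supports     = c("case-management system", "eligibility guidelines"),
      controls     = c("delegated approval authority", "quality-assurance sampling"),
      product      = "validated applicant file",             # handed to the next step
      depends_on   = "Register application",                 # previous sub-activity
      risks        = c("incomplete documentation", "seasonal surge in applications")
    )
    str(delivery_step)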

The above model is essentially a working tool to help analysts validate their understanding of delivery processes. In performance measurement frameworks, the description of the process models can actually be limited to the sequence of operations and their related products in the narrative of each activity of the logic model, with a view to support the identification of indicators for monitoring purposes.

Figure 4b already suggests how to position the management cycle in relation with program delivery processes, and how best to articulate management issues from a program perspective. The next AEA365 post will address these features in greater detail.



Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the third of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post articulates how logic models should be structured when program designs include multiple strategic elements (or program activities) supporting a common objective.

Step 3 – Addressing Conditionality

Program interventions rarely rely on a single product or service to achieve intended results. In fact, program strategies are most often designed using multiple interventions from one or more players. In these situations, there normally exists some conditionality between separate program activities as they support and interact with each other. Addressed this way, the notion of conditions (also used in the TOC literature) allows structuring logic models by properly sequencing the multiple program interventions (i.e. converging results chains) deemed to contribute to a common final result that is specific to the program.

To an outside observer being exposed to multiple interventions, program activities may appear to be delivered in a sequential manner (from left to right) based on some observable results (e.g., outputs or immediate outcomes) until some final outcome is achieved (see Figure 3a). This would be the case for a person arriving at a hospital emergency department or an employment centre and being subjected to a series of treatments or services.

However, from a program perspective, all activities are actually implemented in parallel with different clients and/or players. In the examples of the hospital emergency department and the employment centre, it is the clients who move from left to right across activities as they are exposed to various program services. In programs that reach clients only indirectly (e.g., environmental programs or economic policies), it is rather the projects or client files that shift across activities while being processed and/or subjected to various program interventions.

Conditionality then allows taking into account the relationships between the strategic elements (or activities) of program interventions without the need to clutter the logic model with an exhaustive mapping and display of all possible interactions and feedback processes. Implicitly, all program activities are (or may be) influenced to some extent by previous activities situated at the left of the diagram (see Figure 3b). Thus, when conditionality exists and is properly taken into consideration, the positioning of program activities in the logic model becomes important for the description and understanding of the program theory of intervention (PTI).
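A minimal way to capture this sequencing is to record, for each activity, which earlier activities it is conditional on, as in the hypothetical sketch below; the activity names are placeholders, and the resulting ordering simply mirrors the left-to-right positioning described above.

    # Hypothetical program with converging results chains; each entry lists
    # the earlier activities whose results it is conditional on.
    conditionality <- list(
      "Outreach and referral" = character(0),
      "Skills training"       = c("Outreach and referral"),
      "Job placement support" = c("Outreach and referral", "Skills training")
    )

    # Ordering activities by how many earlier activities they depend on gives
    # a simple left-to-right layout for this example.
    names(conditionality)[order(lengths(conditionality))]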

The next AEA365 post will delve further into program implementation and discuss how best to integrate delivery processes into logic models in order to effectively support management oversight and control.



Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the second of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post presents the approach to the development of result chains and their integration within a Theory of Change (TOC) from a program perspective.

Step 2 – Developing the Program Theory of Intervention (PTI)

Program interventions are best modeled using chains of results, with a program delivery (activity – output) sequence followed by an outcome sequence linking outputs to the program’s intended result (final outcome). Most models use only two levels of outcomes, although some authors advocate using as many as five. However, three levels of outcomes would seem to be optimal, as this allows properly linking chains of results to broader TOCs, with the link being made through factors (immediate outcomes) that influence behaviors (intermediate outcomes) in target populations, in order to resolve the specific societal issue (final outcome) that has given rise to the program (see Figure 2a).

In chains of results, outputs are the products delivered by the program (as well as services, through a push-pull approach) that reach target populations, marking the transition between the sequence controlled by the program (i.e. program control zone) and the sequence controlled by recipients (i.e., influence zone of the program).

Logic models developed using this approach help clarify how the program intervention is assumed to achieve its intended results (i.e., the nested program theory of intervention) under the conditions defined in the broader TOC (see Figure 2b).
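To make the structure concrete, the hypothetical sketch below lays out one results chain with three outcome levels and tags each link with the zone it falls in; the content of each level is an invented example, not taken from the figures.

    # Hypothetical results chain with three outcome levels, annotated with the
    # program's control zone (up to outputs) and influence zone (outcomes).
    results_chain <- data.frame(
      level  = c("activity", "output", "immediate outcome",
                 "intermediate outcome", "final outcome"),
      result = c("Deliver job-search workshops",
                 "Participants complete workshops",
                 "Participants gain job-search skills and confidence",   # factors
                 "Participants search for and accept suitable jobs",     # behaviors
                 "Reduced long-term unemployment in the target population"),
      zone   = c("control", "control", "influence", "influence", "influence")
    )
    results_chain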

Developed this way, logic models do resolve a number of issues:

  • The models provide a clear depiction of the chains of results and of the underlying working assumptions or hypotheses (i.e. salient causal links) of the program interventions and of their contribution to a common final result that is specific to the program;
  • The models provide the basis to identify comprehensive sets of indicators supporting ongoing performance measurement (i.e. monitoring) and periodic evaluations, from which a subset can be selected for reporting purposes;
  • Indicators can also cover external factors/risks that have (or may have) an ongoing influence on program results and that should be considered (i.e., included as control variables) in analyses to obtain more reliable assessments of program effectiveness.

However, developing a logic model that is a valid representation of program theories of intervention is easier said than done. The next AEA365 post will offer some suggestions for achieving that goal. Further, since logic models focus heavily on program outcomes, they provide very little information on delivery processes in support of management oversight and control. Subsequent posts will discuss how program delivery can be meaningfully addressed and properly integrated in program theories of intervention.



AEA365 Curator note: Today begins a special theme week with an extended (7 day) series on one topic by one contributing author. 

Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the first of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies and broader initiatives involving multiple organizations.

Step 1 of 7 – Developing the Theory of Change (TOC)

Effectively addressing an issue normally requires first understanding what you are dealing with. Models are generally used in evaluation to help clarify how programs are meant to work and achieve intended results. However, much confusion exists among alternative approaches to modelling, each based on different ways of representing programs and the multiple underlying assumptions on which their interventions are based.

Top-down models, such as the one presented in Figure 1a, usually provide a narrow management perspective relying on inductive logic in order to select the evidence (based on existing knowledge and/or beliefs) that is necessary to support ex ante the strategic and operational planning of program interventions. Assumptions are then entirely about whether the program created necessary and/or sufficient conditions (as discussed in the TOC literature) for achieving intended results. In this context, the role of ex post evaluation is too often limited to focusing on program delivery and vindicating management’s contention that observed results depend to some (usually unknown) extent on existing program interventions.

As a research function, evaluation should also support (re)allocation decisions being made by senior government officials regarding the actual funding of public programs. However, this stronger evaluation role would involve reliably assessing individual program contributions to observed results in a given context, and require properly measuring real/actual program impacts while taking external factors into account.

The first difficulty in achieving this task is recognizing that Randomized Controlled Trials (RCTs) are rarely able to completely eliminate the influence of all external factors, and that the statistical ‘black box’ approach they use prevents reliably transposing (i.e., forecasting by extrapolating) observed results to situations with varying circumstances. Generalization is then limited to a narrow set of conditions formulated as broad assumptions about the context in which the program operates. Providing a more extensive base to reliably measure program effectiveness would entail, in a first step:

  1. developing more exhaustive Theories of Change (TOC) including all factors that created the need for program interventions and/or that likely have an influence on the issue or situation being addressed by the program; and,
  2. determining which factors/risks within the TOC are meant to be explicitly ‘managed’ by the program, with all others becoming external to the program intervention (a minimal tagging sketch follows this list).
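As a rough illustration of the second step, the sketch below tags each TOC factor according to whether the program explicitly manages it; the factors listed are hypothetical placeholders for whatever the TOC exercise actually surfaces.

    # Hypothetical TOC factors tagged by whether the program explicitly
    # manages them; everything not managed becomes an external factor.
    toc_factors <- data.frame(
      factor  = c("Access to training services", "Employer hiring incentives",
                  "Local labour-market conditions", "Participant motivation"),
      managed = c(TRUE, TRUE, FALSE, FALSE)
    )

    # External factors are carried forward as control variables in later steps.
    subset(toc_factors, !managed)$factor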

Figure 1b shows what a program logic model would normally look like at the end of this first step.

The next AEA365 post will articulate the approach to the development of the more detailed Program Theory of Intervention (PTI) that is embedded within the broader TOC.



We are Wanda Casillas and Heather Evanson, and we are part of Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE). Many of our team members and colleagues are privileged to work with a variety of federal agencies on program evaluation and performance measurement and, throughout this week, will share some of their lessons learned and ideas about potential opportunities to help federal agencies expand the value of evaluations.

This week members of our team will share lessons learned about working remotely on federal evaluations, the use of qualitative methods in federal programs that don’t always appreciate the value of mixed methods, the potential for federal programs to be more “selfish” in program planning, the value of conducting evaluation and performance measurement for federal programs, and making the most out of data commonly collected in federal programs. In the coming weeks, readers will find an additional article on scaling up federal evaluations.

Lesson Learned: Many federal clients use performance measurement, monitoring, evaluation, assessment, and other similar terms interchangeably; however, evaluators and clients don’t always have the same definitions, and therefore expectations, in mind for what these terms mean. It’s important to learn as much as possible about your federal client’s experiences and history with evaluation through research and conversations with relevant stakeholders in order to make sure you can deliver on a given agency’s needs.

Lesson Learned: Clients sometimes see evaluation or performance measurement as a requirement rather than an opportunity to understand how to improve upon or expand an existing program. As evaluation consultants, we sometimes have to work with clients to help them understand how evaluation can benefit them even after responding to a request for proposals.

Rad Resource: Alfred Ho provides some intriguing insights on the effects of the Government Performance and Results Act of 1993, which gave rise to much of the performance measurement and evaluation activity we see today, in GPRA after a Decade: Lessons from the Government Performance and Results Act and Related Federal Reforms.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members.


Hi, we’re Eva Bazant, evaluation staff at Jhpiego, and Vandana Tripathi, consultant to global public health programs. Jhpiego is an affiliate of Johns Hopkins University in Baltimore, Maryland, working to improve maternal, newborn and child health globally.

In many sectors, such as education, observations are used for professional evaluation. We are sharing lessons learned from an experience of using structured observation in the evaluation of the quality of health care offered on the day of birth in low-resource settings (our experience was in Madagascar’s hospitals), carried out by USAID’s Maternal and Child Health Integrated Program.

Lessons Learned:

  1. Build trust of the individuals being observed and the professionals in charge to allow the observation to happen. Rely on a respected senior colleague to negotiate entry for observers.  Communicate clearly how the data will be used and kept secure, and to whom findings will be disseminated.
  2. Build in enough time to train and standardize observers’ competencies in observation; this can help identify potential challenges with the observation process and tools. Train observers to be a “fly on the wall” and stay long enough to allow employees to feel at ease and act normally, thereby reducing the Hawthorne effect.
  3. Use the shortest checklist/tool needed to cover important topics, to reduce error and fatigue. Validate the tool with topical experts prior to use, and pretest it in the field.
  4. Create clear response categories to minimize ambiguity and need for interpretation by observers. Clarify for observers the distinctions between “not observed”, “not done” and “not applicable.”
  5. Interview your observers at the end, and communicate frequently during the process, to document how the observer tools were used. Review cases for completeness and discuss missing data.
  6. Use technology when possible (e.g., smart phone data entry) to increase efficiency in data entry. Ensure observers are comfortable using and maintaining the technology.
  7. Triangulate data from multiple sources to affirm and contextualize observation findings. Observation findings can be compared with interview or inventory data.

Lesson Learned Highlight – Improve validity and inter-rater reliability: During observer training, carry out one or more exercises to promote consistency of data. Have a trainer perform a complex service, omitting key steps or making some deliberate mistakes, and have the observers record what they see. Compare the results to the “answer key” provided by the trainer. Look for common errors, and remediate with additional training of observers.
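One way to quantify that comparison is to score each observer’s checklist against the trainer’s answer key and compute simple percent agreement, as in the hypothetical R sketch below (a chance-corrected statistic such as Cohen’s kappa could be substituted); the checklist items and recorded values are invented for illustration.

    # Hypothetical standardization exercise: the trainer's answer key and one
    # observer's record for the same simulated service, item by item.
    answer_key <- c(handwash = "done", cord_care = "not done", weigh_baby = "done",
                    temp_check = "done", breastfeeding_support = "not done")
    observer_1 <- c(handwash = "done", cord_care = "done", weigh_baby = "done",
                    temp_check = "not observed", breastfeeding_support = "not done")

    # Percent agreement with the answer key, and the items to revisit
    # during remediation training.
    mean(observer_1 == answer_key)
    names(which(observer_1 != answer_key))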

Many sectors and disciplines use observation in evaluation. We are interested to hear your experiences and comments regarding challenges and solutions.

Rad Resource – Handouts from Evaluation 2012: Our Evaluation 2012 roundtable handout expands on this topic.

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years.


Hello! I’m Tony Fujs, Director of Evaluation at the Latin American Youth Center, a DC-based non-profit organization. When asked about the role of my department, I often respond that it aims to be “the GPS unit of the organization”, showing decision-makers whether or not the organization is on track toward achieving its goals.

I like the GPS analogy because it is simple and easy to understand. It also provides an interesting framework to think about performance measurement systems, and how to improve them.

Nonprofit performance measurement systems: where we are versus where we want to be.

[Side-by-side images contrasting the two states. Image credits: Biblioteca de la Facultad de Derecho y Ciencias del Trabajo; Tony Fujs]

Let’s look at a few concrete examples to illustrate this point:

[Table image: Fujs table of concrete examples]

Lessons Learned: On data collection.

Recognize that data collection is always a burden: collect only what is needed. Eliminate manual data entry whenever possible; if that is not possible, make the user interface as intuitive as possible.

On data processing. Automate, automate, automate: internal evaluators generally work with different instances of the same data sets, so data cleaning and other analytical tasks can easily be automated using programming tools like Excel macros or R. Modern databases can also be customized to automatically “catch” data entry errors.
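As a small illustration of this point, the hypothetical R sketch below automates a recurring cleaning step and flags likely data-entry errors; the file name, column names and validity rules are placeholders, not an actual workflow from the organization.

    # Hypothetical recurring cleaning script for a quarterly participant extract.
    raw <- read.csv("participants_q1.csv", stringsAsFactors = FALSE)  # placeholder file

    # Standardize a recurring formatting problem and re-type the enrollment date.
    raw$site       <- toupper(trimws(raw$site))
    raw$enrollment <- as.Date(raw$enrollment, format = "%Y-%m-%d")

    # Flag likely data-entry errors against simple validity rules for review.
    errors <- subset(raw, age < 14 | age > 24 | is.na(enrollment))
    write.csv(errors, "entry_errors_to_review.csv", row.names = FALSE)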

Hot Tip: Want to learn more about efficient data processing? I’ll be running a workshop on data management at the next EERS conference.

On providing actionable information

Make sure the information generated by the performance measurement system is useful and understandable for the end user.

Make evaluation results hard to ignore: For instance, they could be displayed on a giant TV screen in the hall of the organization building, so nobody can enter the building without seeing them.

Simple is beautiful

Building a culture of data is often cited as a critical step in generating buy-in for performance measurement systems. It is indeed a critical step, but partly because performance measurement systems are often perceived as complex and cumbersome by end users. Drivers adopted the GPS because it is useful and easy to use, not because they developed a culture of data. Building useful, simple, and intuitive performance measurement systems can also be a powerful and sustainable strategy to generate buy-in.

The American Evaluation Association is celebrating Eastern Evaluation Research Society (EERS) Affiliate Week. The contributions all this week to aea365 come from EERS members.

