AEA365 | A Tip-a-Day by and for Evaluators


I’m Marissa Marzano, and I’m the Communications Specialist at the New Jersey Coalition Against Sexual Assault. I’ve sat through many a PowerPoint jam-packed with utterly fantastic information, only to leave a session and wonder, “What was that about again?”

You’ve crunched the numbers, you’re geeking out over your results… now how do you ensure your data sticks when you go out and present?

Good design is more than just pretty slides – in fact, I like to think of it as the last “to do” for any researcher. Research has shown that stellar graphic design helps an audience store information in their long-term memories (where we tuck away facts until we need to recall them later) by drawing in their attention and making each piece easier to digest. Hence: no more attendees logging off your webinar, having a colleague ask “How was it?” and having nothing to report because they can’t recall anything.

So, how do we marry great data with gorgeous design?

Hot Tip: GO BIG OR GO HOME

Our brains work visually – it’s why we salivate at Big Mac commercials and enjoy scrolling through Instagram more than Twitter. Think of the top lines from your data – the most important concepts that you want your audience to retain – and pair them with a striking visual that will boost recall. See the example to the right: the text refers to “building” relationships, paired with an image of someone building with sand. Try sites like Pixabay or Death to Stock Photo for some free, high-quality images to boost your presentation.

Hot Tip: BE ICONIC

Need a way to break down a key point that won’t fit with an image? You can achieve a similar recall effect by pairing data with strong iconography – see the slide at left. You can access quality free icon packs online at flaticon.com, or, if you’re feeling ambitious and have some time on your hands, create anything you want – you can find open-source, editable vectors through Creative Commons’ search function and jazz them up in Adobe Illustrator.

Lesson Learned: IT’S NOT LIKE COOKING…

…in that you don’t have to start from scratch. Sites like Piktochart or Canva offer a variety of templates for presentations, already sized perfectly to run in PowerPoint. They’re a great jumping-off point where you can easily plug-and-chug your data into a ready-made layout.

Remember to make sure your results put their best foot forward – and leave a lasting impression!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings to my fellow #DataNerds! My name is Jordan Slice. I am a Research Specialist at Richland One, an urban school district in Columbia, South Carolina. In addition to being a full-time evaluator, I create handmade pieces for my Etsy shop, resliced.

As a handmade business owner, I find that many of the sales I make are custom orders. People really appreciate when something is tailored to meet their needs. The same is true for evaluation stakeholders: your results are much more likely to be appreciated (and used!) if they answer the questions your stakeholders actually need answered.

Lesson Learned: Whether I’m making a custom purse (that’s one of my bags to the right) or designing a program evaluation, clear communication is key. For example, if a customer sends me her grandfather’s favorite shirt and requests that I make her a purse using the fabric, it is imperative that we come to a clear agreement about the design of the purse before I start constructing. Similarly, when evaluating a program, it is imperative that you consult with the stakeholders before developing your evaluation if you expect the results to be utilized.

Hot Tip: Keep it simple. While you and I may love geek speak, flooding your stakeholders with evaluation jargon may impair their ability to understand your results. Whether you are talking with stakeholders, constructing a presentation, or writing a report, commit to the mantra that less is more. Once I have my summary in writing, I use a two-step revision process. First, I focus on organizing the content for better flow. Second, I put on my minimalist cap and cut out all the excess fluff (usually repetitive statements or unnecessary detail). Before finalizing any report, always ask a colleague (or a stakeholder, when appropriate) to proof it and provide feedback. I employ the same technique when I am building newsletters (Rad Resource: Mail Chimp – free & user-friendly!) or item listings on Etsy.

Rad Resource: Stephanie Evergreen has some really great posts (like this one!) on her blog with tips for creating better visualizations with your data.

Another Hot Tip: Allow yourself time to focus on something creative (even just a daydream) several times a week. This can give your mind the break it needs to process information and improve your focus. Pursue a new hobby or build on an existing interest. You may be surprised at how this new skill can help you grow as an evaluator.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Cameron Norman on The Evaluator-as-Designer

You might not think so, but I think you’re a designer.

My name is Cameron Norman and I work with health and human service clients doing evaluation and design for innovation. As the Principal of CENSE Research + Design, I bring together concepts like developmental evaluation, complexity science, and design to help clients learn about what they do and to create and re-create their programs and services to innovate for changing conditions.

Nobel Laureate Herbert Simon once wrote: “Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

By that standard, most of us doing work in evaluation are probably designers as well.

Lessons Learned: Design is about taking what is and transforming it into what could be. It is as much a mindset as it is a set of strategies, methods and tools. Designing is about using evidence and blending it with vision, imagination and experimentation.

Here are some key lessons I’ve learned about design and design thinkers that relate to evaluation:

  1. Designers don’t mind trying something and failing as they see it as a key to innovation. Evaluation of those attempts is what builds learning.
  2. When you’re operating in a complex context, you’re inherently dealing with novelty, lots of information, dynamic conditions, and no known precedent, so past practice will only help so much. Designers know that every product intended for this kind of environment will require many iterations to get right; don’t be afraid to tinker.
  3. Wild ideas can be very useful. Sometimes being free to come up with something outlandish in your thinking reveals patterns that can’t be seen when you try too hard to be ‘realistic’ and ‘practical’. Give yourself space to be creative.
  4. Imagination is best when shared. Design is partly about individual creativity and group sharing. Good designers work closely with their clients to stretch their thinking, but also to enlist them as participants throughout the process.
  5. Design (and the learning from it) doesn’t stop at the product (or service). Creating an evaluation is only part of the equation. How the evaluation is used and what comes from that is also part of the story, because that informs the next design and articulates the next set of needs.

I write regularly on this topic on my blog, Censemaking, which has a library section (http://censemaking.com/library/) where you can find more resources on design and design thinking. Design is fun, engaging, and taps into our creative energies for making things and making things better. Try it out and unleash your inner designer in your next evaluation.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Cameron Norman and I am the Principal of CENSE Research + Design. My work brings complexity science and design together with developmental evaluation into something I refer to as developmental design, which is about making decisions in the face of changing conditions.

Lesson Learned: At the heart of developmental evaluation are the concepts of complexity and innovation. Complexity is a word we hear a lot, but we might not fully know what it means or how to think about it in the context of evaluation.

For social programs, complexity exists:

… where there are multiple, overlapping sources of inputs and outputs

… that interact with systems in dynamic ways

… at multiple time scales and organizational levels

… in ways that are highly context-dependent

Rad Resources: Complexity is at the root of developmental evaluation. So for those who are new to the idea or new to developmental evaluation, here are 7 resources that might help you get your head around this complex (pun intended) concept:

  1. Getting to Maybe is a book co-written by our good friend Michael Quinn Patton and offers a great starting place for those working in community and human services;
  2. Patton’s book Developmental Evaluation (ch 5 in particular) is, of course, excellent;
  3. The Plexus Institute is a non-profit organization that supports ongoing learning about complexity applications for a variety of settings;
  4. Tamarack Institute for Community Engagement has an excellent introduction page, including an interview with Getting to Maybe co-author Brenda Zimmerman;
  5. Ray Pawson’s new book The Science of Evaluation is a more advanced, but still accessible look at ways to think about complexity, programs and evaluation;
  6. My blog Censemaking has a library section with sources on systems thinking and complexity that include these and many more; and
  7. The best short introduction to the concept is a video by Dave Snowden on How to Organize A Children’s Party that is a cheeky way to illustrate complexity that I often use in my training and teaching.

Complexity is part theory, part science, and all about a way of seeing and thinking about problems. It doesn’t need to scare you, and these resources can really help get you into the right frame of mind to tackle challenging problems and use evaluation effectively as a means of addressing them. It might be complex, but it’s fun.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m John Nash and I’m an associate professor at the University of Kentucky in the department of educational leadership studies. I’m also co-founder of the OpenEye Group, a consultancy working in Europe and the States to enhance program impact in the social sector. Currently I’m blogging in several places in different capacities. I have a personal blog where I reflect on the topic of design thinking, I built a new blog and social network site for an AEA regional affiliate, and I contribute posts to a collaborative blog entitled Education Recoded over at BigThink.

Rad Resource – Evaluation Network for the Missouri River Basin: This is an example of how a volunteer organization can use blogging. Nine months ago we migrated from Google Sites to WordPress, where our board members post about every two weeks according to an editorial calendar, each focusing on an area of passion regarding evaluation. We also created a social networking component for the site via BuddyPress. Our hope is to serve our far-flung membership by creating a space to build community online.

Rad Resource – Reform By Design: This blog is my personal site for reflections on the topic of design thinking. Design thinking is a process by which one tackles problems or challenges using a human-centered perspective, and it is a powerful way to re-frame issues so that program planning can be more effective and, ultimately, evaluations can be more useful.

Hot Tips – favorite posts:

  • The absence of design in organizational design: What gives?
    This post is a good example of how blog posts are good for drafting rough ideas. I was reading Managing By Design when a passage in a chapter by Karl Weick hit me like a thunderbolt – a flow of words befell me. Before I knew it I had a 1,500-word reaction on my hands.

Lessons Learned – why I blog: I blog because I find it’s a safe space to explore ideas and obtain feedback.

Lessons Learned: One thing I’ve learned, and continue to be surprised by, is how many people out there are also interested in the topics I like. I’m heartened and encouraged by every tweet, every “like,” and each comment from readers.

This winter, we’re running a series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Diane Dunet and I am a senior evaluator on the Evaluation and Program Effectiveness Team at the Centers for Disease Control and Prevention, Division for Heart Disease and Stroke Prevention. Our team members use a written purpose statement for our program evaluations.

In strategic planning, a mission statement serves as a touchstone that guides the choice of activities undertaken to achieve the goals of an organization. In evaluation, a purpose statement can serve as a similar touchstone to guide evaluation planning, design, implementation, and reporting.

Early in the evaluation process, evaluators on our team at CDC work with our evaluation sponsors (those requesting that an evaluation be conducted – for example, a program manager) in order to understand and clarify the evaluation’s purpose. In many cases, the purpose of an evaluation is to improve a program. Other types of evaluation purposes include accountability, measuring effectiveness, assessing the replicability of a program to other sites, determining what program components are essential, and making decisions about a program’s fate. We develop a written evaluation purpose statement and then refer to it during the entire evaluation process. An example purpose statement is:

The purpose of this evaluation is to provide an accountability report to the funder about the budgetary expenditures for client services delivered at 22 program sites. (Accountability.)

In the initial stages of evaluation, the purpose guides us in determining which program stakeholders should be involved in order to accomplish it. We refer to the purpose statement to guide our evaluation design, seeking data collection methods and instruments appropriate to the evaluation purpose. We also use the purpose statement to guide us in tailoring our reports of evaluation results to align with the sponsor’s needs and the evaluation’s purpose.

Of course, evaluation findings can sometimes also be “re-purposed” to provide information in a way not originally intended, for example when program managers find ways to improve a program based on results of an evaluation for accountability.

Resource:  The CDC Framework for Program Evaluation in Public Health provides a six-step approach to conducting program evaluation and is available at http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm

Resource:  The CDC Division for Heart Disease and Stroke Prevention sponsors a public health version of “Evaluation Coffee Breaks” modeled after the AEA Coffee Breaks. Information and archived sessions are available at http://www.cdc.gov/dhdsp/programs/nhdsp_program/evaluation_guides/index.htm

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Jonny Morell and I’m looking for people interested in how agent-based modeling can be combined with traditional evaluation methods.

For the past few years, I have been thinking and writing a lot about how evaluation can anticipate and respond to unexpected changes in programs. The difficulty, as I see it, is that many powerful evaluation designs have inherent rigidities that make it difficult to adapt them to new circumstances. For instance, there are designs that require well-validated, psychometrically tested scales. There are designs that require maintaining boundaries among comparison groups. There are designs that require data collection (whether qualitative or quantitative) during narrow windows of opportunity in a program’s life cycle. There are designs that require carefully developed and nurtured relationships with a particular group of stakeholders. Many other examples are easy to find.

So, how can we keep these kinds of designs in our arsenal when there is a high probability that programs will change in such a way as to require a different evaluation design? Most of what I have been writing on this topic embeds specific data collection and research design methodologies in a theory that draws from elements of organizational behavior and complex adaptive systems. Any given method I advocate, however, is well known and familiar.

Hot Tip: Lately I have been teaming with a computer scientist to test an approach that is less familiar in evaluation. He and I have been working on processes that will tightly integrate continual iterations of traditional evaluation with agent-based modeling. Our hypothesis is that such integration will provide evaluators with leading indicators of program change. We have two contentions: first, that the longer the lead time, the greater the opportunity to adjust evaluation designs to changing circumstances; second, that agent-based modeling can provide information that will not come from other simulation methods.
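For readers who have never seen an agent-based model, here is a minimal toy sketch in Python. To be clear, this is not the model Jonny and his collaborator are building; every rule, name, and number below is invented for illustration. It simulates program participants whose engagement drifts under peer influence, then treats the projected share of disengaged participants across many simulated runs as the kind of crude leading indicator described above.

```python
import random

# Toy agent-based model: program participants whose engagement drifts under
# peer influence and program quality. Repeated forward runs give a crude
# "leading indicator" (projected disengagement). All rules and numbers here
# are invented for illustration only.

class Participant:
    def __init__(self, engagement):
        self.engagement = engagement  # 0.0 (disengaged) to 1.0 (fully engaged)

    def step(self, peers, program_quality, rng):
        peer_avg = sum(p.engagement for p in peers) / len(peers)
        # Drift toward a blend of peer norms and program quality, plus noise.
        target = 0.6 * peer_avg + 0.4 * program_quality
        self.engagement += 0.2 * (target - self.engagement) + rng.gauss(0, 0.03)
        self.engagement = min(1.0, max(0.0, self.engagement))

def run_once(initial_engagement, program_quality, steps, seed):
    rng = random.Random(seed)
    agents = [Participant(e) for e in initial_engagement]
    for _ in range(steps):
        for agent in agents:
            agent.step(agents, program_quality, rng)
    # Leading indicator: share of participants projected to disengage.
    return sum(a.engagement < 0.3 for a in agents) / len(agents)

# Seed the model with engagement scores from the latest evaluation cycle
# (hypothetical numbers), then project 12 steps ahead across 200 runs.
observed = [0.8, 0.7, 0.6, 0.75, 0.5, 0.65, 0.55, 0.7, 0.4, 0.6]
projections = [run_once(observed, program_quality=0.5, steps=12, seed=s)
               for s in range(200)]
print(f"Projected disengagement rate: {sum(projections) / len(projections):.2f}")
```

In practice the model’s rules and parameters would be re-estimated each time a new round of traditional evaluation data comes in, which is the continual iteration Jonny describes.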

Hot Opportunity: We are now hunting for evaluators with access to ongoing or incipient evaluations who may wish to work with us. Don’t be shy. Send me an email at jamorell@jamorell.com.

Rad Resource #1: For in depth coverage of the ideas in this post, check out Morell J.A. (2010) Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. Guilford Press.

Rad Resource #2: For more on these ideas, check out Morell J.A., Hilscher, R., Magura, S., and Ford, J. (2010) Integrating Evaluation and Agent-Based Modeling: Rationale and an Example for Adopting Evidence-Based Practices. Journal of Multidisciplinary Evaluation Vol 6, No 14. (http://survey.ate.wmich.edu/jmde/index.php/jmde_1/issue/view/30/showToc)

The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.


My name is Sandra Eames, and I am a faculty member at Austin Community College and an independent evaluation consultant.

For the last several years, I have been the lead evaluator on two projects from completely different disciplines. One of the programs is an urban career and technical education program and the other is an underage drinking prevention initiative. Both programs are grant funded, yet they require very different evaluation strategies because of the reportable measures that each funding source requires. Despite the obvious differences between the two programs, such as deliverables and target populations, they still have similar evaluation properties and needs. The evaluation design for both initiatives was based on a utilization-focused (UF) approach, which has universal applicability because it promotes the theory that program evaluation should make an impact that empowers stakeholders to make data-grounded choices (Patton, 1997).

Hot Tip: UF evaluators want their work to be useful for program improvement and to increase the chances that stakeholders will act on their data-driven recommendations. Following the UF approach reduces the chance of your work ending up on a shelf or in a drawer somewhere. Including stakeholders in the early decision-making steps is crucial to this approach.

Hot Tip: Begin a partnership with your client early on to lay the groundwork for a participatory relationship; it is this type of relationship that helps ensure stakeholders actually use the evaluation. What good has all your hard work done if your recommendations are not used for future decision-making? This style helps to get buy-in, which is needed in the evaluation’s early stages. Learn as much as you can about the subject and the intervention the client is proposing, and be flexible. Joining early can often prevent wasted time and effort, especially if the client wants feedback on the intervention before implementation begins.

Hot Tip: Quiz the client early about what they do and do not want evaluated, and help them determine priorities, especially if they are on a tight budget or short on time to implement strategies. Part of your job as evaluator is to educate the client on the steps needed to plan a useful evaluation. Informing the client upfront that you report all findings, both good and bad, might prevent some confusion come final-report time. I have had a number of clients who thought the final report should include only the positive findings, and that the negative findings should go to the place where negative findings live.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Sandra, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.


Hi, I’m Lihshing Leigh Wang, and I’m an Associate Professor of Psychometrics and Quantitative Methodology at University of Cincinnati. Today I want to share with you a tip about database design.

Evaluation research that involves large-scale, multi-level, multi-year, and multi-cohort data presents special challenges to evaluators. Most training programs and publication venues focus on the research design, data collection, and data analysis phases, but largely leave the database design phase out of the research cycle. This knowledge gap presents special obstacles in today’s climate, which encourages interdisciplinary collaboration and systems integration to inform scientific discovery and policy decision making. Database design and management is being recognized as a priority research area by many funding agencies, such as the National Science and Technology Council, the National Institutes of Health, and the Institute of Education Sciences.

Hot Tip: A powerful web-based platform that supports complex database design and multi-site collaboration is SAS Enterprise Guide (http://www.sas.com/technologies/bi/query_reporting/guide/). Its graphical user interface provides visualization tools that users can easily navigate from multiple remote sites. Its centralized data warehouse collects data from distributed locations and controls data security through a hierarchical command structure. Its integrated analytics system provides seamless information flow in a shared framework that maximizes data transportability and minimizes data processing errors.

In a recent state-wide endeavor to examine teacher preparation accountability, we explored the causal relationships among three clusters of variables: one exogenous cluster (teacher education), one direct endogenous cluster (teacher quality), and one indirect endogenous cluster (student learning). The two endogenous clusters were repeated over seven years and collected from six cohorts at more than seventy sites. We used SAS EG as the shared platform for collaboration. The biggest challenge we encountered was political rather than technical – issues such as ownership of data collected through local sites but centrally deposited at the data warehouse. Another challenge we faced was linking multiple relational databases with unique identifiers, which again was a design issue rather than a technical one. Without a web-based platform such as SAS EG, conducting evaluation research on such a complex scale would be unimaginable.
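To make the linking issue concrete, here is a minimal sketch of the kind of relational design involved. It is a toy example only: the table names, columns, and keys are invented and are not the actual state database, and it uses Python’s built-in sqlite3 module rather than SAS so that it runs anywhere. The point is simply that each cluster of variables lives in its own table and everything joins back through a shared unique identifier.

```python
import sqlite3

# Toy relational design for multi-year, multi-cohort evaluation data:
# one table per cluster of variables, all linked by a unique teacher
# identifier. Table and column names are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE teacher_education (          -- exogenous cluster
    teacher_id    TEXT PRIMARY KEY,
    program       TEXT,
    cohort        INTEGER
);
CREATE TABLE teacher_quality (            -- endogenous cluster, repeated yearly
    teacher_id    TEXT REFERENCES teacher_education(teacher_id),
    school_year   INTEGER,
    site_id       TEXT,
    quality_score REAL,
    PRIMARY KEY (teacher_id, school_year)
);
CREATE TABLE student_learning (           -- endogenous cluster, repeated yearly
    teacher_id    TEXT REFERENCES teacher_education(teacher_id),
    school_year   INTEGER,
    site_id       TEXT,
    mean_growth   REAL,
    PRIMARY KEY (teacher_id, school_year)
);
""")

# Joining the clusters on the shared identifier is what makes analysis across
# cohorts, years, and sites possible without ad hoc file matching.
rows = conn.execute("""
    SELECT e.cohort, q.school_year, AVG(q.quality_score), AVG(l.mean_growth)
    FROM teacher_education e
    JOIN teacher_quality  q ON q.teacher_id  = e.teacher_id
    JOIN student_learning l ON l.teacher_id  = q.teacher_id
                           AND l.school_year = q.school_year
    GROUP BY e.cohort, q.school_year
""").fetchall()
print(rows)  # empty until data are loaded, but the schema and join run as-is
```

Settling these keys and relationships upfront is exactly the database design work that, as noted above, tends to fall outside most training programs.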

