AEA365 | A Tip-a-Day by and for Evaluators


I’m Katrina Brewsaugh, Senior Associate with the Annie E. Casey Foundation. As part of Social Work TIG week on aea365, I’d like to tell you why social workers are an asset to an evaluation team. Evaluation may not be the first thing that comes to mind when you think of social work, but our training equips us to contribute to evaluation teams in several ways.

Lesson Learned:  Social workers are systems thinkers.

Ecological systems theory is at the core of social work training, from the bachelor’s degree through the doctorate.  A social worker on your team can identify issues and concerns that impact the micro- (family), mezzo- (community), and macro- (policy) levels, regardless of whether the evaluation is localized to one small program or has a national policy reach.

Lesson Learned:  Social workers are interdisciplinary.

Social work synthesizes knowledge of related fields such as psychology, sociology, education, public health, and even economics.  On interdisciplinary evaluation teams, a social worker can be an excellent translator and present a unified voice.

Lesson Learned:  Social workers are hands on.

All social workers must complete training in the field working with clients.  If you need someone skilled in engaging and interviewing consumers and stakeholders, then you need a social worker!

Rad Resources:  Here are some places to begin your search for a social work evaluator:

  • Search AEA’s Find an Evaluator list using ‘social work’ or ‘MSW’ as a search term.
  • Contact your local university’s social work program.
  • Contact a member of the Social Work TIG (through the “members only” link on www.eval.org).
  • Look for social workers on EvalTalk or AEA’s LinkedIn page.

So the next time you need to bring together an evaluation team, consider a social worker!

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings, I am Dawn Henderson, a faculty member in the Department of Psychological Sciences at Winston-Salem State University and a Research Associate at the Center for Community Safety. Program evaluation is a major component of the work that I do and, recently, I have been struggling with the concept of culturally responsive evaluation (CRE). Much of the literature on CRE has focused on the unique cultural traditions and rituals of communities, which are often defined by their ethnic/racial identity. Although this is a major tenet of CRE, I have not had the privilege of reading about how CRE works when the organization itself has a culture of its own. So what happens when you are trying to develop CRE around a target population but are impeded by the culture of the organization?

Lesson Learned: As a community psychologist, I have been trained to view everything as a complex system, that is, a system embedded within other systems. I have used this approach to get organizations to first think about a culture of evaluation, with the anticipation that they will eventually move toward CRE.

Hot Tips: Here are three things to think about:

  • Bound the system. Draw the ecological model (Bronfenbrenner, 1977). You should first understand the culture of the organization before you engage in CRE with target populations. If your target population is at the center, then who are the stakeholders in the organization, and what are their current practices and rituals? What external factors govern and drive their programs?


  • Get to know the evaluation culture. If your intention is to move the organization toward CRE, then it is important to understand their perspective on evaluation. Think about targeting the primary members, such as directors, coordinators, and volunteer staff. Ask members questions like: How do you measure your work? How do you know what you are doing makes a difference? What tools or evidence do you use to say “we are working”?
  • The importance of adaptation. Evaluation requires us to take information and findings and “feed” them back to a system, organization, agency, etc. Evaluators have to be willing to adapt to their environment just as much as anyone else. Be willing, at times, to adapt your agenda to empower the culture of the organization to think about evaluation and its use.

At the culmination of this process, you can reengage the organization in moving toward CRE.



Hello, I am Samuel Otoo, manager of the Capacity Development and Results unit at the World Bank Institute (WBI).

Since 2009, our team has been developing a body of knowledge around the results of capacity development initiatives. This work is built around the Capacity Development Results Framework (CDRF), and focuses on capacity development as a process of empowerment for local agents in order to advance societal, policy and organizational change initiatives.  The overarching objective is to improve the effectiveness of the World Bank’s development assistance, which increasingly focuses on capacity development and other knowledge services.

Lessons Learned:

  1. Consistent Framing:  The CDRF and associated tools provide a systematic approach for designing robust yet flexible capacity development strategies and programs, monitoring and adaptively managing interventions, and for evaluating their results.  WBI uses the CDRF to test program logic, and to measure, analyze and report results.  A particular feature of the framework is the emphasis on monitoring and managing intermediate level outcomes.
  2. Triangulation: Given the complexity inherent in most institutional change processes, WBI emphasizes triangulation of evidence and results information from a variety of sources and instruments. Our results database brings together self-ratings by task team leaders, narrative data about outcomes achieved, and results of client feedback surveys. The database enables us to triangulate data from these sources with outcome mapping, which requires multiple perspectives to corroborate reports. The key infrastructure is shown in the diagram below:

[Diagram: WBI results database infrastructure]

Rad Resources:

See this overview and paper on the Capacity Development Results Framework, which includes intermediate capacity outcomes and institutional capacities.    See also this guide on evaluating results of capacity development, and an analysis of nine case studies on how strategic capacity development can strengthen local ownership of development goals.

Interested in learning more? Attend my panel discussion, “An Integrated Approach to Results Management in a Knowledge Organization – The Experience of the World Bank Institute,” with colleagues Dawn Roberts, Jenny Gold, Joy Behrens and Violaine Le Rouzic at the upcoming AEA conference.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Samuel? He’ll be presenting as part of the Evaluation 2013 Conference Program, October 14-19 in Washington D.C.

Hello! We are Tarek Azzam (Claremont Graduate University) and Matt Keene (Environmental Protection Agency). We are members of the External Review Panel for ECLIPS.

Whoa! Even though it’s oh-so-tempting to try, you don’t need the perfect string of words to define “systems-oriented evaluation.”

Lesson Learned: It’s in the roots of evaluation 

At its core, a systems approach to evaluation encourages the evaluator to consider the physical, political, and structural issues that surround a program, and to examine how these factors help or hinder the success of a program. This examination and reflection process is incorporated in the work and writings of leading evaluation scholars such as Lee Cronbach, Robert Stake, and Jennifer Greene. The presence of systems thinking also can be seen in our standards (specifically standards F3, A4, and A7).

And it’s also something different, because it requires the evaluator to recognize that the program is part of systems that have their own dynamics. It forces the evaluation to examine issues that go beyond the process and outcomes of a single program.

Lesson Learned: How to become a systems-oriented evaluator

1) Adopt some habits of systems thinkers

[Image: habits of systems thinkers]

2) Know the domains of Social Ecology and use them to understand the leverage points of boundaries, relationships, and perspectives. Donella Meadows says that boundaries are problem-dependent and messy. Don’t make the world linear for your mathematical or administrative convenience.

[Image: domains of Social Ecology]

3) Delve into the dynamics of systems to find the regions of organized, adaptive, and unorganized patterns.

[Image: regions of system dynamics]

4) Find leverage points (places to intervene where small tweaks can lead to big changes). Here are some leverage points from Meadows you can use to influence relevant systems:

[Image: Meadows’ leverage points]

5) Let systems thinking do fuzzy things to your logic model. A fuzzy logic model takes into account the dynamic nature of the systems surrounding a program. It gives a visual image of the complexities that can affect processes and outcomes.

In the ECLIPS, all of our logic models used to look like this:

[Image: a traditional linear logic model]

But after applying systems thinking, we made them into fuzzy logic models. See how different they look.

[Image: examples of fuzzy logic models]

Try creating a fuzzy logic model to find and depict the system’s complexity, making your logic model more useful to people for a longer time.

That’s all for now.

This concludes ECLIPS week on aea365! Don’t expect to learn about systems alone or in a short period of time. It may well be a journey into a new way of thinking about evaluation. Get involved in an existing community of practice about systems or form your own group. ECLIPS members are happy to share what they are doing.

The American Evaluation Association is celebrating this week with our colleagues involved in ECLIPS—Evaluation Communities of Learning, Inquiry, and Practice about Systems—and the AEA Systems in Evaluation Topical Interest Group.

Hello! We are Karen Peterman (Karen Peterman Consulting, Co.) and Marah Moore (Director of i2i institute and ECLIPS co-PI). We are going to talk about some of the implications of a systems approach for the role of the evaluator.

In a recent Thought Leaders Forum, Michael Patton reminded us that evaluation is a transdisciplinary field. Evaluators need expertise in evaluation theory, practice, methods and use, as well as knowledge of theories of change and how to generalize what they have learned about patterns in effective interventions. A systems approach to evaluation can enhance evaluators’ work across each of these areas of expertise.

I (Karen) found that ECLIPS’s biggest impact on me was in how I view my overall role as an evaluator. And I (Marah) found that the evaluator’s role was revisited throughout the process of addressing each new systems concept. Here are some ideas we came away with:

Lesson Learned: Use your “systems-based evaluation expertise” to add value for clients.

Systems concepts can provide a valuable lens through which to view project evaluation findings. For example, you can help clients consider their individual projects within the context of academic literature, their institution’s larger mission, and/or their funders’ goals.

To guide these discussions, check out ZIPPER, A System-Based Evaluation Mnemonic and the Systems Archetypes.

Lesson Learned: Use a systems approach to push evaluation beyond the traditional stages.

The ECLIPS graphic below shows a traditional and a systems view of the stages of evaluation.

[Image: traditional vs. systems view of the stages of evaluation]

The systems view encourages the evaluator to work at the intersection of the traditional stages. The evaluator asks clients and participants to help shape:

  • the evolution of the evaluation plan,
  • the data collection procedures, and
  • the interpretation of results.

Bringing together a systems orientation and participatory evaluation leads to an evolution in evaluation practice.

Hot Tip: Break down the silo approach to evaluation.

At AEA last fall, a number of presenters stated that the purpose of their work was to improve education. The systems perspective provides methods for thinking about and achieving that goal on a broader scale. It enhances evaluators’ thinking and work by providing a framework and tools to move beyond the immediate project, gently challenging clients to think about system-level change and how their projects really can make a big-picture difference.

Read the final ECLIPS blog tomorrow where Tarek and Matt give an overview of systems-oriented evaluation and look at some fuzzy logic models.


Hello, we are David Reider (Education Design, Inc.), Ginger Fitzhugh (Evaluation and Research Associates), and Alyssa Na’im (Education Development Center, Inc.). As ECLIPS members, we are incorporating systems concepts into STEM education evaluations related to the National Science Foundation program, Innovative Technology Experiences for Students and Teachers (ITEST).

Keying off the iceberg diagram (see Monday’s post), we go deeper into a system to find leverage points for change by considering:

  • boundaries (demarcations that define regions or entities)
  • relationships (connections and exchanges between project parts or people)
  • perspectives (the paradigms held by various parties and the purposes they seek)

Hot Tip: Don’t disregard the simple in a complex setting.

I (David) am evaluating a project using science probes and models in K-12 classrooms in four states. Although there were vastly different support structures in the sites, one of the lessons I learned in my evaluation was quite simple. The more frequently teachers posted to the online learning platform (thus reaching beyond the boundaries of their classroom), the higher the quality of their classroom projects. This was a case of leveraging small actions toward larger gains.

Hot Tip: Ask about boundaries, relationships and perspectives.

Questions That Matter has terrific examples of evaluation and interview questions that relate to boundaries, relationships and perspectives. I (Ginger) added several of these questions to our interview protocols for project leaders. For example, we added, “What, if any, unanticipated outcomes (positive or negative) have happened in the project thus far?” We learned that parents were interested in obtaining the project equipment to use with their children. This spurred the project team to consider how to make the materials more widely available.

Hot Tip: Consider how program goals can be leveraged.

I (Alyssa) am now paying more attention to acknowledging and identifying the boundaries, perspectives, and relationships for both program implementation and evaluation. Programs express their perspective through their statement of purpose, e.g., improving the nation’s STEM workforce development capacity. Systems thinking helps us understand why their strategies to accomplish this purpose may overlap or diverge and to see possible leverage points for change.

Hot Tip: Don’t lead with systems language; find ways to include it.

It’s not always necessary to say, “This is a systems idea.” Rather, use familiar language to explain how what you are doing adds value to the evaluation and the project.

Join us again tomorrow as we move away from the Iceberg diagram to consider the role of evaluators.


Hello! We are Lloyd Bingman (Brighter Day Consulting, LLC and evaluator of an NSF STEM project) and Pat Jessup (InSites associate and co-PI for ECLIPS). We’re here to talk about the “Patterns” portion of the iceberg diagram displayed on Monday of ECLIPS week.

Both of us evaluate programs that are part of complex social systems that have multiple and dynamic patterns. An application of complexity theory provides us with a way to understand and distinguish among three patterns of system dynamics:

  • Organized patterns: With high agreement and high certainty in the system, the patterns appear fairly predictable.
  • Self-organizing patterns: With a middle range of agreement and certainty, the patterns are adaptive.
  • Unorganized patterns: With low agreement among key players and low certainty, patterns may not be present.
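The three patterns above can be pictured as regions on a two-dimensional agreement/certainty grid. The sketch below is a toy illustration only: the numeric scores and the 0.33/0.66 cut points are our own assumptions, not part of the ECLIPS materials or the complexity literature.

```python
def classify_pattern(agreement: float, certainty: float) -> str:
    """Toy classifier for system dynamics patterns.

    `agreement` and `certainty` are hypothetical scores in [0, 1]
    summarizing how much key players agree and how predictable
    outcomes are; the cut points are illustrative only.
    """
    if agreement >= 0.66 and certainty >= 0.66:
        return "organized"        # fairly predictable patterns
    if agreement < 0.33 and certainty < 0.33:
        return "unorganized"      # patterns may not be present
    return "self-organizing"     # adaptive, mid-range patterns
```

An evaluator would of course judge agreement and certainty qualitatively; the point of the sketch is just that the three patterns partition a two-dimensional space rather than a single scale.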

[Image: the three patterns of system dynamics]

Creating a Robot Diagram to Understand a Coordinator’s Role in a Complex System

When we talked about systems dynamics in the ECLIPS, I (Lloyd) discovered a systems lens for evaluating a project I was working on. I envisioned a key project leader superimposed on the diagram of systems dynamics. The leader is attempting to connect all of the project parts with their different dominant patterns of movement.

[Image: the robot diagram of the IT Apprenticeship Coordinator’s role]

In this illustration, the “robot” figure is the IT Apprenticeship Coordinator. The Coordinator brings all the pieces together to meet program goals. The numbers on the robot correspond to different program goals.

At the bottom left (#1), the goal of presenting the apprenticeship program is controlled and organized; the Coordinator has direct control of presentations. The other goals on the left side (#3, 5) tend to be fairly predictable but key players do not agree as much on these goals as on the #1 goal.

On the right side of the robot, the activities related to the three goals are more self-organizing or unorganized. The college recruitment process goal (#2), bottom right side, is less certain than #1 because the college competes with other colleges at recruitment events. Moving up the right side, the approaches to reaching the goals (#4, 6) are increasingly unorganized.

At the head of the diagram (#7), the Coordinator is constantly planning, assessing, and implementing activities to ensure project success.

Insights Gained from the Robot Diagram

This image provided new insight into the relationships among the parts and where changes could lead to a more networked flow of information. For example, by strengthening the relationship with the state liaison (#4), I (Lloyd) was able to gain data on the state workers’ experiences with the apprenticeship program.

Join us tomorrow when Dave, Ginger, and Alyssa talk about identifying leverage points for changing a system.


Hello! I’m David Hata, an independent consultant who serves as an external evaluator for a number of NSF-funded Advanced Technological Education (ATE) projects and centers.

I’m here to share my experience helping a STEM project see itself as part of a system that creates value. For some clients, success is doing a lot of activities. The clients may not be thinking about what value those activities create.

My early evaluations focused on evaluating individual activities using methodologies such as those described in Kirkpatrick’s book Evaluating Training Programs: The Four Levels. With my recent participation in ECLIPS and the Synergy Project, I have started to think of evaluation more holistically in terms of systems and value creation. I now view NSF ATE projects and centers as systems that create value at multiple places within their structure. By recognizing these value-creation systems, I am discovering more ways that evaluation can help fulfill the mission of the ATE program at NSF—to increase the number of technicians in the workforce and strengthen their skills through workforce development initiatives.

Lesson Learned: Finding Value in the Interconnections  

A useful conceptual framework for assessing value creation has been developed by Etienne Wenger and colleagues.

They outline five types of value: immediate, potential, applied, realized, and reframing. I used the first four types of value to help develop a road map for The Southwest Center for Microsystems Education (SCME), an ATE regional center at the University of New Mexico in Albuquerque. Working with my client to create the road map helped them develop a systems view of their project. The road map shows the connections between grant activities and a career pathway that produces advanced technicians for the U.S. workforce.

The diagram below shows how SCME activities fit into a career pathway from high school to community college to workforce.

[Image: SCME activities within the high school, community college, and workforce career pathway]

Center activities can be viewed as value investments:

  • immediate value: knowledge, skills, and materials gained by each participant;
  • potential value: what each teacher plans to do with their new knowledge, skills, and materials;
  • applied value: what changed in classroom instruction and student learning;
  • realized value: number of microsystem technicians produced based on graduate data.

The evaluation measures the value created by these investments at different points in the system. The links in the diagram emphasize the nature of the exchanges. Understanding the links between the boxes is as important as defining the activities and outcomes in the boxes.

Check out the following resource and join us tomorrow as my fellow ECLIPS members examine system patterns via a Robot diagram.

Rad Resource: Donella Meadows’ book, Thinking in Systems.


Greetings! We are Beverly Parsons (InSites and ECLIPS principal investigator) and Veronica Thomas (Howard University and ECLIPS external advisor). Today we introduce you to an exploratory research project, Evaluation Communities of Learning, Inquiry, and Practice about Systems (ECLIPS). It’s funded through a National Science Foundation grant to InSites.

The ECLIPS Community of Practice is 15 evaluators involved in STEM (science, technology, engineering, and math) education evaluations. During 18 months of webinars and annual meetings, ECLIPS members discussed and applied system concepts—especially systems dynamics and complex adaptive systems concepts—to their work. This week we share examples of our learning through this exploratory project.

Lessons Learned: A Systems-Oriented Evaluation?

We’re using a definition of a system from Meadows, Thinking in Systems: “A system is an interconnected set of elements that is coherently organized in a way that achieves something.” What the system achieves may or may not be what we want.

ECLIPS members identified the systems of relevance to their work. They looked at patterns within the systems, paying particular attention to culture and social justice – two areas that I (Veronica) kept in the forefront of the ECLIPS work.

[Image: the iceberg diagram]

In this type of systems-oriented evaluation, we pay attention not only to activities and results but also to patterns; norms, infrastructure, and policies; and paradigms. You’ll hear more this week about how ECLIPS members have used the iceberg diagram to go deeper in their thinking about systems.

Hot Tip:

Use a systems lens in your evaluation practice to:

  • ask different kinds of evaluative questions, including questions that address social justice concerns (e.g., questions about access and opportunity)
  • look for patterns that give clues about appropriate theories of change
  • find leverage points (where small changes can create large effects)
  • consider different roles as an evaluator (e.g., to include social change agent)

This Week with ECLIPS

Tuesday: Dave describes a systems view of a project in the form of a “road map” that shows where interconnections create value.

Wednesday: Lloyd and Pat provide an example of seeing and understanding patterns in systems.

Thursday: David, Ginger, and Alyssa discuss working with boundaries, relationships, and perspectives as leverage points for change.

Friday: Karen and Marah address how a systems orientation influences an evaluator’s role.

Saturday: Tarek and Matt give a wrap-up of systems concepts and connections to fuzzy logic models.

At another time, we’ll share our learning about how to use the systems concepts in more powerful ways than we could do in this exploratory project.


Hello! I’m Eric Sarriot, Director of the Center for Design and Research in Sustainability (CEDARS) at ICF International. I would like to introduce you to the CEDARS blog on sustainable human and social development and its planning and evaluation, written from a global/international development perspective.

Rad Resource – Sustainable Human and Social Development (http://cedarscenter.blogspot.com/): It’s an ongoing discussion about anything and everything related to taking sustainable health development seriously, and what that changes in the way we ‘do business’ or try to learn. After decades of talking (some say to myself, but mostly to friends and colleagues in the health development community), I’d like to expand the conversation to include anyone interested in complex systems evaluation, or simply in ways to learn to do things differently and better in global development. New content is posted once or twice a month. More info about CEDARS is here.

Hot Tips – favorite posts: The blog has been operating for over a year now, and has attracted commentary on topics like climate change and adaptation, food security, transition to country ownership and others. A couple of my favorites are:

  • Projects Don’t “Do” Sustainability, Do They? – Trying to advance sustainability, we run into a common criticism: “I like these ideas; they make sense, and they do or would make sense to our local partners. But really, that’s not how we work. We have 30 to 60 days to write a proposal, then staff up, kick off a project, and get deliverables. When exactly would you introduce those ideas?” This post addresses this question.
  • Emergence of Sustainability in a Complex System – Not just because you can see me in a YouTube video! This post and video are about a conversation between health and food security evaluation professionals. Panelists discuss how sustainability can be defined and addressed practically in complex adaptive systems.

Lessons Learned – why I blog: I blog because evaluating sustainability takes the conversation beyond any single programmatic sector, and blogging invites more people in. The purpose of the blog is to inform and provoke discussion. It is an outlet for people who are concerned about the big social development issues of our time and who want to help all of us challenge our practice, whether working in the ‘North’ or the ‘South’.

Lessons Learned – what I’ve learned: You really have to reach beyond your tried and trusted professional network and community in order to learn new things (remember the ‘strength of weak ties’?). That’s also why I’d like to get involved in the AEA Systems in Evaluation TIG. Come and visit CEDARS!

This winter, we’re continuing our series highlighting evaluators who blog.
