AEA365 | A Tip-a-Day by and for Evaluators

TAG | logic models

I’m Jennifer Grove, Prevention Outreach Coordinator at the National Sexual Violence Resource Center (NSVRC), a technical assistance provider for anti-sexual violence programs throughout the country.  I’ve worked in this movement for nearly 17 years, but when it comes to evaluation work, I’m a newbie.  Evaluation has been an area of interest for programs for several years now, as many non-profit organizations are tasked with showing funders that sexual violence prevention work is valuable.  But how do you provide resources and training on a subject that you don’t quite understand yourself?  Here are a few of the lessons I’ve learned on my journey so far.

Lesson Learned: An organizational commitment to evaluation is vital. I’ve seen programs that say they are committed to evaluation simply hire an evaluator to do all of the work. This approach is shortsighted. When an organization invests all of its time and energy into one person doing all of the work, what happens when that person leaves? We like to think of evaluation as long-term and integrated into every aspect of an organization. Here at the NSVRC, we developed a Core Evaluation Team made up of staff who care about or are responsible for evaluation. We contracted with an evaluator to provide training, guide us through hands-on evaluation projects, and advise the Team over the course of a few years. We are now two years into the process, and while there have been some staffing changes that have resulted in changes to the Team structure, efforts have continued without interruption.

Lesson Learned: Evaluation capacity-building takes time. We received training on the various aspects of evaluation and engaged in an internal evaluation project (complete with logic model, interview protocol, coding, and final report). According to the timeline we developed at the beginning of the process, this should have taken about eight months. In reality, it took over 12 months. The lesson learned here is this: most organizations do not have the luxury of stopping operations so that staff can spend all of their time training and building their skills for evaluation. The capacity-building work happens in conjunction with all of the other work the organization is tasked with completing. Flexibility is key.

Hot Tip: Share what you’ve learned.  The most important part of this experience is being able to share what we are learning with others.  As we move through our evaluation trainings, we are capturing our lessons learned and collecting evaluation resources so that we can share them with others in the course of our technical assistance and resource provision.

Rad Resource: Check out an online learning course developed by the NSVRC, Evaluating Sexual Violence Prevention Programs: Steps and strategies for preventionists.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hi! This is Laura Downey with Mississippi State University Extension Service. In my job as an evaluation specialist, I commonly receive requests to help colleagues develop a program logic model. I am always thankful when I receive such a request early in the program development process. So, I was delighted a few weeks ago when academic and community colleagues asked me to facilitate the development of a logic model for a grant proposing to use a community-based participatory research (CBPR) approach to evaluate a statewide health policy. For those of you who are not familiar with CBPR, it is a collaborative research approach designed to ensure participation by communities throughout the research process.

As I began to assemble resources to inform this group’s CBPR logic model, I discovered a Conceptual Logic Model for CBPR available on the University of New Mexico’s School of Medicine, Center for Participatory Research, website.


[Image: Conceptual Logic Model for CBPR, clipped from http://fcm.unm.edu/cpr/cbpr_model.html]

Rad Resource:

What looked like a simple conceptual logic model at first glance was actually a web-based tool complete with metrics and measures (instruments) to assess CBPR processes and outcomes. Over 50 instruments related to the most common concepts in CBPR (such as organizational capacity, group relational dynamics, empowerment, and community capacity) are profiled and available through this tool. Each profile includes the instrument name, a link to the original source, the number of items in the instrument, the concept(s) originally assessed, reliability, validity, and the population the instrument was developed with.

With great ease, I was able to download surveys to measure those CBPR concepts in the logic model that were relevant to the group I was assisting. Given the policy focus of that specific project, I explored the measures related to policy impact.

Hot Tip:

Even if you do not typically take a CBPR approach to program development, implementation, and/or evaluation, the CBPR Conceptual Logic Model website might have a resource relevant to your current or future evaluation work.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hello! My name is Rhonda Schlangen and I’m an evaluation consultant specializing in advocacy and development.

By sharing struggles and strategies, evaluators and human rights organizations can help break down the conceptual, capacity and cultural barriers to using monitoring and evaluation (M&E) to support human rights work. In this spirit, three human rights organizations candidly profiled their efforts in a set of case studies recently published by the Center for Evaluation Innovation.

Lessons learned:

  • Logic models may be from Mars: Evaluation can be perceived as working at cross-purposes to human rights efforts. The moral imperative of human rights work means that “results” may be unattainable. Planning for a specific result at a point in time risks driving work toward what is achievable and countable. Learning-focused evaluation can be a useful entry point, emphasizing evaluative processes like critical reflections and one-day ‘good enough’ evaluations.
  • Rewrite perceptions of evaluation orthodoxy: There’s a sense in the human rights groups reviewed for this project that credible evaluation follows narrow and rigid conventions and must produce irrefutable proof of impact. Evaluators can help recalibrate perceptions by focusing on a broader suite of approaches appropriate for complex change scenarios (such as outcome mapping or harvesting).
  • Methods are secondary: The confidence and capacity of staff and managers in using tools and methods is at least as important as, if not more critical than, the tools and methods themselves. Investing in training and support is important. Prioritizing self-directed, low-resource internal learning as an integrated part of program work also helps cultivate a culture of evaluation. (See this presentation for an overview of organizational learning, and stay tuned for an upcoming paper from the Center for Evaluation Innovation on the topic.)

Rad Resources: Evidence of change journals: Excel workbooks populated with outcome categories, these journals are shared platforms where human rights and other campaigners can log signs of progress and change (see the sketch below). The tool facilitates real-time tracking and analysis of developments related to a human rights issue and advocacy efforts.

Intense period debriefs: Fitting into the slipstream of advocacy and campaigns, these are a systematic and simple way to review what worked, and what didn’t, after particularly intense or critical advocacy moments. The tool responds to the inclination of advocates to keep moving forward but creates space for collective reflection.

People-centered change models: A Dimensions of Change model, such as this one developed by the International Secretariat of Amnesty International, can serve as a shared lens for work that spans different types of human rights and different levels, from global to community.
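To make the evidence-of-change journal idea a bit more concrete, here is a minimal sketch of how such a log might be tallied in code. The column names and entries are hypothetical illustrations, not taken from the case studies, and campaigners would more likely work directly in a shared Excel workbook; the point is simply the structure: dated entries tagged with outcome categories that can be counted over time.

```python
import pandas as pd

# Hypothetical "evidence of change" journal: each row is a logged sign of
# progress, tagged with an outcome category from the campaign's change model.
journal = pd.DataFrame([
    {"date": "2013-03-04", "outcome_category": "Policy dialogue",
     "entry": "Ministry official requested a briefing on the campaign's report."},
    {"date": "2013-03-18", "outcome_category": "Media coverage",
     "entry": "National daily cited the report in an editorial."},
    {"date": "2013-04-02", "outcome_category": "Policy dialogue",
     "entry": "Parliamentary committee scheduled a hearing on the issue."},
])
journal["date"] = pd.to_datetime(journal["date"])

# Real-time tracking: count logged signs of change per category per month.
summary = (
    journal
    .groupby([journal["date"].dt.to_period("M"), "outcome_category"])
    .size()
    .rename("signs_of_change")
)
print(summary)
```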

Get involved: Evaluators can contribute to the discussion with the human rights defenders through online forums like the one facilitated by New Tactics in Human Rights.

[Image: Center for Evaluation Innovation, clipped from http://www.evaluationinnovation.org/]

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

We are Alexandra Hill and Diane Hirshberg, and we are part of the Center for Alaska Education Policy Research at the University of Alaska Anchorage.  The evaluation part of our work ranges from tiny projects – just a few hours spent helping someone design their own internal evaluation – to rigorous and formal evaluations of large projects.

In Alaska, we often face the challenge of conducting evaluations with very small numbers of participants in small, remote communities. Even in Anchorage, our largest city, there are only 300,000 residents. We also work with very diverse populations, both in our urban and rural communities. Much of our evaluation work is on federal grants, which need to both meet federal requirements for rigor and power, and be culturally responsive across many settings.

Lesson Learned: Using mixed-methods approaches allows us to both 1) create a more culturally responsive evaluation; and 2) provide useful evaluation information despite small “sample” sizes. Quantitative analyses often have less statistical power in our small samples than in larger studies, but we don’t simply want to accept lower levels of statistical significance, or report ‘no effect’ when low statistical power is unavoidable.

Rather, we start with a logic model to ensure we’ve fully explored pathways through which the intervention being evaluated might work, and those through which it might not work as well.  This allows us to structure our qualitative data collection to explore and examine the evidence for both sets of pathways.  Then we can triangulate with quantitative results to provide our clients with a better sense of how their interventions are working.

At the same time, the qualitative side of our evaluation lets us build in measures that are responsive to local cultures, include and respect local expertise, and (when we’re lucky) build bridges between western academic analyses and indigenous knowledge. Most important, it allows us to employ different and more appropriate ways of gathering and sharing information across indigenous and other diverse communities.
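To make the small-sample power concern above concrete, here is a minimal sketch of the kind of back-of-the-envelope check we have in mind. The effect size, group sizes, and alpha are illustrative assumptions, not figures from any of our evaluations.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Illustrative scenario: a moderate true effect (Cohen's d = 0.5) compared
# across two groups of 15 participants each, alpha = 0.05, two-sided test.
power_small = analysis.power(effect_size=0.5, nobs1=15, alpha=0.05)

# The same effect with 100 participants per group, for comparison.
power_large = analysis.power(effect_size=0.5, nobs1=100, alpha=0.05)

print(f"Power with n=15 per group:  {power_small:.2f}")   # roughly 0.26
print(f"Power with n=100 per group: {power_large:.2f}")   # roughly 0.94

# With n=15 per group, a real moderate effect would reach significance only
# about a quarter of the time, so a null quantitative result says little on
# its own; hence the triangulation with qualitative evidence along the
# logic-model pathways.
```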

Rad Resource: For those of you at universities or other large institutions that can purchase access to it, we recommend SAGE Research Methods. This online resource provides access to full-text versions of most SAGE research publications, including handbooks of research, encyclopedias, dictionaries, journals, and ALL the Little Green Books and Little Blue Books.

Rad Resource: Another Sage-sponsored resource is Methodspace, an online network for researchers. Sign-up is free, and Methodspace posts selected journal articles, book chapters and other resources, as well as hosting online discussions and blogs about different research methods.

Rad Resource: For developing logic models, we recommend the W.K. Kellogg Foundation Logic Model Development Guide.

[Image: Methodspace, clipped from http://www.methodspace.com/]

The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

This is Kim Snyder, Associate at ICF International; Rene Lavinghouze, Evaluation Team Lead for CDC’s Office on Smoking and Health; and Patricia Rieker, Adjunct Professor of Sociology at Boston University and Associate Professor of Psychiatry at Harvard Medical School. We have been investigating public health program infrastructure as an ignored component of the left-hand side of logic models.

Evaluators often are asked to focus on outcomes, or the right-hand side of the logic model. How often is life better for people because of a successful public health program (e.g., fewer heart attacks, less exposure to secondhand smoke)? While we value the importance of this type of evaluation, we were concerned that the inputs, or foundation, of our activities are not fully understood. If we don’t start out with the foundation that enables organizational capacity, how are we supposed to really know what affects the outcomes on the right side of logic models?

Lesson Learned:

  • The left-hand side of the logic model is something that is rarely defined or explained in public health programs. Take a look at the Office on Smoking and Health’s logic model for eliminating nonsmokers’ exposure to secondhand smoke. Under Inputs, what is meant by “State health department and partners”? If it is interpreted and replicated differently, can we expect the same outcomes?


So we decided it was important to define and study what functioning public health program infrastructure (or the foundation of public health outcomes) looks like. Previous work (a literature review across public health programs; see Rad Resource) and data from 19 tobacco control programs were used to further our understanding of functioning program infrastructure.

Building on previous work (currently in press with the Journal of Public Health Management & Practice), we define infrastructure as a key component and the foundation or platform that supports the capacity, implementation, and sustainability of program initiatives: a definable entity, a cyclical process, and part of a larger system that requires constant vigilance to be effectively maintained. Using a grounded theory approach, we developed the Component Model of Infrastructure, or CMI for short.

Rad Resource: Infrastructure: More Than Platforms For Moving Vehicles, available in the American Evaluation Association (AEA) Public eLibrary.

Sneak Peek:

We are still refining the CMI and hope to share a final version this year. We define five core components of public health program infrastructure:

  • Networked Partnerships,
  • Multi-Level Leadership,
  • Engaged Data,
  • Managed Resources, and
  • Responsive Plans/Planning.

We see the CMI as a practical model of public health program infrastructure that could provide the framework that grant planners, evaluators, and program implementers need to measure success, to link infrastructure to capacity, and to increase the likelihood that health achievements will be sustainable.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Greetings aea365 community! I’m Ann Emery and I’ve been both an external evaluator and an internal evaluator. Today I’d like to share a few of the reasons why I absolutely love internal evaluation.

Lessons Learned: Internal evaluation is a great career option for fans of utilization-focused evaluation. It gives me opportunities to:

  • Meet regularly with Chief Operating Officers and Executive Directors, so evaluation results get put into action after weekly staff meetings instead of after annual reports.
  • Participate on strategic planning committees, where I can make sure that evaluation results get used for long-term planning.

Lessons Learned: Internal evaluators often have an intimate understanding of organizational history, which allows us to:

  • Build an organizational culture of learning where staff is committed to making data-driven decisions.
  • Create a casual, non-threatening atmosphere by simply walking down the hallway to chat face-to-face with our “clients.” I hold my best client meetings in the hallways and in the mailroom.
  • Use our organizational knowledge to plan feasible evaluations that take into account inevitable staff turnover.
  • Tailor dissemination formats to user preferences, like dashboards for one manager and oral presentations for another.
  • Participate in annual retreats and weekly meetings. Data’s always on the agenda.

Lessons Learned: Internal evaluators can build evaluation capacity within their organizations in various ways:

  • I’ve co-taught Excel certification courses to non-evaluators. Spreadsheet skills can help non-evaluators feel more comfortable with evaluation because they take some of the mystery out of data analysis.
  • I’ve also led brown bags about everything from logic models to research design. As a result, I’ve been more of a data “coach,” guiding staff through evaluation rather than making decisions on their behalf.

Hot Tips: Internal evaluators can use their skills to help their organizations in other ways, including:

  • Volunteering at program events. When I served food to child and teen participants at Thanksgiving, my time spent chatting with them helped me design more responsive data collection instruments.
  • Contributing to organization-wide research projects, such as looking for patterns in data across the participants that programs serve each year.
  • Partnering with graduate interns and external evaluators to conduct more in-depth research on key aspects of the organization.

Cool Trick: Eun Kyeng Baek and SeriaShia Chatters wrote about the Risks in Internal Evaluation. When internal evaluators get wrapped up in internal politics, we can partner with external evaluators like consulting firms, independent consultants, and even graduate interns. Outsider perspectives are valuable and keep things transparent.

AEA is celebrating Internal Evaluators TIG Week. The contributions all week come from IE members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · · · ·

I’m Corrie Whitmore, an internal evaluator working for Southcentral Foundation. SCF is an Alaska Native Owned and Operated healthcare organization serving approximately 60,000 Alaska Native and American Indian people living in Anchorage, Matanuska-Susitna Valley, and 60 rural villages in the Anchorage Service Unit. Our organization has had program evaluation in-house since 2009, so our small department focuses on helping people in operations understand why evaluation matters and how it fits into what they do every day.

Hot Tip: Build relationships! Sometimes the most efficient way to get things done is not the best way to move the project forward – making time to listen, ask questions, and puzzle out what an evaluation will offer people “in the trenches” is very important.

Hot Tip: Get out of the office!  Going to the programs we work with and watching operations unfold builds trust with our customers, teaches us about their processes and data collection, and shows them we care about what they do.

Hot Tip: Ask concrete questions! It can be difficult for people to puzzle out logic models or identify program objectives if they don’t have a background in that area, but most practitioners can confidently answer questions like:

  1. What does success look like?
  2. How do you know if things are going well?
  3. How do you know if something needs to change?
  4. If you had a magic wand, what one thing would you change?
  5. What helps you make decisions today?

Hot Tip: Get something on paper – then tear it up! We use Anne Lamott’s idea of first drafts  to encourage writing things down early in the process. It’s much easier for our clients to identify what sounds appropriate and what feels “off“ once they have a document in hand to edit. Going through multiple drafts offers customers a chance to grapple with the language used, cultural appropriateness, and feasibility of the evaluation plan at all stages of the project, increasing their ownership of the final product.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

We are Veronica Smith and Chris Metzner. Veronica is principal of data2insight, an evaluation and research firm with data visualization expertise and Chris Metzner is a freelance graphic designer.

Matt Keene and Chris created the popular fuzzy logic model for the Oregon Paint Stewardship Pilot Program. The fuzzy logic model (FLM) concept evolved out of the desire to make the traditional logic model dynamic, non-linear, and stakeholder-friendly. This logic model embraces fluid and approximate reasoning and provides an evaluation reporting framework.

[Image: Oregon Paint Stewardship Pilot Program fuzzy logic model]

We provide tips, tricks and resources for illustrating and disseminating your FLM.

Illustration

Rad Resources: Creating an FLM begins with a sketch of the theory of action and theory of change (get out your pencil and paper!). Once finalized, it’s time to digitize. For DIY’ers, we recommend OmniGraffle 5 Professional ($199.99), Adobe Illustrator CS5 ($599.00) or DoView ($79.95).

OmniGraffle offers quick page-layout design and is great for Mac users. DoView turns outcome models into user-friendly web pages. Adobe Illustrator is the graphic design industry standard; it allows total control over the project but is a complex tool.

Hot Tip: Hiring a graphic designer can cost anywhere from $60-100/hour. You save time when the designer translates your sketch into an illustration tailored to stakeholders’ needs using sound visual design principles. The illustration for the Oregon paint program (above) would cost somewhere between $2,000 and $3,500.

Dissemination

Rad Resources: With a digital illustration you are ready to get online. First, purchase a domain name ($12.99/yr) and hosting plan ($6.99/yr). We recommend DreamHost or Hostmonster.

If your FLM doesn’t require advanced user interaction, purchase a pre-built website from the hosting company. WordPress, free software installed by your hosting company, is another option. WordPress offers easy blog creation and site-enhancing plugins, but requires basic web development knowledge. Apply a theme to give your website a professional look. Adobe Dreamweaver CS5 ($399.00) is for someone with advanced website coding knowledge.

Hot Tip: You can also hire a designer to bring the FLM to life. A web designer will charge $100-150/hour. Meet with the designer and communicate your goals in order to get a cost estimate for website creation. A designer can create a site for as little as $1000. The value add of a good designer is the expertise to secure the website, monitor traffic and optimize content for search engines. Website design costs can be offset by time and money saved disseminating results electronically.

Beyond the basics

Cool Trick: Employ social media and video technologies to increase stakeholder engagement and evaluation use.

Run with it

The FLM opens the door to enhanced stakeholder engagement, reporting, results dissemination, and evidence-based decision making. Let us know how you use FLMs to meet your clients’ needs.

We’re celebrating Data Visualization and Reporting Week with our colleagues in the DVR AEA Topical Interest Group. The contributions all this week to aea365 come from our DVR members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DVR resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

·

Hi, my name is Corey Smith and I am a brand new student in the interdisciplinary Ph.D. program in evaluation at Western Michigan University. Recently we were fortunate to have Dr. Rodney Hopson of Duquesne University in Pittsburgh, PA visit us. Dr. Hopson gave a presentation titled “Evaluation and the Public Good.” Below I discuss some lessons I learned.

The presentation was a discussion of for whom we, as evaluators, do evaluation, how we practice evaluation, who benefits from it, and to what end.

Lessons Learned: One of the main points Dr. Hopson discussed in his presentation was the need for culturally adaptive tools in evaluation. Take, for example, the logic model, a tool gaining in popularity in the field of evaluation. Whether you like them or not, logic models are being requested by evaluation clients and used increasingly in theory-driven evaluations. For program stakeholders who do not relate to or connect with the visual form that conventional logic models often take, such models have limited or no use. Dr. Hopson emphasized that the model developed should fit the way stakeholders view the program. As a relatively new student in evaluation, I have yet to make up my own mind about their usefulness or relevance to my own practice of evaluation; however, I think that the way they are presented to stakeholders and the different forms they can take is an interesting area of study. Alternative logic models and culturally adaptive evaluation tools can help us engage with stakeholders and better understand the intricacies of the programs we are evaluating and the people that they affect.

Rad Resource: Matt Keene & Chris Metzner’s AEA coffee break webinar on Fuzzy Logic Models – AEA Members can log in and  view the recording for free here.

This site was the focus of the webinar as well as a presentation done at Evaluation 2011 by Matt Keene and Chris Metzner on fuzzy logic models. It shows how they were able to transform a traditional logic model into something interactive and visually appealing while still visually representing the program theory: http://paintstewardshipprogram.com/

Rad Resource – DoView Software: DoView was also presented in an AEA Coffee Break Webinar by Paul Duignan (available free to members here). The software provides an easy way to develop models in real time, and in settings where stakeholders can be active participants. The result will be a model that shows the program through their eyes, the ultimate goal for an adaptive logic model.

All this week, we’re highlighting posts from colleagues at Western Michigan University as they reflect on a recent visit from incoming AEA President Rodney Hopson. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hello, my name is Scott Chaplowe, and I am a Senior Monitoring and Evaluation (M&E) Officer with the International Federation of Red Cross and Red Crescent Societies (IFRC). The IFRC has a lot of stakeholders – communities, 186 National Societies, local governments, partners, donors, etc. One part of my job is to lead M&E trainings that empower our local stakeholders to better understand and participate in the M&E process. There are two big challenges I encounter in building people’s understanding and practice of M&E:

  1. M&E is not the most exciting (“sexy”) subject that people gravitate towards. Resistance can be heightened for the very reason stakeholders need M&E training: they do not understand or value M&E, and may feel threatened by it, fearing it will burden them.
  2. M&E systems can be a straitjacket, imposing outside, “technocentric” methods that alienate rather than foster local participation in project design, monitoring, and evaluation.

Hot Tip: I like to address both of these challenges through fun, participatory methods to demystify M&E, so people better understand, participate in, and own the M&E process. For example, one way I introduce the key concepts of a logframe is with an activity I call the Logical Bridge. Training participants construct a bridge using straws, tape, scissors, and string. The bridge is then used as a simple metaphor to discuss project design for a real bridge – inputs, activities, outputs (the bridge), outcomes (e.g., increased trade between the two towns), and ultimate goal (e.g., improved livelihoods). Everyone can relate to a bridge, and I have found this activity to be a fun, useful springboard into the logical hierarchy of results (whatever terminology is used for each level of the logframe). It also has the added benefit of team building.
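For readers who like to see the results chain written out, here is a minimal sketch of the Logical Bridge expressed as a logframe hierarchy. The wording of each level is my own paraphrase of the bridge metaphor, not an official IFRC logframe template.

```python
from collections import OrderedDict

# The Logical Bridge as a standard hierarchy of results. Labels vary by
# agency (goal/impact, outcome/purpose, etc.); the logic is the same.
logical_bridge = OrderedDict([
    ("inputs",     "straws, tape, scissors, string (and the team's time)"),
    ("activities", "designing and building the bridge"),
    ("outputs",    "a completed bridge spanning the gap"),
    ("outcomes",   "increased trade and travel between the two towns"),
    ("goal",       "improved livelihoods in both communities"),
])

for level, example in logical_bridge.items():
    print(f"{level:>10}: {example}")
```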

Hot Tip & Rad Resource: Consider using illustrations or cartoons to convey key M&E messages – and not just in publications, but also in presentations. Show a cartoon during a training and ask participants what it means to them, whether they can relate (or not), and what we might be able to learn from it. Check out the cartoons in our new IFRC Project and Program M&E Guide!

Rad Resource: Come check out my “Fun and Games with Logframes” professional development workshop at the upcoming annual AEA conference in Anaheim to experience more fun, innovative ways to reinforce the understanding and use of logframes. Wednesday, November 2, 12:00 PM to 3:00 PM. Registration is required – more information online here.

Rad Resource: The guide, “100 Ways to Energise Groups: Games to Use in Workshops, Meetings and the Community,” may not be specifically on M&E, but is useful for lubricating the thought process for how fun and games can be infused into M&E training, and other activities.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·
