AEA365 | A Tip-a-Day by and for Evaluators

CAT | Social Network Analysis

My name is Jessica Wakelee, with the University of Alabama at Birmingham.  As evaluators for our institution’s NIH Clinical Translational Science Award (CTSA) and CDC Prevention Research Center (PRC), one of our team’s tasks is to find ways to understand and demonstrate capacity for collaborative research.  This is a great need among the investigators on our campus applying for grant funding or preparing progress reports.  One of the tools we have found helpful for this purpose is Social Network Analysis (SNA).

To conduct an SNA for a particular network of investigators, we typically collect collaboration data using a web-based survey tool such as Qualtrics, unless the PI already has existing data, such as a bibliography, that can be mined.  We ask the PI to provide us with a list of network members, and we send each one a survey asking them to check off collaborations they’ve had in the past 5 years with the other listed investigators. The most common collaborations include co-authored manuscripts, abstracts/presentations, co-funding on grants, co-mentorship of trainees, and other/informal scientific collaborations, but we also tailor questions to meet the interests of the investigator/project.  The result is a graphical depiction of the network as well as a variety of statistics we can use to provide context and tell a compelling story.
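For illustration, here is one way checkbox responses from such a survey might be turned into an undirected edge list ready for any SNA package. The respondent names and the either-party-reported rule are illustrative assumptions, not a description of the authors' exact procedure:

```python
# Hypothetical survey export: each respondent checked off the listed
# investigators they have collaborated with in the past 5 years.
responses = {
    "Alice": ["Bob", "Carol"],
    "Bob":   ["Alice"],
    "Carol": [],  # an empty return or non-respondent leaves gaps
}

# Symmetrize: count a tie if either party reported it, so a
# non-respondent's collaborations can still appear via their partners.
edges = sorted({tuple(sorted((src, dst)))
                for src, partners in responses.items()
                for dst in partners})

print(edges)   # [('Alice', 'Bob'), ('Alice', 'Carol')]
```

The either-party rule is the same reciprocity assumption discussed under Lessons Learned; with a 100% response rate you can instead require both parties to confirm a tie.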

Hot Tip:

What are some of the ways we’ve found work best for describing translational research collaborations using SNA?

  • Reach of a center or hub to partners or clients
  • Existing collaborations among investigators, which can be compared at baseline and later time points
  • Increasing strength or quality of collaborations over time (i.e. pre-award to present)
  • Current/projected use of proposed scientific/technology Core facilities
  • Demonstrate multidisciplinary collaborations by including attributes such as area of specialty
  • Demonstrate mentorship and sustainability by including level of experience/rank

Lessons Learned:

  • To the extent possible, make the data collection instrument simple: Use check boxes and a single open text field for comments to provide context. This works well and minimizes the need for data cleaning/formatting.
  • While the software can assume reciprocity in identified relationships among investigators, having a 100% response rate allows for the most complete and accurate data. We have found it helps to have the PI of the grant send out a notice to collaborators to be expecting our survey invitation to boost the response rates.
  • Because we often prepare these analyses for grant proposals, it is important to allow time for data collection and to avoid the “crunch time” when investigators are less likely to respond. The amount of time needed depends on the size of the network, but we find that about 4–6 weeks of lead time works well.

Rad Resource:

  • UCINET/NetDraw is the gold standard software for SNA, but there are free alternatives (e.g. “NodeXL”, an add-on to Excel) and free trials are available.

 

The American Evaluation Association is celebrating Translational Research Evaluation (TRE) TIG week. All posts this week are contributed by members of the TRE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Linda Stern, Director of Monitoring, Evaluation & Learning (MEL) at the National Democratic Institute.  One challenge in evaluating democracy assistance programs is developing comparative metrics for change in political networks over time. The dynamism within political networks and the fluidity of political environments make traditional cause-and-effect frameworks and indicators superficial for capturing context-sensitive changes in network relationships and structures.

To address these limitations, my team and I began exploring social network analysis (SNA) with political coalitions that NDI supports overseas. Below we share two examples from Burkina Faso and South Korea. 

Lesson Learned #1: Map and measure the “Rich Get Richer” potential within political networks

When supporting the development of political networks, donors often invest in the strongest civil society organization (CSO) to act as a “network hub” to quickly achieve project objectives (e.g., organize public awareness campaigns; lobby decision-makers). While this can be effective for short-term results, it may inadvertently undermine the long-term sustainability of a nascent political network.

In evaluating the development of a women’s rights coalition in Burkina Faso, we compared the “Betweenness Centrality” scores of members over time. Betweenness Centrality indicates the potential power and influence of a member by virtue of their connections and position within the network structure.  Comparative measures from 2012 to 2014 confirmed a “Power Law Distribution,” in which the number of elite members with the highest “Betweenness Centrality” scores (read: power and influence) within a network tends to shrink, while the number with modest or little power and influence tends to grow, or be “distributed,” across the network.  This is known as the “Rich Get Richer” phenomenon within networks.
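Betweenness centrality is reported by any SNA package, but the logic is worth seeing. Below is a pure-Python sketch for a small, undirected, unweighted network; the node names are invented, not the Burkina Faso data:

```python
from collections import deque
from itertools import combinations

def bfs_counts(adj, s):
    """Distance from s and number of shortest paths to every reachable node."""
    dist, sigma = {s: 0}, {s: 1}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:                 # first visit: one hop farther
                dist[w], sigma[w] = dist[u] + 1, 0
                queue.append(w)
            if dist[w] == dist[u] + 1:        # another shortest path into w
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """Unnormalized betweenness for an undirected, unweighted network."""
    info = {v: bfs_counts(adj, v) for v in adj}
    score = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        dist_s, sig_s = info[s]
        if t not in dist_s:
            continue                          # s and t are disconnected
        dist_t, sig_t = info[t]
        for v in adj:
            if v in (s, t) or v not in dist_s or v not in dist_t:
                continue
            # v lies on a shortest s-t path iff the distances add up
            if dist_s[v] + dist_t[v] == dist_s[t]:
                score[v] += sig_s[v] * sig_t[v] / sig_s[t]
    return score

# A tiny coalition: "Hub" brokers between three members who are not
# directly tied to one another, so it sits on all member-to-member paths.
network = {
    "Hub": ["A", "B", "C"],
    "A": ["Hub"], "B": ["Hub"], "C": ["Hub"],
}
print(betweenness(network))   # "Hub" scores highest; the leaves score 0
```

Comparing such scores across waves (here, 2012 vs. 2014) is what reveals whether influence is concentrating in a shrinking elite.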

Lesson Learned #2: Use “Density” metrics to understand actual and potential connectivity within a changing network

Understanding how a political network integrates new members is critical for evaluating network sustainability and effectiveness.  However, changing membership makes panel studies challenging.  In South Korea, to compare how founding and new organizations preferred to collaborate with how they were actually collaborating, we used a spider web graph to plot the density of three kinds of linkages within the coalition: old-to-old, old-to-new, and new-to-new.  As expected, the founding organizations were highly connected to each other, with an in-group “density” of 74 percent.  In contrast, the new organizations were less connected to each other, with only a 27 percent in-group density score. We also found a relatively high between-group density (69 percent) of linkages between old and new members.  When we asked members who they preferred to work with, between-group density rose to 100 percent, indicating a strong commitment among founding members to collaborate with new members around advocacy, civic education, and human rights initiatives.  However, the overlapping graphs indicated this commitment had not yet been realized.
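In-group and between-group density are simple proportions: ties present divided by ties possible. A sketch with invented members, not NDI's actual coalition data:

```python
def density(edges, group_a, group_b=None):
    """Share of possible ties that actually exist.

    In-group density when only group_a is given; between-group
    density of group_a-to-group_b ties otherwise.
    """
    if group_b is None:
        possible = len(group_a) * (len(group_a) - 1) // 2
        present = sum(1 for u, v in edges if u in group_a and v in group_a)
    else:
        possible = len(group_a) * len(group_b)
        present = sum(1 for u, v in edges
                      if (u in group_a and v in group_b)
                      or (u in group_b and v in group_a))
    return present / possible if possible else 0.0

old = {"O1", "O2", "O3"}          # founding organizations (hypothetical)
new = {"N1", "N2", "N3"}          # newer members (hypothetical)
ties = [("O1", "O2"), ("O2", "O3"), ("O1", "O3"),   # old-to-old: 3 of 3
        ("N1", "N2"),                               # new-to-new: 1 of 3
        ("O1", "N1"), ("O2", "N3")]                 # old-to-new: 2 of 9

print(density(ties, old))                 # 1.0
print(round(density(ties, new), 2))       # 0.33
print(round(density(ties, old, new), 2))  # 0.22
```

Running the same calculation twice, once on reported ties and once on preferred ties, produces the actual-versus-potential comparison described above.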

Rad Resource – After grappling with UCINET software over the years, we finally landed on NodeXL, a free, Excel-based software program.  While UCINET has more unique and complex features, for ease of managing and transforming SNA data we prefer NodeXL.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members.


My name is Kayla Brooks and I am the Monitoring and Evaluation Coordinator at One Earth Future Foundation. One of my primary responsibilities involves measuring the level of collaboration within the networks of our implementation projects. In this post, I discuss how to use social network analysis to measure collaboration and network effectiveness and provide evidence-based recommendations to your stakeholders.

Hot Tip #1 – Introduce program staff to social network analysis early and often.

Guide stakeholders through potential findings and uses of SNA. Inquire about what network information they need to help to move their program forward.

Hot Tip #2 – Use creativity to discover open-source data and collect it systematically.

Ideally, you should survey or interview most, if not all, actors in a network to collect information on their partners and relationships. However, surveys and interviews may not be appropriate or feasible in some circumstances. In those cases, you can collect relationship data from open-source or internal project documents.

Hot Tip #3 – Make your data collection process manageable with the following good practices:

  • Compile an exhaustive list of relevant and specific keywords in partnership with program staff to help in your data search;
  • Perform foreign language searches if dealing with international networks; and,
  • Cross-reference your data with other existing evidence and discussions with program staff.
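Once such documents are in hand, turning co-mentions of actors into tie data is straightforward. A minimal sketch, with invented organization names and document contents:

```python
from collections import Counter
from itertools import combinations

# Hypothetical actor mentions extracted from open-source documents
# (press releases, meeting minutes) using the keyword list above.
documents = [
    {"Org A", "Org B", "Org C"},   # actors named together in document 1
    {"Org B", "Org C"},            # document 2
    {"Org C", "Org D"},            # document 3
]

ties = Counter()
for actors in documents:
    for pair in combinations(sorted(actors), 2):
        ties[pair] += 1

# Repeated co-mention is weak but cumulative evidence of a relationship;
# cross-reference high counts with program staff before treating them as ties.
print(ties[("Org B", "Org C")])   # 2
```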

Hot Tip #4 – Perform regular updates and reviews of your data for accuracy and timeliness.

If you are interested in measuring the change of your network over time, update your network data regularly. After each data collection update, verify the new data with stakeholders to ensure its integrity.

Rad Resource – If you are looking for affordable, easy-to-learn network analysis software, check out Gephi. Gephi is a point-and-click tool that both visualizes and analyzes network data in a single platform without requiring advanced programming skills.

Lesson Learned – Use surveys and interviews to uncover hidden details behind relationships.

Surveying or interviewing network participants can help to fill gaps where archival data falls short, such as participant motivations for being a member of the network or perceptions about the quality of relationships in the network. Surveying or interviewing network participants can also identify actors who do not share the objectives of others in the network, informing strategic decisions to perhaps dissolve certain relationships and improve network effectiveness.

Survey and interview data can also help you develop detailed profiles of stakeholders. Once important actors within the network are identified, it may be useful to assess their strategic importance through approaches like stakeholder mapping.

Good luck using social network analysis, and remember, it’s always helpful to start with a well-formulated plan that outlines why you are using it and how it will further your stakeholders’ goals.



My name is Sheena Horton, Senior Analyst at MGT of America, Inc., and Board Member for the Southeast Evaluation Association (SEA). It is no surprise that the use of data visualization in reporting and marketing is thriving. Studies have shown that humans process visual information better and faster than text. Data visualization can provide viewers with easily understood, actionable data and be more engaging to an audience in instances when the use of simple text can fall short.

Lesson Learned: Extend your use of data visualization beyond the workplace, and apply your data design skills to evaluating and strengthening your professional network, skills, and career profile.

Hot Tip: Conduct a social network analysis to evaluate your professional social network to identify your strong connections. Look for areas in your field, specialization, geographic location, or position type and level (e.g. managerial or mid-level) where you may need to build better connections. Note the networks where you can make contributions, and identify the best connections for conducting outreach to learn more about a specific area or skill.

Rad Resources: There are numerous data visualization tools available online to help you get started analyzing your social network. Socilab can provide you with a high-level overview of your LinkedIn network connections to jumpstart your analysis.

Hot Tip: Use data visualization to take stock of your hard and soft skills to determine the range of your strengths and to pinpoint skills to develop. A simple mind map of your core skills can help you see where you can build upon your current skill set, or discover new skill areas to develop. Mind mapping adds focus to your professional development brainstorming, and helps to initiate an action plan.

Rad Resource: MindMeister is a popular and user-friendly mind mapping tool that can help you to start charting your skills quickly.

Hot Tip: Include data visualization in your resume, on LinkedIn, or on your professional website to showcase your skills and your career through visual storytelling. Determine where using data visualization can be useful based on your audience and the message you want to communicate. The visualization you select should display your data appropriately and engage your audience. A minimalist design works best; be careful not to go overboard. The key is to communicate your data simply and quickly.

Rad Resources: ResumUP and Vizualize.me are good starting resources to experiment with framing your data and gathering ideas for display. A visualization that works for one person’s data may not work as well with your own.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members.


My name is Rebecca Woodland and I have had the pleasure of working to evaluate and cultivate organizational collaboration in a range of contexts and for many different purposes. In this post I’ll share tips that evaluators can use in the developmental, formative, and summative evaluation of inter-organizational and inter-personal collaboration. I’m excited to be delivering an AEA 2015 pre-conference workshop that goes into detail about these hot tips – maybe I’ll see you there!

Hot Tip #1 – Make collaboration less messy. Though ubiquitous, “collaboration” persists as an under-empiricized concept. One of the first things that evaluators looking to assess collaboration will need to do is operationalize the construct. Familiarize yourself with collaboration theory and find specific suggestions for facilitating a shared understanding of collaboration in Utilizing Collaboration Theory to Evaluate Strategic Alliances and the Collaboration Evaluation and Improvement Framework.

Hot Tip #2 – More collaboration is not always better. Levels of integration between organizations matter, but the scope and scale of integration should match the purpose and goals of the alliance.

  • The Levels of Organizational Integration Rubric (LOIR) describes five possible levels of inter-organizational integration and the purposes, strategies/tasks, leadership/decision-making, and communication characteristics that tend to be present in each. Use the LOIR to measure and cultivate ideal levels of inter-organizational collaboration.

Hot Tip #3 – Avoid “co-blaboration.” The evaluation of inter-personal collaboration can help organizational stakeholders avoid “collaboration lite,” whereby mere congeniality and imprecise conversation are confused with the type of disciplined inquiry vital to the diffusion of innovation and attainment of essential outcomes.

  • The Team Collaboration Assessment Rubric (TCAR) describes four fundamental elements of high-quality inter-personal collaboration: dialogue, decision-making, action, and evaluation. Evaluators are encouraged to adapt and administer the TCAR in ways that are most feasible, useful, and appropriate for the context of their program evaluation.

Hot Tip #4 – Use Social Network Analysis (SNA) methods (if you don’t already). SNA is a sophisticated, yet accessible, means for assessing organizational collaboration. Evaluators can use SNA to mathematically describe and visually see how “ties” between organizations or people form, and how these “links” may influence program implementation and the attainment of desired outcomes.

Rad Resource:

Coalitions that Work® offers excellent tools for evaluating coalitions and partnerships that are available in .pdf format.

Want to learn more? Register for Evaluating and Improving Organizational Collaboration at Evaluation 2015 in Chicago, IL!

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2015 in Chicago, IL.

My name is Sophia Guevara, Program Co-Chair for the Social Network Analysis (SNA) TIG.  This week, several evaluation professionals have shared their thoughts on social network analysis with this blog’s readers. With posts ranging from logic models to examples of applying social network analysis to a wide range of evaluation questions, we hope you have gained a better understanding of it.

Rad Resource: The SNA in Evaluation LinkedIn group. This group provides TIG group members with an opportunity to discuss topics of interest for those utilizing or learning about social network analysis.

Rad Resource: Join the SNA TIG group. As a member, make sure to make use of the eGroup discussion option.

Rad Resource: SNA TIG business meeting. If you are thinking of joining the TIG or have already joined and are looking to connect with other evaluation professionals making use of SNA, the business meeting is an excellent place to do just that. The SNA TIG business meeting is held at the annual American Evaluation Association conference.

Rad Resource: AEA public eLibrary and the Coffee Break Archive. There are a variety of resources that can help you learn more about the topic. For example, if you are looking to learn more about SNA-related programs, check out Dr. Geletta’s coffee break webinar focused on importing spreadsheet data into Gephi.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.

 



Hello from Andres Lazaro Lopez and Mari Kemis from the Research Institute for Studies in Education at Iowa State University. As STEM education becomes more of a national priority, state governments and education professionals are increasingly collaborating with nonprofits and businesses to implement statewide STEM initiatives. Supported by National Science Foundation funding, we have been tasked to conduct a process evaluation of the Iowa statewide STEM initiative in order to both assess Iowa’s initiative and create a logic model that will help inform other states on model STEM evaluation.

While social network analysis (SNA) has become commonly used to examine STEM challenges and strategies for advancement (particularly for women faculty, racial minorities, young girls, and STEM teacher turnover), to our knowledge we are the first to use SNA specifically to understand a statewide STEM initiative’s collaboration, growth, potential, and bias. Our evaluation focuses specifically on the state’s six regional STEM networks, their growth and density over the initiative’s years (’07–’15), and the professional affiliations of their collaborators. How we translated that into actionable decision points for key stakeholders is the focus of this blog.

Lessons Learned: With interest in both the boundaries of the statewide network and the ego networks of key STEM players, we decided to use both free and fixed recall approaches. Using data from an extensive document analysis, we identified 391 STEM professionals for our roster approach. We asked respondents to categorize this list by the people they knew and worked with. Next, the free recall section allowed respondents to list the professionals they rely on most to accomplish their STEM work and their level of weekly communication, generating 483 additional names not identified with the roster approach. Both strategies allowed us to measure potential and actual collaboration within the well-known network of STEM professionals (roster) and individuals’ local networks (free recall).
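Combining the two name generators is mostly set arithmetic: the free-recall answers are pooled and compared against the roster to surface names the document analysis missed. A sketch with invented names rather than the Iowa roster:

```python
# Roster approach: names identified through document analysis
# (three hypothetical stand-ins for the 391 actual names).
roster = {"Pat Q.", "Lee R.", "Sam T."}

# Free recall: each respondent lists whoever they rely on, roster or not.
free_recall = [
    ["Lee R.", "Dana V."],     # respondent 1
    ["Kim W.", "Dana V."],     # respondent 2
]

recalled = {name for answers in free_recall for name in answers}
new_names = sorted(recalled - roster)   # surfaced only by free recall

print(new_names)   # ['Dana V.', 'Kim W.']
```

In practice the subtraction step needs name disambiguation (spelling variants, nicknames) before set membership is trustworthy.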


Lessons Learned: The data offered compelling information for both regional and statewide use. Centrality measurements helped identify regional players that had important network positions but were underutilized. Network diameter and clique score measurements informed the executive council of overall network health and specific areas that require initiative resources.

Lessons Learned: Most importantly, the SNA data allowed the initiative to see beyond the usual go-to stakeholders. With a variety of SNA measurements and our three variables, we have been successful in identifying a diverse list of stakeholders while offering suggestions for trimming the networks’ size without creating single points of fracture. SNA has been an invaluable tool for formally classifying and evaluating the positions of key STEM players. We recommend that other STEM initiatives interested in using SNA begin identifying a roster of collaborators early in the development of their initiative.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.

 

Hi, I’m Rebecca Woodland, an Associate Professor of Educational Leadership at UMass Amherst. If there is one thing that I know for certain it’s that relationships matter and how we are connected influences the quality and outcomes of our shared endeavors. Social Network Analysis (SNA) has had a profound influence on my evaluation work. I want to introduce and encourage evaluators (who may not know much about SNA) to consider integrating it into their own practice.

Simply put, SNA is all about telling the story of how “ties” between people or groups form, and how these “links” may influence important program objectives and outcomes. With SNA you can mathematically describe and visually see connections between people. You can use SNA to explain and predict how ties between “actors” influence the attainment of program goals.

Hot Tips: Evaluators can use SNA to address a wide-range of pressing program evaluation questions such as these:

  1. Want to know whether a program has the capacity to spread a new or novel intervention? SNA was used to evaluate school-level capacity to support or constrain instructional innovation.
  2. Want to know how large, inter-agency partnerships develop and how inter-agency collaboration correlates with intended program outcomes? Evaluators used SNA to track the development and impact of a Safe Schools/Healthy Students inter-agency community mental health network.
  3. Want to know who influences the budgeting and disbursement of funds for advocacy programs in fragile environments? SNA was used to map the flow of resources and funding patterns for new-born survival activities in northern Nigeria.

Lesson Learned: Possibly the biggest wow factor is that SNA enables the creation of illustrative visuals that display complex information, such as intra-organizational communication flow and the location of network “brokers,” “hubs,” “isolates” and “cliques”, in user-friendly ways.
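Most SNA packages report brokers, hubs, isolates, and cliques directly, but the simplest of these features take only a few lines to find. A pure-Python sketch with invented actors, identifying the isolates and the best-connected hub:

```python
from collections import Counter

nodes = {"A", "B", "C", "D", "E", "F"}
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]

# Degree: how many ties each actor has.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

isolates = sorted(nodes - set(degree))   # actors with no ties at all
hub = max(degree, key=degree.get)        # best-connected actor

print(isolates)   # ['E', 'F']
print(hub)        # A
```

Brokers and cliques require shortest-path and subgroup algorithms, which is where dedicated software earns its keep.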

Image used under a Creative Commons Attribution 3.0 License.

Rad Resources

  • VisuaLyzer is an easy-to-use program (with a 30-day free trial) that enables you to create sociograms for any network of interest to you.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.

Hi, I’m John Burrett, of Haiku Analytics Inc., Ottawa. One serious problem with logic models is that they usually leave out external influences and feedback effects, even when these may be important, because including them makes the model “too complex”. It is good to simplify, but ignoring important influences on program success when planning an evaluation may lead evaluators to fail to collect important data and to misinterpret results.

Trying to embrace complexity by drawing a web of boxes and arrows is not helpful: it is too complex to use and explain and will drive your audience away. Such a web will also probably come only from the mind of the evaluator or program manager, and so can easily miss important external influences and other complexities.

Hot Tip: I recently stumbled onto an alternative approach during a mapping of factors of cause and effect related to a complex policy problem. Data were obtained from an expert panel, which developed a matrix linking a number of factors with estimates of the strength and direction of the relationship between each pair. Mapping this with network analysis software helped the panel visualize what they had created.

It followed that this form of data could generate outcomes chains and logic models. Here’s a simple example: a program supporting trades training by providing grants to students and developing state-of-the-art teaching materials in collaboration with trade schools drives the immediate outcomes of…

  • Students gaining the ability to take training and
  • Currency and quality of the training being improved, in order to achieve
  • The ultimate outcome of increased employment.

Exogenous effects influencing these results include cost of living, demand for skills and technical changes affecting the training’s currency. The size of the nodes indicates betweenness centrality, identifying those factors that connect many influences, thus propagating certain effects. The width of the edges indicates the hypothesized strength of influence. Possible unintended effects and a feedback loop are also shown.
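The underlying data structure is just the expert panel's matrix turned into a weighted, directed edge list, which network software can then size and draw. A minimal sketch, with invented factor names and ratings rather than the panel's actual data:

```python
# Hypothetical expert-panel matrix: influence[i][j] is the rated strength
# (0 = none, 3 = strong) of factor i's effect on factor j.
factors = ["student grants", "training quality", "demand for skills",
           "increased employment"]
influence = [
    [0, 0, 0, 2],   # grants -> employment
    [0, 0, 0, 3],   # training quality -> employment
    [0, 1, 0, 2],   # demand -> training quality, demand -> employment
    [0, 0, 0, 0],
]

# Directed, weighted edge list: edge width maps to rated strength, and
# node size can map to a centrality score computed on this same structure.
edges = [(factors[i], factors[j], w)
         for i, row in enumerate(influence)
         for j, w in enumerate(row) if w > 0]

print(len(edges))   # 4
```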


Lesson Learned: A key advantage of this approach is that it creates a logic model using expert knowledge, rather than simply an evaluator’s or manager’s understanding of a program. It could also draw on other sources of information, such as findings from the literature and program stakeholders’ experiences. Importantly, you could do this without imposing any prior idea of the logic model on those providing the cause-effect data, other than including the program/outputs/activities and specifying the immediate/intermediate and ultimate intended outcomes.

A second major advantage is that the logic model utilizes network metrics generated from the data, so how the program and influences are expected to be related can be analyzed. For instance, factors that are thought to have an important role in propagating effects across the system would show high betweenness/eigenvector centralities.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.

I’m Bethany Laursen, an independent consultant and evaluation specialist for several units at the University of Wisconsin. I fell in love with social network analysis (SNA) as a graduate student because SNA gave me words and pictures to describe how I think. Many evaluators can relate to my story, but one of the challenges to using SNA in evaluation is identifying what counts as a “good” network structure.

Hot Tip: Identify keywords in your evaluation question(s) that tell you what counts as a “good” outcome or goal for your evaluand. An example evaluation question might be: “How can we improve the collaboration capacity of our coalition?” The stated goal is “collaboration capacity.”

Hot Tip: Use published literature to specify which network structure(s) promote that goal. Social networks are complex, requiring rigorous research to understand which functions emerge from different structures. It would be unethical for an evaluator to guess. Fortunately, a lot has been published in journals and in gray and white papers.

Continuing our example, we need to research what kinds of networks foster “collaboration capacity” so we can compare our coalition’s network to this standard and find areas for improvement. You may find a robust definition of “collaboration capacity” in the literature, but if you don’t, you will have to specify what “collaboration capacity” looks like in your coalition. Perhaps you settle on “timely exchange of resources.” Now, what does the literature say about which kinds of networks promote “timely exchange of resources”? Although many social network theories cut across subject domains, it’s best to start with your subject domain (e.g. coalitions) to help ensure assumptions and definitions mesh with your evaluand. Review papers are an excellent resource.

Lesson Learned: Although a lot has been published, many gaps remain. Sometimes the SNA literature may not be clear about which kinds of networks promote the goals you’ve identified for your evaluand. In this case, you can either 1) do some scholarship to synthesize the literature and argue for such a standard, or 2) go back to your evaluation question and redefine the goal in narrower terms that are described in the literature.

In our example, the literature on coalition networks may not have reached consensus about which types of networks promote timely exchange of resources. But perhaps reviews have been published on which types of brokers foster diversity in coalitions. You can either 1) synthesize the coalition literature to create a rigorous standard for “timely exchange of resources,” or 2) reframe the overall evaluation question as, “How can brokers improve diversity in our coalition’s network?”

Rad Resources:

This white paper clearly describes network structures that promote different types of conversations in social media.

This short webinar reports a meta-synthesis of which networks promote adaptive co-management capacity at different stages of the adaptive cycle.

Different network structures promote different system functions. This is the take home slide from the Rad Resource on ACM capacity. In this case, the evaluand’s network goal is timely social learning, collective action, and resilience.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.
