AEA365 | A Tip-a-Day by and for Evaluators

CAT | Quantitative Methods: Theory and Design

Hello! We’re Allan Porowski from ICF International and Heather Clawson from Communities In Schools (CIS). We completed a five-year, comprehensive, mixed-method evaluation of CIS that featured several study components: three student-level randomized controlled trials; a school-level quasi-experimental study; eight case studies; a natural variation study to identify what factors distinguished the most successful CIS sites from others; and a benchmarking study to identify what lessons CIS could draw from other youth-serving organizations. We learned a lot about mixed-method evaluations over the course of this study and wanted to share a few of those lessons with you.

Lessons Learned:

  • Complex research questions require complex methods. Disconnects exist between research and practice because the fundamental research question in an impact evaluation (i.e., Does the intervention work?) provides little practical utility for practitioners in their daily work. CIS leadership not only wanted to know whether CIS worked, but also how it worked, why it worked, and in what situations it worked, so they could engage in evidence-informed decision making. These more nuanced research questions required a mixed-methods approach. Moreover, CIS field staff already believed in what they were doing – they wanted to know how to be more effective. Mixed-methods approaches are therefore a key prerequisite for capturing the nuance and the process evaluation findings desired by practitioners.
  • Practitioners are an ideal source of information for determining how much “evaluation capital” you have. CIS serves nearly 1.3 million youth in 25 states, which means different affiliates may employ different language, processes, and even philosophies about best practice. In working with such a widespread network of affiliates, we saw the need to convene an “Implementation Task Force” of practitioners to help us set parameters around the evaluation. This group met monthly and proved incredibly helpful in (a) identifying language commonly used by CIS sites nationwide to include in our surveys, (b) reviewing surveys and ensuring that they were capturing what was “really happening” in CIS schools, and (c) identifying how much “evaluation capital” we had at our disposal (e.g., how long surveys could take before they posed too much burden).
  • The most important message you can convey: “We’re not doing this evaluation to you; we’re doing this evaluation with you.” Although it was incumbent upon us as evaluators to be dispassionate observers, that did not preclude us from engaging the field. Evaluation – and especially mixed-methods evaluation – requires the development of relationships to acquire data, provide assistance, build evaluation capacity, and message findings. As evaluators, we share the desire of practitioners to learn what works. By including practitioners in our Implementation Task Force and our Network Evaluation Advisory Committee, we were able to ensure that we were learning together and that we were working toward a common goal: to make the evaluation’s results useful for CIS staff working directly with students.

Resources:

  • Executive Summary of CIS’s Five-Year National Evaluation
  • Communities In Schools surrounds students with a community of support, empowering them to stay in school and achieve in life. Through a school-based coordinator, CIS connects students and their families to critical community resources, tailored to local needs. Working in nearly 2,700 schools, in the most challenged communities in 25 states and the District of Columbia, Communities In Schools serves nearly 1.26 million young people and their families every year.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! My name is Tina Phillips and I am the evaluation program manager at the Cornell Lab of Ornithology. I lead an NSF-funded project called DEVISE (Developing, Validating and Implementing Situated Evaluations), which is aimed at providing practitioners and evaluators with tools to assess individual learning outcomes from citizen science, or public participation in scientific research (PPSR), projects. Within the context of citizen science, we intend to test and validate a suite of instruments across different projects and assess how they perform in different settings. The first thing we did was to assess the state of citizen science evaluations, which formed the basis for a draft framework for assessing learning outcomes. This framework includes six major constructs that represent common outcomes across diverse projects: interest in science, motivation to participate, knowledge of the nature of science, skills of science inquiry, environmental stewardship behaviors, and science identity.

Lessons Learned: Developing and validating scales is hard! If you’ve done this before, you know what I mean. If you haven’t, don’t underestimate the amount of time it will take to do this well. For instance, prior to developing scales, we conducted an extensive inventory of existing scales that were aligned to our framework and relevant to STEM (science, technology, engineering, and mathematics) and informal science learning environments. Gathering these scales and the associated literature to document their psychometric properties was labor intensive. Next, as a team, we reviewed and rated each of these scales to determine their contextual relevance to citizen science. From there, we devised a plan for testing or modifying an existing scale, or developing a brand new instrument. For example, one scale is being developed using concept mapping, another from existing scales, and another as an item data bank. Once these scales are drafted, they still need to be tested with a variety of audiences and contexts to meet satisfactory validity and reliability criteria.
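Since reliability testing is part of the piloting work described above, here is a minimal sketch (not part of the DEVISE toolkit) of one common check applied to a drafted scale: Cronbach's alpha computed from hypothetical pilot responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of pilot responses, shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_scores = items.sum(axis=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_scores.var(ddof=1))

rng = np.random.default_rng(1)
# Hypothetical pilot data: 40 respondents answering 8 Likert items (1-5).
pilot = rng.integers(1, 6, size=(40, 8)).astype(float)
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
```

A run on real pilot data would be interpreted alongside validity evidence; the random data here will, of course, produce a low alpha.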

Hot Tip: Seek the help of psychometricians and others who have developed valid and reliable scales.

Rad resource: Once finalized, the DEVISE toolkit will be openly available via the Citizen Science Toolkit website. This dynamic site is geared towards citizen science practitioners and provides featured projects and a host of resources for working within the citizen science arena.

Rad resource: Another great resource is the Assessment Tools for Informal Science (ATIS) website. The site offers detailed information for over 60 instruments categorized by age, domain, and assessment type. They are currently seeking reviews of instruments by end users.

The American Evaluation Association is celebrating Environmental Program Evaluation Week with our colleagues in AEA’s Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Staci Wendt and I am a Research Associate at RMC Research in Portland, Oregon. Last year, I completed my Ph.D. in Applied Psychology at Portland State University. After finishing my degree, I was concerned about how to stay current with statistical literature and how to practice techniques that I learned in school, but wasn’t currently using in my work.

Hot Tip - One day, a friend was talking with me about her fiction book club and I had an “Aha!” moment—a book club where we discussed statistics!

Who: We have a small group of people with varying knowledge and experience related to statistics and research methods. Our group is made up of six members, which eases scheduling and allows each of us the opportunity to contribute meaningfully.

When: While our regular meetings are held monthly, we are also available to each other via email throughout the month. The email discussions allow for quick feedback on questions or issues that might arise within our day-to-day work.

What: At our first meeting, we discussed our goals and expectations for the group, brainstormed a list of topics we wanted to discuss, and decided on the format for our group. After this discussion the group decided that in order to make the group both useful and doable we would meet monthly but vary the meeting type. On odd-numbered months, we have formal meetings, where we discuss a pre-determined topic (such as Structural Equation Modeling). We take turns facilitating these formal meetings. The facilitator is responsible for selecting pertinent sub-topics of the theme (e.g., model fit, assumptions of the statistical test, how-to) and assigning them to each member. Each member is then responsible for creating a small “cheat-sheet” on that topic and presenting the information at our meeting. Our presentations are mostly casual in order to encourage a good environment for discussion. We also try to bring pertinent “real-world” examples, either from the literature, or from our own work. On the even-numbered months, we have informal meetings. At these meetings, we bring any specific question or topic that we want to discuss, or review information from the previous meeting. The main difference between the formal and informal meetings is that we don’t have any preparation work for the informal meetings.

Where: We rotate meeting at different group members’ homes for the formal meetings. This allows one person to take notes (which are later distributed to the group) and we have room for reference books. For the informal meetings, we try to meet at restaurants, to add to the relaxed nature of the meeting.

The most important thing is to set group goals, make adjustments as you try it out, and HAVE FUN!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings. My name is Ricardo Gomez and I currently work as a Research and Evaluation Associate for the National Collegiate Inventors and Innovators Alliance. I am also a doctoral candidate in International Education at the University of Massachusetts Amherst, Center for International Education, and alumnus of the AEA-Duquesne University Graduate Diversity Internship Program.

The opinions of stakeholders are crucial because they can shape the direction of programs and can have an impact on program execution, scalability, and performance. Hence, as a researcher and evaluator, I have always been interested in finding ways to gauge the subjectivity (i.e., opinions, perceptions, attitudes, and motivations) of evaluation participants, and incorporate these into the different phases of my evaluation activities.

Lesson Learned – Q methodology is a powerful tool that evaluators can use to explore the perspectives of evaluation participants. First used and advanced by William Stephenson in the 1930s, Q is a research method that statistically identifies different points of view (or subjectivities) on a given topic based on how individuals sort a set of statements about that topic.

Traditionally, evaluators have relied on interviews or surveys with Likert-type items to gauge the opinions of evaluation participants. These approaches are not without their drawbacks: the typical outcome of analyzing Likert-type items is a description of pre-specified, independent categories deemed relevant by the evaluator, and interviews can be time-consuming and intrusive.

The outcome of a Q study, on the other hand, is a more authentic set of factors that capture people’s attitudes and perspectives about an issue. In Q method, a group of participants (the p-set) sort a sample of items (the q-set) into a subjectively meaningful pattern (the q-sort). Resulting q-sorts are analyzed using correlation and factor analysis (q-analysis), yielding a set of factors whose interpretation reveals a set of points of view (the f-set).
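For readers who want to see the mechanics, here is a minimal sketch of the correlating-and-factoring step on hypothetical q-sort data. It uses unrotated principal components in place of the rotated factor solutions most Q studies report; dedicated Q software handles the full workflow.

```python
import numpy as np

rng = np.random.default_rng(0)
n_statements, n_participants = 30, 8
# Hypothetical q-sorts: each column holds one participant's rankings of the
# 30 statements (e.g., from -4 "most disagree" to +4 "most agree").
qsorts = rng.integers(-4, 5, size=(n_statements, n_participants)).astype(float)

# q-analysis starts by correlating people (columns) with one another.
corr = np.corrcoef(qsorts, rowvar=False)

# Unrotated principal components of the person-by-person correlation matrix.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                 # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 2                                     # chosen for illustration
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# Participants who load highly on the same factor share a point of view;
# interpreting those factors yields the f-set described above.
print(np.round(loadings, 2))
```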

Rad Resource: Visit www.broaderimpacts.org/aea2011 for an online example of a Q-sort process.

Lesson Learned: Q methodology is an important bridge between qualitative and quantitative methods in that it provides a means for analyzing the phenomenological world of a small number of individuals without sacrificing the power of statistical analysis.

Rad Resource – The International Society for the Scientific Study of Subjectivity (ISSSS) is the official organization committed to the ideas and concepts of Q methodology as enunciated by William Stephenson. ISSSS administers an email discussion list dedicated to the exchange of information related to Q methodology. To learn more about Q methodology, join ISSSS, or become a member of the email discussion list, please visit www.qmethod.org.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, my name is Juan Paulo Ramírez, and I am an independent consultant and sole owner of “GIS and Human Dimensions, L.L.C.” How many times have you used spreadsheets or sophisticated statistical software (e.g., SAS, SPSS) to estimate frequencies for a population and asked yourself: is it really necessary to do this with very expensive and sophisticated software? Or spent hours and hours cleaning up the data to make it consistent within and between records and variables? Is there a better, more efficient way to complete these tedious and time-consuming tasks? There is, and Google Refine is the answer!

Lessons learned: Google Refine is a free desktop application (not a web service) that you install on your computer (you can download it here). Google Refine allows users to seamlessly and efficiently calculate frequencies and cross-tabulate data from large datasets (e.g., hundreds of thousands of records), as well as clean up your data. What I found is that you learn Google Refine mostly by trial and error, and you quickly discover how easy it is to get the information you need in a few steps. Google Refine has saved me days of hard work! It works with numeric, time, and text data and lets you work directly with Excel files.

The following are a few examples of how I have used Google Refine: 1) Getting demographic frequencies (e.g., gender, age) and cross-tabulating them with economic variables (e.g., income) and location (e.g., county). 2) Cleaning up data that is inconsistent because people sometimes answered questions without any written restrictions (e.g., lengthy responses, spelling errors, blank spaces). 3) When you select a date variable, Google Refine creates a bar chart with two ends that you can drag with your mouse to define specific time periods. 4) If you make a mistake, Google Refine allows you to undo everything you have done!
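For evaluators who prefer scripting, the first two tasks above can be approximated in pandas rather than Google Refine. A minimal sketch, with hypothetical column names:

```python
import pandas as pd

# Hypothetical survey extract; the column names are made up for illustration.
df = pd.DataFrame({
    "gender": ["F", "M", "f ", "F", "M"],
    "county": ["Lancaster", "Douglas", "Lancaster", "Lancaster", "Douglas"],
    "income": [42000, 55000, 38000, 61000, 47000],
})

# Light cleanup of inconsistent text entries (stray spaces, mixed case).
df["gender"] = df["gender"].str.strip().str.upper()

# Frequencies for one demographic variable.
print(df["gender"].value_counts())

# Cross-tabulation of a demographic variable against location,
# showing mean income in each cell.
print(pd.crosstab(df["gender"], df["county"], values=df["income"], aggfunc="mean"))
```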

Rad resource: There are three videos available that show the potential applications of Google Refine. You can watch them here. I watched the first video once and it was enough to convince me that this was a must-have application. I started using it right away, and it has become one of the most essential tools I use in my work.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Susan Kistler, the American Evaluation Association’s Executive Director, and aea365’s Saturday contributor. Our eStudy director just announced the lineup for January and February!

Lesson Learned: AEA’s eStudy offerings are online, real-time, webinar-based training right from your desktop with no need to fly, or get lost in traffic, or lose extra time away from work, or even change out of your PJs if you are so inclined. Facilitators are chosen from among the best of those offering AEA workshops and the topics are ones that are most sought-after by registrants.

Hot Tip: Registration is open to both AEA members and nonmembers, and students receive significantly discounted registration rates. For one registration fee, you may attend one or all of the sessions for a particular workshop. Here’s the January/February lineup.

Social Network Analysis
Tuesdays January 10, 17, 24, & 31, 1:00 – 2:30 PM Eastern Time
This eStudy provides an introduction to social network analysis theories, concepts, and applications within the context of evaluation, including network concepts, methods, and the software that provides for analysis of network properties. We’ll use real world examples and discussion to facilitate a better understanding of network structure, function and data collection.
Presenter: Kimberly Fredericks conducts social network analyses in her role as associate professor at The Sage Colleges. Kim is a regular author and speaker, including co-editing a New Directions for Evaluation issue on Social Network Analysis in Program Evaluation.
Cost: $150 Members, $200 Nonmembers, $80 Students

Applications of Correlation and Regression: Mediation, Moderation, and More
Wednesdays February 8, 15, 22, & 29, 1:00 – 2:30 PM Eastern Time
Regression analyses are used to describe multivariate relationships, test theories, make predictions, and model relationships. We’ll explore data issues that may impact correlations and regression, selecting appropriate models, preparing data for analysis, running SPSS analyses, interpreting results, and presenting findings to a nontechnical audience.
Presenter: Dale Berger of Claremont Graduate University is a lauded teacher of workshops and classes in statistical methods and recipient of the outstanding teaching award from the Western Psychological Association.

Cost: $150 Members, $200 Nonmembers, $80 Students

Empowerment Evaluation
Tuesday & Thursday February 21 & 23, 3:00 – 4:30 PM Eastern Time
Empowerment evaluation builds program capacity, fosters program improvement, and produces outcomes. It teaches people how to help themselves by learning how to evaluate their programs. This eStudy will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.
Presenter: David Fetterman is president and CEO of Fetterman & Associates. He is the founder of empowerment evaluation and the author of over 10 books including Empowerment Evaluation Principles in Practice with Abraham Wandersman.
Cost: $75 Members, $100 Nonmembers, $40 Students

See the full descriptions and register for one, two, or all three online here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association.


Hello! We are Katrina Brewsaugh and Stephen Brehm. We are the Data Team at One Hope United, an agency providing a wide range of child and family services in Illinois, Missouri, and Florida.

Evaluators are often tasked with measuring how well a program is meeting performance measures and with recommending whether programs that are not meeting targets can improve. When large gaps exist between established targets and actual performance, programs may be under pressure to close the gap in a very short amount of time, often without additional supports. Rational Target Setting Methodology (RTSM; Zirps, 2003) is an approach that uses past performance and an assessment of the current environment to set an ambitious yet realistic trajectory for improvement or for attaining desired results.

Hot Tip: RTSM can be used by evaluators as a framework to facilitate discussions with internal and external stakeholders about what amount of improvement can reasonably be expected in a 12-month time span and what resources would be needed to make a specific level of improvement.

Hot Tip: RTSM takes into account the level of support in policy, priority, resources, and training, with each area weighted from zero to three. A low weighting in one area can be offset by a high weighting in another. For instance, low resource allocation may be offset by a strong emphasis on staff training. Evaluators can use this knowledge to help programs design their own action plans for improvement, while potentially increasing their buy-in to the change effort.

Hot Tip: The sum of the quadrant ratings corresponds to a range of anticipated improvement. Closing large gaps requires a multi-year effort; even if all quadrants were given the maximum rating of 3, there is no way to make more than a 50-point improvement in one year. It is important to remember that progress toward our goals will be proportional to the resources and effort we direct toward them.
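To make the arithmetic concrete, here is a minimal sketch of the scoring logic described above. The improvement bands below are purely hypothetical; the actual ranges come from Zirps (2003) and the authors' conference materials.

```python
def expected_improvement(policy: int, priority: int, resources: int, training: int) -> str:
    """Sum the four quadrant ratings (0-3 each) and map the total to a band."""
    ratings = [policy, priority, resources, training]
    if any(not 0 <= r <= 3 for r in ratings):
        raise ValueError("each quadrant rating must be between 0 and 3")
    total = sum(ratings)  # 0 to 12
    # Hypothetical bands; even the top band stays under a 50-point gain in a
    # single year, reflecting the point that large gaps take multiple years.
    if total <= 4:
        return "minimal improvement expected this year"
    if total <= 8:
        return "moderate improvement expected this year"
    return "substantial improvement expected, but still under ~50 points"

print(expected_improvement(policy=2, priority=3, resources=1, training=3))
```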

Lesson Learned: It is vital that discussions and ratings are realistic. In our practice with RTSM, groups have at times been overly optimistic about one or more categories and are then disappointed when the amount of change is less than expected. Should any of the categories go through significant change during the year (e.g., unexpected funding cuts), the weights would have to change accordingly as well.

Rad Resource: For more information on RTSM, please visit the AEA eLibrary where we have posted materials from our 2011 Conference session or contact Katrina Brewsaugh at kbrewsaugh@onehopeunited.org.

Rad Resource: Zirps, F. (2003). Still doing it right: A guide to quality for human service agencies. Albuquerque, NM: IQAA Books.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Christine Paulsen and I own Concord Evaluation Group.  We evaluate media and technology-based initiatives. We regularly integrate usability testing with program evaluation to provide our clients with a more comprehensive picture of how their technologies and initiatives are performing.

As evaluators, you know that many of the programs we evaluate today are technology-based. It is not uncommon for initiatives to provide information to their target audiences via websites, while other interventions are delivered through software applications on mobile, handheld, or other devices. To properly evaluate such initiatives, the evaluator must consider the usability (user-friendliness and accessibility) of the technology components.

Usability refers to how easily users can learn and use technology. It stands to reason that if a drop-out prevention program relies mostly on messages delivered via its website to change student behaviors, that website better be usable! So, as evaluators, it’s crucial that we include usability assessment in our evaluations. Usability testing (UT) methods enable us not only to gather important formative data on technological tools, but also to explain outcomes and impact during summative evaluation.

Hot Tip: Keep in mind that problems addressed early are much less expensive to fix than problems found later.

The typical UT is conducted in a one-on-one manner, with a researcher guiding the session. Participants are provided with a list of tasks, which they complete while thinking aloud. The researcher records both subjective comments and objective data (errors, time on task). The test plan documents the methods and procedures, the metrics to be captured, the number and type of participants to be tested, and the scenarios to be used. In developing UT test plans, evaluators should work closely with the client or technology developer to create a list of the top tasks users typically undertake when using the technology.
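As a simple illustration of the objective metrics mentioned above, here is a minimal sketch, not tied to any particular UT tool, of how task results might be recorded and rolled up; the task and participant labels are hypothetical.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class TaskResult:
    participant_id: str
    task: str
    completed: bool
    errors: int
    seconds_on_task: float
    comments: list[str] = field(default_factory=list)  # think-aloud notes

# Hypothetical session records for one task.
results = [
    TaskResult("P01", "find the dropout-prevention resources page", True, 1, 74.0),
    TaskResult("P02", "find the dropout-prevention resources page", False, 3, 190.5),
]

# Simple roll-up for a formative report: success rate and mean time on task.
success_rate = mean(r.completed for r in results)
mean_time = mean(r.seconds_on_task for r in results)
print(f"success rate: {success_rate:.0%}, mean time on task: {mean_time:.0f}s")
```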

Hot Tip: Did you know that UT can be conducted in person or remotely (online)? While in-person testing offers a chance to observe non-verbal cues, remote testing is more affordable and offers the chance to observe test participants in a more “authentic” environment—anywhere in the world.

Hot Tip: During formative testing, 6-8 users per homogeneous subgroup will typically uncover most usability problems. The sample size will need to increase if inferential statistics are needed.

Rad Resource: For a great overview of usability testing, including templates and sample documents, visit Usability.gov.

Rad Resource: For a demonstration of how to integrate UT into your evaluation toolbox, please stop by to see my presentation at AEA 2011.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Christine? She’ll be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

My name is Philippe Buteau and I am an analyst at my own small co-owned firm, Buteau and Lindley. Back in May, Susan Kistler briefly wrote about Google Refine on aea365 and prompted me to take a look. Since then, I have used Refine in a number of ways and thought that I would submit a more extended post elaborating on this tool for data cleaning.

Rad Resource – Google Refine: First of all, what is it? Google Refine is “a [free] power tool for working with messy data, cleaning it up, transforming it from one format into another, extending it with web services, and linking it to databases.” To be more explicit, it allows you to import a data set and then to clean that data set in multiple ways. If you are a programmer, Google Refine allows you to do lots more, but I am limiting my focus here to the more generally applicable function of data cleaning.

Lessons Learned – Cleaning Data: Here are three examples of ways in which I used Refine for cleaning data, with a comparison to doing the same in Microsoft Excel (for readers who prefer scripting, a rough pandas sketch of the same operations follows the list):

  • Removing erroneous rows: I imported a financial data set that included multiple subtotal rows. All I wanted was the rows that had specific categories and transactions, so that I could work with these. The subtotal rows created problems when sorting or filtering. In Refine I chose “Text Filter” from the column heading, identified all of the rows with “Sub” in them, and then deleted these rows all at once. Verdict: This is similar to what could be done in Excel, but it was easily accomplished in Refine as well.
  • Combining multiple similar responses within a field: Once your data is imported, select Facet – Text Facet from the pull-down list for a particular column. A column representing all of the responses and how many times each response appears is generated. You then just select each one that you want to merge and give it a common name. Thus, I could combine “New York”, “NY”, “NY “, and “NNY” so that they were all “NY”. Alternatively, there is a smart clustering feature that tries to do this for you – guessing at which responses are similar and should be combined. You can then review its guesses and fix them as needed before the clustering is actually done. Verdict: Both the hand-combining and the clustering were accomplished much more easily than would be possible in Excel, and the clustering tool’s guesses were surprisingly accurate and a huge time saver.
  • Finding Outliers: From the column pull-down list of a numeric field, select Facet – Numeric Facet. This will immediately show you a small histogram with the distribution of all of the values in that column as well as the range of values in the column. Each side of the histogram has a handle that slides back and forth. Sliding the handles to display only the most extreme values at the left or right side of the histogram filters all of the rows in the dataset so you are looking only at the ones within the constricted range of outliers. Verdict: Much faster and more intuitive than the options for doing the same in Excel, and the combination of the graphical view and the underlying values provided a richer understanding.
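As promised above, here is a rough pandas sketch, not Google Refine itself, of the three operations on a hypothetical dataset. Refine's clustering and interactive histogram have no one-line equivalent, so step 2 uses an explicit mapping and step 3 a simple IQR rule.

```python
import pandas as pd

# Hypothetical financial extract with the kinds of problems described above.
df = pd.DataFrame({
    "category": ["Travel", "Subtotal", "Supplies", "Sub-total", "Travel"],
    "state":    ["New York", "NY", "NY ", "NNY", "NY"],
    "amount":   [120.0, 450.0, 75.0, 195.0, 9800.0],
})

# 1) Remove erroneous rows: drop anything whose category contains "Sub".
df = df[~df["category"].str.contains("Sub", case=False)]

# 2) Combine similar responses: strip whitespace, then map variants to one value.
df["state"] = df["state"].str.strip().replace({"New York": "NY", "NNY": "NY"})

# 3) Find outliers in a numeric field (here, values beyond 1.5 IQRs).
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
print(df[(df["amount"] < q1 - 1.5 * iqr) | (df["amount"] > q3 + 1.5 * iqr)])
```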

Lessons Learned – Undo: The history feature was a godsend. It allows you to undo mistakes and step backwards through your cleaning. I also found that it gave me the confidence to try out some things, knowing that I could undo them immediately.

Lesson Learned – Download to Desktop: Google Refine can be downloaded to your desktop so you don’t have to upload your data and you retain full control and ownership of it.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Dan Jorgensen and I currently serve as the evaluation and research coordinator for the State Personnel Development Grant at the Colorado Department of Education.  My primary responsibilities involve the evaluation of six state initiatives including Response to Intervention, Positive Behavioral Interventions and Supports, Autism/Significant Support Needs, Early Childhood, Communities of Practice, and Family-School Partnerships.  Needless to say, succeeding at this endeavor requires well-developed logistics regarding data management.  The purpose of this AEA365 contribution is to outline a simple process to facilitate the organization of new or existing data structures (see figure one).

Lessons Learned: Appropriately addressing data management issues leads to more refined evaluations and analytics. In effect, time will be spent performing evaluation responsibilities as opposed to constantly organizing, reformatting, and scrubbing the data.

  • Develop appropriate data tracking and monitoring tools. This includes, at a minimum, an event calendar with data collection and reporting deadlines; a task list to monitor day-to-day workflow; and a project notebook that clearly details one’s evaluation plan and all “processes” in case the proverbial “bus” finally hits you. If you’re managing multiple initiatives and a wide range of data collections, these tools are required.

  • Extant data collection structures must be accurately located, identified, and understood. Your data may be collected via surveys (online or otherwise), rubrics, state/federal data systems, and other sources. The collection dates, tools, stakeholders, and locations of these data must be reliably determined so that management structures can be established.
  • Determine how disparate data sources are maintained. Typically, data are maintained at a technical level that reflects the expertise of the “collector”. For example, field consultants responsible for data entry may only be comfortable using products such as MS Excel or MS Word. This leads to data structures being organized in a flat-file format and/or creates the need for duplicate data entry (e.g., entering Word documents into Excel). This is a problem because it often limits reporting options and, if not organized correctly, prevents the use of relational database structures.
  • Consolidate data into a single location and format. This allows for the gradual modification of data structures into more advanced formats and facilitates the building of reports. For example, my preference is to convert Excel files to an MS Access format, with forms created for data entry. In addition, the reporting capabilities of the MS Access database provide both immediate and continuous feedback concerning evaluation objectives. The next step might be to make existing databases web-based if necessary (e.g., SharePoint), depending on the availability of funds and the need for easily accessed data entry platforms. (For readers who script, a rough sketch of the consolidation step follows this list.)
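As referenced in the last bullet, here is a minimal sketch of the consolidation step using pandas and SQLite rather than the author's MS Access workflow; the file, table, and column names are hypothetical.

```python
import sqlite3
import pandas as pd

# Hypothetical flat files produced by field consultants in Excel.
sources = {
    "rti_fidelity": "rti_fidelity_2011.xlsx",
    "pbis_surveys": "pbis_surveys_2011.xlsx",
}

with sqlite3.connect("initiative_data.db") as conn:
    for table, path in sources.items():
        df = pd.read_excel(path)  # requires openpyxl for .xlsx files
        # Normalize column names so downstream reports can rely on them.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        df.to_sql(table, conn, if_exists="replace", index=False)

# Reports can then query one consolidated source, for example:
#   pd.read_sql("SELECT school, AVG(score) FROM rti_fidelity GROUP BY school", conn)
```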
