
Court ADR Instruction Manual

Monitoring and Evaluation of ADR Programs

Monitoring and evaluation are two different processes used to manage and improve ADR programs. They should be considered an integral part of a court ADR program, and not be thought of as tasks to be handled if there is time for them or to be tacked on once the program is fully in place. No matter what the reason for starting a court ADR program or the goals for that program, if a court refers cases to ADR through a structured procedure, it has an inherent responsibility to ensure the quality of service to litigants and to manage that program well. While a case is in the mediation program, all the responsibilities and duties of the justice system to litigants continue.



Monitoring vs. Evaluation
Monitoring is an ongoing process to keep tabs on how a program is performing. Monitoring does not look at whether the program is achieving its goals, but rather whether the program is showing any signs of problems. Because it is ongoing, it can be used to compare program functioning from year to year. An analogy is someone with hypertension monitoring their blood pressure so that the doctor can decide at their next visit whether any changes need to be made to their medication. Similarly, data about the program is collected and checked regularly to make sure there are no problems that need to be addressed. Still, monitoring is much more than tracking the number of mediations and the settlement rate. It also tracks other factors, such as participant satisfaction.

Monitoring can provide valuable data about the program - how many cases were referred to ADR? Who was doing the referring? What was the outcome of those referrals? - that gives information about the scope of the services being provided and a sense of the program's success. In short, it provides a quality control mechanism for a program that is under the aegis of the court. It also can give early warning that a program may be going off track. For example, if the percentage of cases that settle in mediation declines sharply, it may indicate that something is off balance and the reason should be investigated. In addition to providing data about the current functioning of the program, monitoring also can supply some of the data for evaluations of the program.

Evaluation is more like a physical. It takes data about a program from a specified period of time to get an overall sense of how well the program is working. More factors are taken into consideration than in monitoring, and determining progress toward goals may be the key rationale for conducting the evaluation. Evaluation therefore provides more in-depth information than monitoring does. Through evaluation, a court can determine if the program's goals are being met, whether it is providing a unique and valuable service, whether certain aspects of the program should be modified to increase its efficiency and effectiveness, and whether litigants are getting a high quality of justice.

Monitoring

A monitoring system should be part of the court ADR program design and be put in place when the program is established. However, if a program is already under way, a monitoring system can be created at any time during its life. In deciding what to monitor, program stakeholders - such as those who refer cases to the program, program administrators, lawyers whose cases could be referred to the program, and policymakers - should be consulted to find out what aspects of the program make it valuable to them and what information they would need in order to modify the program or justify it to others. Two main questions need to be answered before putting the system into place: what will be monitored, and how will that monitoring be done?

What to Monitor?
Since monitoring is about tracking the activity of a program as well as its quality, program stakeholders should consider two things: what they need to know to determine whether the program is functioning well, and what they need to know about activity in the program to determine whether resources are being used appropriately or to make necessary reports to the judiciary or elsewhere. The items to be monitored should then reflect those issues. For example, the state ADR office may require information about how many cases are mediated in a year and how often those cases settle. To know if the program is functioning well, it is helpful to look at factors such as how long it takes for a case to be mediated once it is referred, or whether parties consistently give poor marks to particular mediators. This provides feedback on whether cases are getting through the process quickly enough and whether any mediators need further training or observation.

Below is a list of factors a program might want to track; a minimal sketch of a per-case tracking record based on these factors follows the list. Programs, of course, have individual designs and needs that may point to other factors being tracked. For example, in a family group conferencing program for juvenile cases, the number of cases reaching agreement may not be relevant; instead, the program may want to track the number of juveniles who complete the contract agreed to at the conference.

Number of cases referred. This provides information on how often ADR is turned to for the resolution of cases, as well as a baseline for assessing the effectiveness of referral by comparing the number of referrals to the number of completed ADR processes. This data can be tracked through the entry of the order of referral to ADR.

Number of cases going through the ADR process. This is the raw data for the use of the program. From this, further information can be derived, such as the percentage of cases using the program, the cost per case, and so on. This data can be tracked through reports filed by the neutrals after the ADR process has been completed - or after the process has been terminated in some other way (such as a party not appearing).

Outcome of cases going through ADR. In mediation, settlement rates are an indicator of the amount of time and money being saved by the court through referral to mediation. In non-binding arbitration, award rejection rates and trial rates provide the same type of information. This data can be tracked through reports filed by the mediators, by the motions filed by the parties, or by the entry of court activity.

Type of cases or issues going through ADR. Much can be learned from knowing what kinds of cases are going through ADR. Keeping tabs on this through data supplied in orders of referral or in neutral reports can show which cases are more appropriate for ADR, more likely to settle or to be simplified, and more likely to leave the parties satisfied.

Which neutrals are conducting the ADR sessions/hearings. Again, this provides raw data. Are mediations being done by a select few or by a large number of mediators? Are their backgrounds or other factors similar? Are the awards of arbitrators with particular backgrounds being rejected less often than those of other arbitrators?

Who is doing the referring and what is the result of those referrals. This provides raw data to show which judges are referring cases and how effectively they are doing so. The data can show that everything is running smoothly. On the other hand, it could show that the judges would benefit from additional education on the ADR process.

Reasons cases referred do not end up going through the process. This provides an ongoing picture of the effectiveness of the referral system. Is referral leading nowhere? Are cases settling before reaching ADR? Are cases being referred that are inappropriate, such as referring an impaired party to mediation? This information can be collected from the neutral reports or other forms.

How well specific neutrals are doing. Monitoring the neutrals is essential to the quality of the program. One method for doing so is through participant evaluations. At least one study has shown that attorney assessments of mediator performance accurately reflect mediators' skill levels.

In addition, courts may want to consider monitoring:

Amount of time the ADR process is taking. If the parties are paying for mediation, this can give some information about the amount of resources they are spending on mediation. If staff mediators are conducting the mediation, this information can provide the courts with staff oversight/budgeting information.

Participant perception of their experience with ADR. It is the court's responsibility to ensure that its clients are receiving the best possible service. This can be done through a periodic evaluation of the ADR program (see below), or it can be done on an ongoing basis. It is recommended that the latter be done in order to see trends in the responses. However, snapshots at specified time intervals could also work well.

Percent of cases by type being resolved

Issues being mediated and resolved
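
To make the list above concrete, here is a minimal sketch, in Python, of what a per-case tracking record covering these factors might look like. The field names (case_type, referral_date, outcome, and so on) are only illustrative assumptions; a program would adapt them to its own rules, forms and case types.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class MediationCaseRecord:
        """One row of monitoring data per referred case (illustrative fields only)."""
        case_number: str                    # court case identifier
        case_type: str                      # e.g., "small claims", "family"
        referring_judge: str                # who ordered the referral
        referral_date: date                 # entry of the order of referral
        mediator: Optional[str]             # neutral assigned, if any
        mediation_date: Optional[date]      # None if the case never reached mediation
        outcome: Optional[str]              # e.g., "full agreement", "partial agreement", "no agreement"
        reason_not_mediated: Optional[str]  # e.g., "settled before session", "party no-show"
        participant_satisfaction: Optional[int]  # e.g., 1-5 from post-session surveys

    # Example: a referred case that settled in mediation
    record = MediationCaseRecord(
        case_number="2024-SC-0153",
        case_type="small claims",
        referring_judge="Judge A",
        referral_date=date(2024, 3, 1),
        mediator="Mediator B",
        mediation_date=date(2024, 4, 2),
        outcome="full agreement",
        reason_not_mediated=None,
        participant_satisfaction=4,
    )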

How Will Monitoring Be Done?
One of the most difficult issues facing programs in monitoring is maintaining the system over months and years. This is particularly true when program or court staff is stretched thin by a variety of responsibilities. A number of logistical questions should be worked out before the monitoring process begins.

Who will be in charge of the monitoring? Whose job is it to make sure that all the monitoring procedures are being followed and that reports are being generated and distributed? Without someone who is on top of this and who is able to enforce or get enforcement of the monitoring procedures, it is very likely that the monitoring process will fall by the wayside, the necessary forms will not be returned, the data will not be entered, and the reports on the program will not be generated.

What is the best way to gather the data? Who will need to fill out forms about the ADR process and what will be on the forms? Will the clerk's office gather all the data for the evaluation or is there an ADR office that will collect it? Or do the two offices need to coordinate data collection? The data gathering methods should emphasize efficiency as well as effectiveness - that is, what is the easiest way to get this information, and how reliable and complete will it be?

What will the process be to ensure all forms are returned by mediators and/or attorneys? The most common problem faced by programs or courts in collecting data is gathering forms from mediators and participants. Whatever process is chosen, it should be designed to increase the likelihood that forms will be returned. Some courts place all responsibility for forms in the hands of the mediators. Some impose sanctions on those who do not comply, while others follow up with mediators and attorneys to increase the return rate.

How will the data be tracked? Will a database be created, or a spreadsheet? If so, is there someone on staff who can do this? Will funding be necessary to pay someone to create the selected form of data input? The answer to these questions could influence what information will be gathered.

Who will enter the data and create reports? Monitoring requires significant staff time to enter the data into a computer and to create periodic reports. Strategizing for this is essential when deciding how and what to monitor.

Who will receive the monitoring reports? Knowing this will help to determine the data that will be collected and how the report will be written. If the judges are going to receive the reports, what information should they receive? What information will administrators need?

Use the Monitoring Information
If a monitoring system is going to be put in place, and if effort is put into tracking the program, the results of the monitoring must be used. To gather data and then store it in a drawer is a waste of resources. So, be sure to create that database or spreadsheet, spend the time to enter the data into it, and write and distribute that report regarding the state of the program at regular intervals. Then informed decisions can be made to improve, expand, or otherwise change the program so that those who use it get the highest quality services possible.
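
For example, if the monitoring data are kept in a spreadsheet exported as a CSV file, a short script can pull together the basic figures for a periodic report. The sketch below is only illustrative: the column names (mediation_date, outcome) and the definition of settlement are assumptions that would need to be matched to the program's own forms.

    import csv
    from collections import Counter

    def monitoring_report(csv_path: str) -> dict:
        """Summarize referrals, completed mediations and settlement rate from a CSV export."""
        referred = 0
        mediated = 0
        outcomes = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                referred += 1                     # one row per case referred to ADR
                if row.get("mediation_date"):     # blank means the case never reached mediation
                    mediated += 1
                    outcomes[row.get("outcome", "unknown")] += 1
        settled = outcomes["full agreement"] + outcomes["partial agreement"]
        return {
            "cases referred": referred,
            "cases mediated": mediated,
            "referral-to-mediation rate": round(mediated / referred, 2) if referred else None,
            "settlement rate": round(settled / mediated, 2) if mediated else None,
            "outcomes": dict(outcomes),
        }

    # Example use: print(monitoring_report("mediation_monitoring_2024Q1.csv"))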

The frequency of reports depends upon the size of the program and who is receiving the report. A program that sees a lot of cases would want to report out at shorter intervals than one that sees relatively few. For instance, a large program may report to the presiding and chief judges on a monthly basis, while a small program may not have enough cases to report so frequently, doing so on a quarterly basis instead. Reports may go quarterly or annually to stakeholders and the committee in charge of the program.

Accountability also calls for providing program monitoring reports to the public. This can be done through the court's web site or the local bar newspaper, for example.


Evaluation

Evaluation provides an in-depth analysis of the strengths and weaknesses of a program, its capacity to achieve the goals assigned to it, and its impact on those who use it. Evaluation may be done on the process the program follows, the outcomes of the program, and its extended impact.

EVALUATION PLAN

The evaluation plan contains the why, what and how of the evaluation itself. It hinges on the goal (the why), which will dictate much of what to evaluate and how to do it. The goal leads to the question of what will be evaluated. The two together - the goal and what will be evaluated - lead to the question of how the evaluation will be carried out.

Initial considerations
Who Should Be Involved in Deciding What to Evaluate and How to Do It?
Many people have knowledge that would be helpful in planning evaluation systems, as well as specific interests in the services that are delivered. For this reason, it makes the most sense to involve representatives of a number of stakeholder groups in the decision-making process.

Staff - If the program is already in place, administrative staff and staff mediators have expertise about the program that can contribute to the evaluation design. Getting their input on evaluation will also help to get their cooperation with the evaluation. If the program is in the design stage, getting input from court staff will help to design the evaluation in a way that is more likely to make it flow smoothly with existing court systems and with the new court ADR system. Another benefit to staff input is that the process of deciding what to evaluate may get them thinking about the program in a different way. When evaluation procedures are routinized and understood by the staff, evaluation will have more impact.

Policymakers/decision-makers - It is best to get their input on what is important to them. If the findings and recommendations from the evaluation do not touch upon what they find to be important, they will not be inclined to use the findings and recommendations from the evaluation to improve the program or to garner additional support for it.

Users of the program, especially lawyers if they participate - They also have their own perspective on what the program should do. Findings and recommendations from the evaluation should take into account what is important to them because they are the ones who will be most affected by any changes to the program.

Those who will be cooperating with/assisting the evaluation - If there are people outside the program who will be involved to any great degree in the evaluation, such as court clerks, they should be included in the evaluation design process because 1) they know what the challenges are to getting the data and 2) they need to buy into the process if they are going to be asked to add to their workload.

Including so many interested groups does have its drawbacks, particularly that accommodating so many ideas of what is important can make the evaluation too complex, time-consuming and expensive to be feasible. Gaining consensus on what should be included in an evaluation system will be one of the bigger challenges of the development process.

What are the constraints on the evaluation?
Knowing what constraints exist in doing the evaluation will help to determine what is possible, and most likely will maintain focus on the most important issues. It will also affect the questions asked, the sample size and other aspects of design.

Money and Time
The two greatest constraints on evaluation design are money and time. Most courts have limited resources to spend on evaluating their programs. This limits the complexity and extent of any evaluation. Time is not just the amount of time that staff may be required to spend on the evaluation, but timeliness as well. It is much better to do a limited evaluation and communicate findings and recommendations when they are still timely enough to be useful than to have a large, complex evaluation that is published too late to be of any use. However, if the evaluation needs to be more long-term, interim reports can be provided.

Staff
No matter who does the evaluation, some staff time will be needed to conduct it, whether for gathering data, entering data, being on hand to explain processes, etc. If the program is, like many, stretched in terms of staff time, how and what the staff will be doing in support of the evaluation is an important question to answer before deciding how to proceed with the evaluation.

Another staff issue is staff capability. Is staff capable of conducting an evaluation without outside help? If staff is not capable, the evaluation plan will need to take this into account by simplifying the evaluation, by working with an outside evaluator, or by incorporating a staff training component in which the staff learn how to conduct a competent, high-quality evaluation.

Data
Some evaluation questions might require data that is either difficult or impossible to gather. This is particularly true if the question requires baseline data for comparative purposes. Baseline data is case data from before the start of the mediation program, such as how long it was taking cases to get to resolution, how many motions were being filed per case, the outcomes of cases, and so on.

Political
There are times in which certain evaluation questions, or the findings from them, may invite a backlash or conflict from various factions. Understanding what these could be ahead of time will help to approach such questions sensitively.

All of these constraints have an effect on the evaluation goals, design and process. Decision-making regarding the evaluation should take existing constraints into consideration. For example, the overarching evaluation questions will require certain data. Each of the above constraints may make it unfeasible to collect the data to answer those questions, which will then lead to changing or eliminating the evaluation questions. In turn, that will lead back to the question of how to accomplish the goals of the evaluation.

Evaluation Goals
The evaluation goal is always the first consideration in evaluation. Without knowing what the goal of the evaluation is, the evaluation will lack focus and meaning. Focus is particularly important when a program or court has few resources at hand. The following are some examples. More than one may be appropriate for a particular evaluation.

Assess whether program goals are being met
Ideally, a program is created with a set of goals it should accomplish, such as reducing recidivism for juvenile cases, or reducing court backlog. The court cannot know if the goals are being met without conducting an evaluation.

Find out if the program is functioning as it is meant to function
This is about the process more than the outcomes from the program. Are the policies and the procedures for the program being followed? For example, if the rules say that mediation is to take place within 60 days so that mediation does not end up lengthening time to case closure, is this happening? If not, the next question would be, why not? This is helpful in two ways: 1) to find out if any policies or procedures need modification, and 2) to give insight into the findings about whether goals are being achieved or why certain outcomes are less positive than they could be.

Find out if there are any areas that need improvement
This important goal seeks to correct problems that affect the program's effectiveness and efficiency. Say, for example, that survey responses show that a large percentage of participants feel they are being coerced in mediation. Given that one of the pillars on which mediation is built is self-determination, this should raise a red flag that needs to be addressed.

Provide data to funders (including legislatures and supreme courts) about how well the program is serving its clients
Funders want to know if their money is being well spent. They want to know that the program they are funding is functioning well and is doing what it purports to do. For this reason, funders often require some form of evaluation of the program when they grant money. Similarly, programs would do well to prove to all stakeholders that the program effectively addresses a need or enhances the services that are provided.

Help in decision-making regarding the program or others - expand program, add new programs elsewhere, end program, change parameters of program (moving from voluntary to mandatory, start charging for services, etc.)
Even when a program is running well, there may be concerns that the amount of money being spent is not worth it. An evaluation outlining all the benefits the program provides will help address these concerns. It should be very clear, however, that an evaluation should not be conducted simply to prove a positive. Evaluations should be done objectively, with full openness to any results, good or bad, that may be found. If not done objectively, the evaluation results will be seen as suspect.

Increase understanding of some aspect of mediation on a more global level
This is the least likely reason to conduct an evaluation; however, there may be a specific question about the effectiveness of mediation in a given setting or a particular impact of mediation on participants, the court, or the community that has not been explored previously. Examining such items could serve to help other courts and other programs as well as the one being evaluated.

Use goals to determine what the evaluation will focus on
The goals of the evaluation essentially dictate the general focus of the evaluation. Depending on the goals, the overall focus will be on goal achievement, process, program outcomes, or a combination of the three. Below, the three most likely evaluation goals are connected with the appropriate focus.

Goal: Determine if program goals are being achieved
If the goal of the evaluation is to determine whether program goals are being met, then, naturally, the focus should be on goal achievement. Such an evaluation should examine not only the progress toward the program's goals, but also the possible reasons those goals are not being met. These reasons may be procedural (e.g., mediation not taking place within the specified time frame), material (e.g., not enough funding to pay enough mediators), or conceptual (e.g., the goals are impracticable or unattainable).

A goal-based evaluation can lead to a pat on the back, a tweaking of the program, or a complete rethinking of the rationale for the program. It can lead to fundamental questions about the goals of the program. Since it looks at goal achievement, it is best done after the program has had time to mature and to achieve those goals.

Goal: Improve program functioning or see if the program is working as planned
These evaluation goals point to a focus on process. This type of evaluation examines the process of getting cases to and through mediation, as well as the quality of the mediation, whether the process is working as planned, and, if not, why not. It is often overlooked in favor of examining outcomes, but it is essential for program improvement. It can also help to explain the results of goal-based and outcome evaluations.

Evaluations that focus on process ask questions about exactly how the program's services are provided. A process evaluation may be done at the beginning of the program, when procedures have first been put into place, and/or after the program has been in place for a while and, either through practice or purposeful changes, the process has been modified to a great extent.

Goal: Provide data to funders or help in decision-making regarding the program
These evaluation goals are best suited to a focus on outcomes. Outcome evaluation looks at the benefits that the program provides. In ADR, this may be settlement, sense of procedural justice for those who participate, cost savings, or reduced conflict between parents, among others.

Goal-based and outcome evaluation may overlap if the goals are particular levels of a desired outcome. For example, a goal for the program may be to reduce recidivism by 10%. Reduced recidivism is a benefit of the program as well as a goal. Process evaluation may also overlap if it is necessary to discover why particular goals are not being met.

For further discussion on types of evaluation, see Basic Guide to Program Evaluation at http://www.managementhelp.org/evaluatn/fnl_eval.htm.

Use evaluation goals to determine evaluation questions
The next critical step is to frame the questions that are to be answered through the evaluation. In this context, "evaluation questions" does not refer to the specific questions that appear on evaluation forms; rather, they articulate the goals of the evaluation. For example, if a goal is to reduce judicial workload, the associated question may be, "What is the average number of motions heard per mediated case as compared to cases not mediated?"

These questions should be assessed in terms of their relevance to the program, the stakeholders, and the goals of the evaluation. As the questions are being constructed, think about the following:

  • What are the goals of the evaluation? What questions need to be asked in order to meet those goals?
  • What information is needed by those who will use the evaluation?
  • What information will be useful to other stakeholders, such as participants and mediators?

As noted above, stakeholders should be a part of the process of developing these questions.

Use evaluation questions to determine evaluation methods
Once it is clear what questions are to be answered through the evaluation, the next question is what methods will be used to answer them. Methods include questionnaires, interviews, observations, and collection of case data, among others. If, for example, one of the evaluation questions is whether the mediators are being neutral, several methods are possible, including participant questionnaires assessing mediator neutrality, more in-depth interviews of the participants, direct observation of mediations, and video recordings of mediations. Each has its strengths and weaknesses.

When deciding what methods to use, think about the constraints that were identified earlier. Some methods are much more time consuming than others. Interviews and observations take a lot of time at all stages of the evaluation: data collection, data entry, and analysis.

Chapter Six in Taking Stock: A Practical Guide to Evaluating Your Own Programs provides a good overview of the possible evaluation methods.

Decide who will do the evaluation
There are three possibilities for who will conduct the evaluation: staff may undertake the entire evaluation, staff may work under the supervision of an independent evaluator, or an independent evaluator may conduct the evaluation on behalf of the court. Time and money constraints have a large effect on this decision. If staff is stretched thin, they may have no time to conduct an evaluation. Additionally, they may not feel competent to tackle one. On the other hand, if the court has no money to pay for an outside consultant, it may be incumbent on the staff to conduct the evaluation. With some creativity, however, the court may be able to deal with these conflicting constraints. Contracting with a graduate student or receiving funding through grants may offset the cost of a consultant. Shifting staff responsibilities may help to free up staff time. Training may increase staff competency.

Outside consultant
Hiring an outside consultant has two benefits: expertise in evaluation and objectivity. Both of these will give greater credibility to the evaluation and its findings. Expertise in evaluation design and statistical analysis helps to ensure that the findings from the evaluation truly reflect the state of the program. The fact that the evaluation is conducted by an independent evaluator helps to ensure that the findings are not biased. However, evaluators have varying degrees of capability, so selecting the right one is important. Consider the evaluator's past work, particularly whether she has prior experience evaluating ADR programs or can demonstrate an understanding of such programs. Also consider the evaluator's reputation and background.

Once an evaluator is chosen, she should be up front about what is possible given the constraints identified, as well as any weaknesses the evaluation may have because of those constraints. The evaluator should also maintain communication with the court and discuss any problems with the evaluation as they arise. Be aware, too, that evaluator ethics require that the evaluator not bend the findings to fit a particular agenda or to show the program in a particular light.

The W.K. Kellogg Foundation's Evaluation Toolkit has a chapter devoted to the topic of hiring and managing an evaluator.

Court staff
Despite what was stated above, self-conducted evaluations should not be ruled out as necessarily lacking credibility. If the evaluation is done with proper methods, documentation of results, and sound analysis, the evaluation can be considered to be credible. The strengths that self-conducted evaluations bring are an in-depth knowledge of the program, the court, and the constraints that exist. If staff is going to conduct the evaluation, they should have a good foundation in evaluation design, methods, and analysis, and a solid understanding of basic statistics. If the evaluation is comparative or involves complex groupings, the use of statistical software is a must. The most common are SPSS and SAS.

Taking Stock: A Practical Guide to Evaluating Your Own Programs is a nice place to start learning more in depth about how to conduct an evaluation.

Evaluation design
A number of questions arise in relation to the design of the evaluation. These include the length of the evaluation period, which cases will be included in the evaluation, whether the study is comparative or not, and, if it is, how cases will be compared. This section will touch briefly upon each.

Length of evaluation period
The time during which data is collected should be as short as possible in order to place the smallest possible burden on the court and to provide timely results. However, it needs to be long enough to gather the data necessary for analysis. For programs that have few cases going to mediation, the evaluation period will necessarily be longer than for programs with many cases. Comparative designs or those examining differences between particular cases will require more data, and thus more time, than simpler designs.
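
One way to estimate how long the evaluation period needs to run is a standard sample-size (power) calculation: decide how large a difference the evaluation should be able to detect, work out how many cases per group that requires, and then estimate how long it will take to accumulate them. The sketch below uses the statsmodels library; the effect size and other values are purely illustrative assumptions.

    # A rough sample-size estimate for a two-group comparison (illustrative values only).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    cases_per_group = analysis.solve_power(
        effect_size=0.3,   # assumed standardized difference between mediated and non-mediated cases
        alpha=0.05,        # significance level
        power=0.8,         # probability of detecting the difference if it really exists
    )
    print(f"About {cases_per_group:.0f} cases per group would be needed.")
    # With, say, 20 mediations per month, the length of the evaluation period follows from this count.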

Which cases to include
There are two issues here: whether a random sample should be pulled, and whether certain cases should be excluded. For the former, the answer depends on the total number of cases available. For example, if mediated cases are being compared to non-mediated cases, the cases that were not mediated may far outnumber the cases that were. This may point to using all mediated cases and a random sample of non-mediated cases.

The second issue pertains more to the possibility that some cases may not represent the true picture of the program's goals. For example, if the evaluation is comparing time to case closure for mediated and non-mediated cases and cases sent to mediation do not include those involving complex litigation, then the sample of cases that did not go through mediation should exclude complex litigation cases.
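
Pulling the kind of random sample described in the first issue above is straightforward with a statistical package or even Python's standard library. In the sketch below, the list of non-mediated case numbers and the sample size are hypothetical placeholders.

    import random

    # Hypothetical case numbers for non-mediated cases (the full list would come from the clerk's records)
    non_mediated_case_numbers = ["2024-CV-0001", "2024-CV-0002", "2024-CV-0003", "2024-CV-0004", "2024-CV-0005"]

    random.seed(42)  # fixing the seed lets the sample be reproduced and documented
    sample = random.sample(non_mediated_case_numbers, k=3)  # k would be set by the evaluation design
    print(sample)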

To compare or not to compare. . .
One basic question is whether an evaluation will be comparative in nature. The answer to this question is strongly connected with the goals of the program. If the goal for the program is to see a change in something, the evaluation has to compare that factor at points in time to see if the factor has changed. Comparative designs are necessarily more complicated and costly than designs without a comparative component because more cases are needed for the comparison and because getting the correct comparison groups can be challenging. However, if the goals for a program are comparative, such as reducing time to permanency, increasing compliance with parenting agreements, or reducing court time spent on cases, the only way to figure out if the goals are being met is to do a comparative evaluation.

If, on the other hand, the goals for a program are not comparative in nature, there need not be a comparative component. Such goals may be to provide services that the participants find to be fair and satisfying, to keep compliance at a particular level, or to reduce the number of cases going to trial by a certain percent.

Comparative designs
A number of designs can be used to compare two groups of cases or people. The most important thing is that the comparison between the groups be statistically valid. The most valid and reliable comparison is between two randomly assigned groups. Random assignment helps assure that the cases in each group are similar. However, random assignment is generally not possible in a court setting. Fortunately, there are a number of other quasi-experimental designs that can also yield good results. A good discussion of these designs can be found at http://www.socialresearchmethods.net/kb/destypes.php.
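
As a simple illustration of comparing two groups that were not randomly assigned, the sketch below compares time to disposition for mediated and non-mediated cases and tests whether the difference is statistically significant. The figures are invented, and the choice of test (a Mann-Whitney U test from the scipy library, which does not assume the data follow a normal distribution) is only one reasonable option; a real quasi-experimental design would also control for case type and other differences between the groups.

    from statistics import median
    from scipy.stats import mannwhitneyu

    # Hypothetical days from filing to disposition for each group
    mediated_days = [112, 98, 140, 87, 123, 105, 131, 96]
    non_mediated_days = [170, 150, 210, 135, 188, 160, 145, 199]

    stat, p_value = mannwhitneyu(mediated_days, non_mediated_days, alternative="two-sided")
    print(f"Median time to disposition, mediated: {median(mediated_days)} days")
    print(f"Median time to disposition, non-mediated: {median(non_mediated_days)} days")
    print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
    # A small p-value (for example, below 0.05) suggests the difference is unlikely to be due to chance alone.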

Evaluation Design Tools
Briefly, a number of tools are available to assist in determining what and how to evaluate a mediation program. Among them are:

Evaluability assessment
This tool helps to determine if evaluation will be feasible and if it will yield actionable results. It looks at the readiness of a program to be evaluated and the probability of the evaluation leading to improvements in the program. For more information, check out http://www.jrsa.org/pubs/juv-justice/evaluability-assessment.pdf.

Logic models
These are models of how the program works. They help to understand the program and to figure out what questions should be asked in order to determine if it is functioning properly. "Everything You Wanted to Know about Logic Models but Were Afraid to Ask" (.pdf) provides a good summary of logic models. Further information about how to create one can be found in the paper, "Making Logic Models More Systemic: An Activity" (.pdf).

Prior studies
Prior studies can be used to help decide on appropriate questions, to see what evaluation designs have been used in what circumstances, and to compare the findings from different programs. The most important thing to consider when using prior studies is their quality. A good place to start finding high-quality evaluations is the Court ADR Effectiveness page in the Court ADR Research Library.

EVALUATION PROCESS

Before starting the evaluation, it is important to get everyone on board who will be affected by the process. All staff should be informed of the evaluation if they have not been already. Their role in it should be explained, and any concerns they have about the evaluation should be discussed. Stakeholders, such as judges, neutrals and attorneys, who may be called upon to cooperate with the evaluation through surveys or interviews should be made aware of the evaluation and their role as well, and their cooperation should be requested by the chief or presiding judge. Others who may need to be informed are court clerks, bailiffs, and other administrative staff of the court.

The Evaluation
Pilot Period
Always begin with a pilot period. During this time, test all survey instruments to make sure that they are reliable and valid. This is usually done by having participants complete the survey and then discussing with them their understanding of the questions and their reasons for the answers they gave. Fix any questions that are not understood, cannot be clearly answered, or are understood differently by the participants.

At this time, also do a data collection run-through. Try to collect the data that will be needed for the evaluation. See if the data can indeed be collected and, if so, whether it can feasibly be used. Create a database for data entry if necessary, fix any bugs, and make sure that data can be entered easily and accurately.
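
If no database exists yet, something as small as a SQLite file is often enough for the pilot run-through. The sketch below, in Python, creates a single table for the pilot data; the table and column names are hypothetical and would be adjusted to the instruments actually used.

    import sqlite3

    conn = sqlite3.connect("adr_evaluation_pilot.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS cases (
            case_number TEXT PRIMARY KEY,
            case_type TEXT,
            referral_date TEXT,          -- ISO-format dates keep sorting simple
            mediation_date TEXT,
            outcome TEXT,
            days_to_disposition INTEGER,
            satisfaction_score INTEGER   -- from the participant survey, e.g., 1-5
        )
    """)
    conn.execute(
        "INSERT OR REPLACE INTO cases VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("2024-SC-0153", "small claims", "2024-03-01", "2024-04-02", "full agreement", 62, 4),
    )
    conn.commit()
    conn.close()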

Study Period
During the study period it is important to constantly monitor the data collection and entry processes. This keeps everyone on track, makes sure that the data is being collected on a consistent basis, and ensures that any problems in data collection or entry are discovered early on and fixed. If an independent evaluator has been hired, this person should be supervising the entire process.

Communication is also important. During the course of the study there should be ongoing communication between the evaluator (independent or staff) and the program and court about how things are going with the evaluation. If the evaluation is being done by someone outside the program, this communication is essential.

Analysis
If program staff is conducting the evaluation, the program will be responsible for analyzing the data. This step is as important to the quality of the evaluation as the data collection is. One of the easiest traps to fall into is to assume that positive changes are caused by the program. There could be a number of variables that affect differences in outcomes between individual cases. External factors, such as a change in rule, a difference in how judges manage cases, or even interventions by other programs, can affect the outcomes of cases or influence program impact.

Reporting
Make evaluation credible
Make sure that the evaluation is not an advocacy piece. Objectivity in presentation is essential for the evaluation findings to be accepted. Also present a thorough methodology and all the data that led to conclusions. If the evaluation was comparative, present data that shows whether any differences were significant.

Make the report accessible and actionable
The evaluation does not mean anything unless it is read and any recommendations are acted upon. This requires wide dissemination of findings and recommendations in many forms. Do not rely on a long written report alone. Busy professionals do not tend to have the time to wade through pages and pages of data. Some ideas for increasing accessibility are:

  • Write an executive summary. The short version of the report is much more likely to be read and can be disseminated to a wide audience at less expense than the full-length report.
  • Think about presenting information orally as well as in a written report. Cater the report presentation to the information flow of the court and program. If information is often exchanged at monthly meetings, present a report at that time. Visuals, including charts and graphs, help as well. Discussion can lead to greater understanding and use of the findings and recommendations.
  • Provide specific recommendations about how to change the program. Specific recommendations are more likely to be acted upon than vague ones. Include them in the executive summary.

Think of the audience beyond the court, legislature, or whoever asked for the evaluation
Other courts may use the evaluation as a way to judge whether to create a program or to justify a similar program in their own jurisdiction. Other evaluators may use it to inform them on how the program they are evaluating compares to other programs. Practitioners may use it to see if their manner of practice is effective. Because others will be using the evaluation, provide detailed information on how the program is set up to function, and how it really functions. Provide detailed information about how the evaluation was conducted and any questions that could not be answered.

Use the Evaluation
An evaluation is basically a waste of time and money if it is not used to make improvements, educate stakeholders, or help other programs. Don't be afraid of the findings and recommendations, even if the findings are negative. Negative findings help a program to grow and improve - if they are used.

Once the evaluation is completed, sit down and figure out how to use the findings and recommendations. Agree upon which recommendations are necessary and feasible to act upon and implement them. If the evaluation points to needed policy changes, work with policy makers to make those changes.

Conclusion
An evaluation that is done well will give invaluable information to the court that can help to improve the quality of the program, gain further funding of the program, assure stakeholders of the program's worth, and, most importantly, enhance the experience of the program participants. It can also add to the overall knowledge about the impact of court ADR on the courts, litigants, and even the community. In the end, well-conducted evaluations are essential to the health of the ADR field.

Related Links
What's Your Survey Telling You
