
Don’t fear robojustice. Algorithms could help more people access legal advice

The Conversation

This article was first posted on The Conversation.

Algorithms have a role to play in supporting, but not replacing, lawyers.

Around 15 years ago, my team and I created an automated tool that helped determine eligibility for legal aid. Known as GetAid, we built it for Victoria Legal Aid (VLA), which helps people with legal problems to find representation. At that time, the task of determining who could access its services chewed up a significant amount of VLA’s operating budget.

After passing a financial test, applicants also needed to pass a merit test: would their case have a reasonable chance of being accepted by a court? GetAid provided advice about both stages using decision trees and machine learning.
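A two-stage check like this maps naturally onto a simple decision tree. The sketch below is illustrative only — the thresholds and field names are invented for the example and are not VLA's actual eligibility rules:

```python
# Hypothetical two-stage eligibility check in the spirit of GetAid.
# Thresholds are invented for illustration, not VLA's real criteria.

def financial_test(weekly_income, assets):
    """Stage 1: a hypothetical means test."""
    return weekly_income <= 360 and assets <= 1095

def merit_test(prospects_of_success, benefits_applicant):
    """Stage 2: would the case have a reasonable chance in court?"""
    return prospects_of_success >= 0.5 and benefits_applicant

def get_aid_eligible(applicant):
    # Applicants must pass the financial test before merit is assessed.
    if not financial_test(applicant["weekly_income"], applicant["assets"]):
        return "ineligible: failed financial test"
    if not merit_test(applicant["prospects"], applicant["benefit"]):
        return "ineligible: failed merit test"
    return "eligible"

print(get_aid_eligible({"weekly_income": 300, "assets": 500,
                        "prospects": 0.7, "benefit": True}))
```

In the real system, machine learning helped with the merit stage, where the answer depends on patterns in past decisions rather than fixed thresholds.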

It never came online for applicants. But all these years later, the idea of using tools such as GetAid in the legal system is being taken seriously. Humans now feel far more comfortable using software to assist with, and even make, decisions. There are two major reasons for this change:

  • Efficiency: the legal community has moved away from charging clients in six-minute blocks and has instead become concerned with providing economical advice.
  • Acceptance of the internet: legal professionals finally acknowledge that the internet can be a safe way of conducting transactions and can be used to provide important advice and to collect data.

This is a good development. Intelligent decision support systems can help streamline the legal system and provide useful advice to those who cannot afford professional assistance.

Intelligent legal decision support systems

While robots are unlikely to replace judges, automated tools are being developed to support legal decision making. In fact, they could help support access to justice in areas such as divorce, owners' corporation disputes and small value contracts.

In cases where litigants cannot afford the assistance of lawyers or choose to appear in court unrepresented, systems have been developed that can advise about the potential outcome of their dispute. This helps them have reasonable expectations and make acceptable arguments.

Our Split-Up software, for example, helps users understand how Australian Family Court judges distribute marital property after a divorce.

The innovative part of the process is not the computer algorithm, but dividing the process into 94 arguments, including issues such as the contributions of the wife relative to the husband; the future needs of the wife relative to the husband; and the marriage’s level of wealth.

Using a form of statistical machine learning known as a neural network, it examines the strength of the weighting factors -- contributions, needs and level of wealth -- to determine an answer about the possible percentage split.
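In miniature, that idea looks like the sketch below: a tiny hand-written network that combines the three factor scores into a predicted percentage. The weights here are invented for illustration — Split-Up's actual weights were learned from real Family Court outcomes:

```python
import math

# Illustrative only: a toy neural network in the spirit of Split-Up.
# Weights are invented, not learned from Family Court data.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_wife_share(contributions, needs, wealth_level):
    """Inputs in [-1, 1], each factor scored for the wife relative
    to the husband. Returns a predicted percentage of the marital
    property awarded to the wife."""
    # Hidden layer: one unit per factor, with hand-picked weights.
    h = [sigmoid(1.5 * contributions),
         sigmoid(1.2 * needs),
         sigmoid(-0.5 * wealth_level)]
    # Output layer: squash the combined activations into (0, 1).
    share = sigmoid(2.0 * (h[0] + h[1] + h[2]) - 3.0)
    return round(100 * share, 1)

# When all factors are balanced, the toy model predicts an even split.
print(predict_wife_share(0, 0, 0))
```

The design choice worth noting is that the network never sees raw case files: humans decompose the case into the 94 argument scores first, and the network only learns how judges weigh those scores against each other.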

Other platforms follow a similar model. Developed by the Dutch Legal Aid Board, the Rechtwijzer dispute resolution platform allows people who are separating to answer questions that ultimately guide them to information relevant to their family situation.

Another major use of intelligent online dispute resolution is British Columbia's Civil Resolution Tribunal. It helps people affordably resolve small claims disputes of C$5,000 and under, as well as strata property conflicts.

Its initiators say that one of the common misconceptions about the system is that it offers a form of “robojustice” -- a future where “disputes are decided by algorithm.”

Instead, they argue the Civil Resolution Tribunal is human-driven:

"From the experts who share their knowledge through the Solution Explorer, to the dispute resolution professionals serving as facilitators and adjudicators, the CRT rests on human knowledge, skills and judgement."

Concerns about the use of robojustice

Twenty years after we first began constructing intelligent legal decision support systems, the underlying algorithms are not much smarter, but developments in computer hardware mean machines can now search much larger databases far more quickly.

Critics are concerned that the use of machine learning in the legal system will worsen biases against minorities, or deepen the divide between those who can afford quality legal assistance and those who cannot.

There is no doubt that algorithms will continue to reproduce existing biases against vulnerable groups, but this is because the algorithms largely copy and amplify the decision-making trends already embedded in the legal system.

In reality, there is already a class divide in legal access -- those who can afford high quality legal professionals will always have an advantage. The development of intelligent support systems can partially redress this power imbalance by providing users with important legal advice that was previously unavailable to them.

There will always be a need for judges with advanced legal expertise to deal with situations that fall outside the norm. Artificial intelligence relies upon learning from prior experience and outcomes, and should not be used to make decisions about the facts of a case.

Ultimately, to pursue “real justice,” we need to change the law. In the meantime, robots can help with the smaller stuff.

About the Author

John Zeleznikow is a professor of information systems and a research associate for the Institute of Sport, Exercise and Active Living at Victoria University.
