
Limitations on AI in the Legal Market

In the last 50 years, society has achieved a level of technological sophistication sufficient to set the stage for an explosion in AI development. As AI continues to evolve, it will become cheaper and more user-friendly. Cheaper, easier-to-use AI will give more firms an incentive to invest, and as more firms invest, AI use will become the norm.

In many ways, the rapid development of AI can look like an ominous cloud to those with careers in the legal market. For some, like paralegals and research assistants, AI could mean a career death sentence. But although AI is poised to alter the legal profession fundamentally, it also has critical shortcomings. Two core flaws, bias and a lack of explainability, should give those working in the legal market faith that they are not replaceable.

Impartiality and Bias

AI programs excel in the realm of fact. From chess-playing software to self-driving cars, AI has demonstrated an ability to perform factual tasks as well as, if not better than, humans. That is to say, in scenarios with clear-cut rights and wrongs, AI is on pace to outperform human capabilities. It is reasonable to conclude that AI is trending toward becoming a master of fact. But even if AI is appropriately limited to the realm of fact, its ability to analyze facts has serious deficiencies. Just as bias can infiltrate and cloud human judgment, bias can infiltrate and corrupt AI functionality. The two main ways bias can hinder AI programs are algorithmic bias and data bias.

First, algorithmic bias is the idea that the algorithms underlying an AI program are themselves biased. Algorithmic bias exists because the engineers who create a program have their own inherent biases, and those biases can perpetuate themselves in the programs they build. There is no one set path to building software; the development path and the assumptions made while designing an algorithm can have huge implications for how the software functions. Two software engineers who see the world in different ways could attempt to create the "same" program and end up with meaningfully different behavior.
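To make this concrete, consider a minimal, hypothetical sketch in Python. The feature names and weights below are invented purely for illustration, not drawn from any real system: two engineers build the "same" risk-scoring tool from the same inputs, but their differing design assumptions produce different results for the same person.

def risk_score_engineer_a(prior_arrests, missed_hearings):
    # Engineer A assumes arrest history is the dominant signal.
    return 0.8 * prior_arrests + 0.2 * missed_hearings

def risk_score_engineer_b(prior_arrests, missed_hearings):
    # Engineer B assumes missed hearings matter far more than arrest history.
    return 0.3 * prior_arrests + 0.7 * missed_hearings

# Same inputs, the "same" program, different outcomes for the same person.
print(risk_score_engineer_a(prior_arrests=4, missed_hearings=0))  # 3.2
print(risk_score_engineer_b(prior_arrests=4, missed_hearings=0))  # 1.2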

The trouble with algorithmic bias is that there is no obvious solution. Combating it would require software developers to solve a problem that has plagued society since the beginning of time: identifying and eliminating inherent bias. With no obvious way to eliminate implicit bias, focusing efforts on mitigating algorithmic bias is more realistic.

In addition to algorithmic bias, data bias is problematic for AI programs. Data bias, as the name suggests, is the idea that an imperfect or flawed data set can impede the functionality of AI. Data can become biased or “corrupt” for a multitude of reasons. Three common sources of unintentional data corruption include improper data collection, improper data translation (moving volumes of data from one platform to another), and improper use of metrics to define data.

Data bias is especially problematic for automated software. Automated AI works by identifying recurring patterns in a data set and then using those patterns to make future decisions. If the data set the software draws from is corrupt, the software will identify and perpetuate biased patterns, making the program useless.
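A minimal Python sketch, using invented data, illustrates the mechanism: a "screener" that simply copies the dominant pattern in its training data will reproduce whatever bias that data contains.

from collections import Counter

# Illustrative, invented records of past decisions keyed by a single attribute.
historical_decisions = [
    ("zip_code_A", "approved"), ("zip_code_A", "approved"),
    ("zip_code_B", "denied"), ("zip_code_B", "denied"),
    ("zip_code_B", "denied"), ("zip_code_B", "approved"),
]

def predict(zip_code):
    # Return the most common historical outcome for this attribute value.
    outcomes = Counter(o for z, o in historical_decisions if z == zip_code)
    return outcomes.most_common(1)[0][0]

print(predict("zip_code_B"))  # "denied" -- the learned pattern, not the merits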

To combat data corruption, software developers have proposed one way to eliminate data bias, called data sanitization. Data sanitization means removing all data that could perpetuate bias, such as data related to race, gender, and sexual orientation, to avoid a biased data set. Data sanitization creates problems of its own, however. AI programs need vast amounts of data to function, so depriving them of these data sources can also impair functionality by forcing the software to draw from too small a data set.
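As a rough sketch of what sanitization can look like in practice (the field names here are hypothetical), the process strips fields that directly encode protected characteristics before the data is used; note that proxy fields, such as a zip code, may still carry bias.

SENSITIVE_FIELDS = {"race", "gender", "sexual_orientation"}

def sanitize(record):
    # Drop fields that directly encode protected characteristics.
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

record = {"race": "X", "gender": "Y", "zip_code": "48109", "prior_cases": 2}
print(sanitize(record))  # {'zip_code': '48109', 'prior_cases': 2}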

Both algorithmic bias and data bias present considerable obstacles to AI use. There is no obvious way to remedy either issue, which could mean that lawyers' use of AI will be forever limited.

 

Explainability

The last phenomenon hampering AI's efficacy in the legal market is called explainability. Explainability is the principle that lawyers, clients, judges, etc., need to have at least a basic understanding of what AI programs do before using them to practice law. If AI is used to aid a decision-making process, those impacted by that decision have a right to a coherent explanation of what the AI program did to reach that decision.

Explainability is a significant issue. How can we expect lawyers and judges, who may not be tech-savvy enough to understand how AI programming works, to rely blindly on incredibly sophisticated AI to perform legal work? For a lawyer, it would seem foolish to let something you do not understand guide your decision-making. Blindly trusting such a tool sounds like a recipe for malpractice, especially because algorithmic and data bias could lead the AI to yield an incorrect result.

The explainability issue raises an obvious question: why don't developers simply explain what is happening in their software programs? There are two answers. First, in the U.S., calls for more transparency about software functionality are usually met with the observation that algorithms are proprietary and subject to protection under trade secret law.

Second, when it comes to autonomous software that teaches itself through trial and error, developers often will not explain what is happening within their software because they cannot. Once an autonomous program is up and running, it teaches itself, and its algorithms can evolve to a level of complexity that the developers themselves no longer understand. One cannot explain what one does not understand.
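A toy Python sketch (invented data, deliberately simplified) shows why: after a program adjusts itself by trial and error, all it can report about its decision rule is a set of numeric weights, not a human-readable reason.

import random

random.seed(0)
# Toy training data: pairs of input features and the "correct" answer.
data = [((1.0, 0.0), 1), ((0.0, 1.0), 0), ((1.0, 1.0), 1), ((0.0, 0.0), 0)]
weights = [random.random(), random.random()]

for _ in range(1000):  # trial-and-error self-adjustment
    (x1, x2), label = random.choice(data)
    prediction = 1 if x1 * weights[0] + x2 * weights[1] > 0.5 else 0
    error = label - prediction
    weights[0] += 0.1 * error * x1
    weights[1] += 0.1 * error * x2

# The only "explanation" the program can offer is two raw numbers.
print(weights)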

Altogether, explainability may be the single biggest issue blocking increased use of AI within the practice of law. Increased transparency about how AI makes decisions is a necessary step for such use, and greater transparency could also help identify potential biases within algorithms or data sets. Without a straightforward way to explain how AI works, it will be difficult to justify more widespread use within the legal profession.

*Landen Haney is an Associate Editor on the Michigan Technology Law Review.
