
AI v. Lawyers: Will AI Take My Legal Job?

Artificial Intelligence (AI) is changing the global workforce, generating fears that it will put masses of people out of work. Indeed, some job loss is likely as computers, intelligent machines, and robots take over tasks currently done by humans. For example, as passenger cars and semi-trailer trucks become able to drive themselves, far fewer drivers will be needed. Elon Musk, the co-founder and CEO of Tesla and SpaceX, predicts that intelligent machines and robots will eventually replace so many jobs that “people will have less work to do and ultimately will be sustained by payments from the government.” The World Economic Forum concluded in a recent report that “a new generation of smart machines, fueled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs.”


All of this raises the question of whether lawyers, and even judges, will eventually be replaced with algorithms. As one observer noted, “The law is in many ways particularly conducive to the application of AI and machine learning.” Legal rulings in a common law system, for example, involve deriving axioms from precedent, applying those axioms to the particular facts at hand, and reaching conclusions accordingly. AI systems work in a similar way: they learn to make decisions from training data and apply the inferred rules to new situations. A growing number of companies are building machine learning models that weigh a host of factors, from the corpus of relevant precedent and the venue to a case’s particular fact pattern, to predict the outcomes of pending cases.
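To make the idea concrete, the sketch below shows roughly what such an outcome-prediction model looks like under the hood. It is a minimal illustration, not any particular vendor’s method: the features (venue, claim type, a precedent-support score) and the case data are entirely hypothetical, and commercial systems train on far richer representations of precedent and fact patterns.

```python
# Toy case-outcome predictor: learn from labeled past cases, score a pending one.
# All features and data below are hypothetical and for illustration only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: past cases encoded as factor dictionaries,
# labeled 1 if the plaintiff prevailed and 0 otherwise.
past_cases = [
    {"venue": "ND Cal", "claim": "breach_of_contract", "precedent_support": 0.8},
    {"venue": "SDNY",   "claim": "breach_of_contract", "precedent_support": 0.3},
    {"venue": "ND Cal", "claim": "negligence",         "precedent_support": 0.6},
    {"venue": "SDNY",   "claim": "negligence",         "precedent_support": 0.2},
]
outcomes = [1, 0, 1, 0]

# DictVectorizer one-hot encodes the categorical factors; logistic regression
# then learns how each factor shifts the predicted probability of success.
model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(past_cases, outcomes)

# Predict the outcome of a pending case from its fact pattern.
pending = {"venue": "ND Cal", "claim": "breach_of_contract", "precedent_support": 0.7}
print(model.predict_proba([pending])[0][1])  # estimated probability plaintiff prevails
```

Real products differ in their learning algorithms and the breadth of factors they consider, but the underlying pattern is the same: learn from labeled past cases, then score new ones.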


On the policy front, what will AI bring to our legal system? It’s easy to see why AI can be a valuable productivity tool that makes our legal system less expensive. AI outperforms human brains in processing speed, accuracy, and consistency; machines, unlike humans, don’t make mistakes because they’re tired. LegalMind, a company whose AI “produces responsive pleadings, discovery requests and responses, and related documents that are tailored to the claims, allegations, and requests in the legal document uploaded,” is quick to point out that its AI can do “A Day’s Work In Two Minutes.” LawGeex, a provider of an AI-powered Contract Review Automation platform, claims that its AI system has surpassed top lawyers in accurately “reading” legal and contractual documents and spotting risks. In a “showdown” pitting 20 “US-trained lawyers with decades of contract experience” against the LawGeex Artificial Intelligence Solution, the LawGeex AI achieved an accuracy rate of 94%, while the 20 human lawyers averaged 85%. Large businesses like Salesforce, Home Depot, and eBay now use AI-powered contract review services in their operations. It is not hard to imagine a future in which AI programs are entrusted with the entire contract drafting and review process.


Besides productivity gains, will AI also make our justice system fairer? This is a significantly more complicated question. On the one hand, judges are not immune to biases and mistakes. Research suggests that judges rely more often on intuitive assessments than on deliberative judgments, which gives their biases license to affect outcomes. Empirical evidence demonstrates that black defendants are incarcerated longer and more frequently sentenced to death in the criminal justice system. Judicial biases and mistakes are costly; they hurt those discriminated against, and they hurt the broader public by eroding trust in our judicial system. AI can help identify and reduce the impact of human biases and mistakes, because it operates on facts and numbers alone and can be as unbiased as the data it is given. Terence Mauri, a UK-based AI expert, predicts that “AI will usher in a new, fairer form of digital justice whereby human emotion, bias, and error will become a thing of the past” and that “hearings will be quicker and the innocent will be far less likely to be convicted of a crime they did not commit.” On the other hand, AI can also make the problem worse by baking in biases and deploying them at scale. Bias can creep into algorithms, for example, precisely because the training data include biased human decisions or reflect historical or social inequities. We must therefore address bias in AI. There are no quick fixes, but researchers have developed a wide variety of techniques that help ensure AI systems are fair.
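None of those techniques is magic, but even a simple audit can surface disparities. As a rough, hypothetical illustration (not any particular system’s method), the Python sketch below computes one widely used fairness metric, the demographic parity gap: the difference in favorable-outcome rates between groups, here calculated over made-up model predictions.

```python
# Minimal fairness audit: compare a model's favorable-outcome rate across groups.
# The predictions and group labels below are hypothetical.
from collections import defaultdict

def favorable_rate_by_group(predictions, groups):
    """Return the fraction of favorable (1) predictions for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += pred
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical model output: 1 = favorable decision (e.g., release recommended).
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = favorable_rate_by_group(predictions, groups)
print(rates)                                      # {'A': 0.75, 'B': 0.25}
print(max(rates.values()) - min(rates.values()))  # parity gap; a large gap flags possible bias
```

A real audit would use richer metrics and actual case data, but the basic move of measuring outcomes across groups before deploying a system is the same.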


For years, the legal industry has resisted technological disruption, but no one can ignore the speed at which technology is spreading and changing the nature of jobs. Sooner or later, we will have AI systems that are sophisticated, accurate, and fair enough to take on more and more legal tasks, such as reviewing and redlining contracts, conducting research, and ruling on small claims. As mundane tasks are offloaded to machines, lawyers can focus on higher-impact, strategic work, and senior judges can focus on setting legally binding precedents, creating new laws, and overseeing appeals. If anything, AI will probably make the legal jobs that survive more interesting.

* Yifan Cao is an Associate Editor on the Michigan Technology Law Review.
