Friday, January 16

Once, a junior associate made a joke about their job consisting mostly of nodding during meetings and reading documents in dimly lit rooms. It wasn’t quite a joke. For many years, those at the bottom of the legal hierarchy were left to handle the tedious tasks of fact-checking, document drafting, and never-ending reviews. That ladder started to warp today.

Artificial intelligence is gradually redirecting the legal workflow. These days, tools can scan thousands of court cases for patterns a human might miss, evaluate contracts for risk, and summarize depositions. The speed is remarkable. Often, so is the accuracy.

Focus: Integration of AI in legal services and courts
Key Players: OpenAI, Harvey AI, LexisNexis, CaseText, DoNotPay
Legal Impact: Drafting, research, litigation assistance, risk assessment
Industry Sentiment: Cautiously optimistic, with debate on ethics, accuracy, job shifts
Future Outlook: Accelerated adoption, new legal tech startups, revised legal education
Source: Harvard Law Review

One such AI system is being developed by Capita, a startup co-founded by Ben Su, a lawyer turned entrepreneur. Rather than merely plugging into the lawyer's desk, it aims to transform it. Law firms waste too much time "optimizing broken workflows," according to Su. He believes legal technology shouldn't make ineffective models more tolerable. It ought to test them.

He is not alone. Dozens of venture capital-backed startups are entering the legal market with promises of faster, cheaper, smarter tools. Harvey, another AI company, recently raised $100 million and reached a $1.5 billion valuation by simplifying corporate transactional law.

Yet despite the growth of these tools, the fundamental duties of a lawyer remain stubbornly human. No algorithm can yet respond to a hostile cross-examination in court. No chatbot can sit across from a bereaved family and guide them through estate planning with empathy. Not convincingly.

Jide Afolabi, a probate lawyer of twenty years, views AI as a highly effective helper rather than a replacement. He described how client intake has evolved: once conducted with notepads and interviews, it now starts with AI-populated forms. But he remains the one to confirm, clarify, and decide.

He told me that for every document AI creates, he still prints the first draft. "You need to smell the paper sometimes," he said, laughing. "That's where the mistakes hide."

This combination of human and AI defines the present. Attorneys who offload labor-intensive tasks to the technology have seen productivity climb significantly. But, as clients frequently attest, their value lies in judgment, trust, and an ethical instinct that machines cannot match.

I once saw a senior attorney hesitate before advising a founder who was considering bankruptcy. There was more silence than usual. It wasn't that he was at a loss for words; he understood the weight of his answer. That silence? An AI would never register its weight.

Carey Lening, a former lawyer and data privacy specialist, is currently experimenting with AI to automate her own processes. She has built templates to review contracts and written scripts to help summarize case law, and she sees clearly the gap between capability and comprehension.

"People think AI tools are plug-and-play," she told me. "But teaching the system what matters requires a human." She once spent three weeks on a prompt to identify problematic clauses in NDAs. The result only worked because she kept refining the language and the reasoning.
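Lening's point, that the tool only works after a human teaches it what matters, can be sketched in miniature. The snippet below is a hypothetical illustration, not her actual workflow: a prompt template for clause review in which the criteria list is precisely the part a human must keep revising, the kind of refinement she describes spending weeks on.

```python
# Hypothetical sketch: a clause-review prompt template.
# The criteria list is the human-maintained part; each revision of
# these lines changes what the model is asked to look for.

NDA_CRITERIA = [
    "perpetual confidentiality with no sunset clause",
    "one-way obligations that bind only the receiving party",
    "non-compete language disguised as confidentiality",
]

def build_review_prompt(clause: str, criteria: list[str]) -> str:
    """Assemble a review prompt for a single NDA clause."""
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return (
        "Review the NDA clause below. Flag it only if it matches one of "
        "these criteria, and name the matching criterion:\n"
        f"{bullet_list}\n\nClause:\n{clause}"
    )

prompt = build_review_prompt(
    "Recipient shall keep all information confidential in perpetuity.",
    NDA_CRITERIA,
)
```

The model and the wrapper code stay fixed; the judgment lives in the criteria, which is why swapping in a new reviewer's prompt is never plug-and-play.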

The danger, she points out, is not the tool. It is the lawyer's blind faith in it. The same AI that suggests useful edits can also fabricate citations or hallucinate court cases.

In one New York case, now taught as a cautionary tale, lawyers were fined for submitting an AI-generated brief citing fictitious rulings. This is no longer theoretical.

Regulatory boundaries exist as well. The practice of law is governed by licensing and professional-responsibility rules that AI cannot satisfy. An algorithm is not accountable for its advice. Clients can sue lawyers, not their chatbots.

The industry is changing, though. Legal education now includes AI literacy, and law schools are teaching tech ethics, algorithmic bias, and prompt engineering. A recent graduate is now as likely to work with engineers as with partners. The work has evolved.

Some firms resist. Despite efficiency gains, they maintain margins by raising hourly rates rather than cutting hours. Others have adopted new billing models, such as monthly retainers priced on service rather than time.

The true change may come from competition rather than regulation. If AI-enabled workflows let three firms in a practice area lower their prices, the rest will follow. Slowly, perhaps. But they will.

Afolabi says he welcomes the change, as long as it doesn't compromise care. "People come to us when their lives are messy," he said. "That doesn't change just because the machine gets better at filling out forms."

The best legal technology doesn't attempt to replicate a lawyer's mind. It removes the distraction so attorneys can concentrate on what really counts. Instead of drafting interrogatories from scratch or chasing signatures across platforms, the lawyer starts where the value begins.

This is perhaps the clearest way forward. Not AI attorneys replacing human ones, but attorneys who use AI, understand its limitations, and build tools that support judgment rather than replace it.

People will always mistake speed for wisdom and process for insight. Thankfully, the law is founded on neither. Its foundations are care, interpretation, and the courage to decide in the gray areas.
