Best Practices for AI in Litigation

January 30, 2025

By Oscar Shine, partner, Meredith Nelson, partner, and Sarah Chase, associate

We are several years into the arms race among tech incumbents and upstarts to develop new artificial intelligence applications. Many of the new tools are marketed directly to litigators, promising to make dispute resolution more efficient and less expensive. Selendy Gay tasked a dedicated team of lawyers and other legal professionals with testing litigation-specific applications of AI.

Our verdict: While AI can enhance the productivity of individual attorneys, we have yet to see truly game-changing applications that can replace the skills of top litigators. Despite advances, AI tools remain unreliable and error-prone in ways that can prove disastrous for inattentive lawyers. Litigators and their clients should proceed with caution.

Benefits & Limitations of AI in Litigation

Currently, we see four key AI uses in litigation:

  • eDiscovery, including document review and fact investigation
  • Drafting and editing work product
  • Legal research and analysis
  • Predicting litigation outcomes

eDiscovery

Our attorneys are harnessing AI to make document reviews more efficient and to speed up fact investigation. By integrating AI tools into our document review workflows, our attorneys use AI like a private search engine that responds to natural language questions about documents. The tools synthesize hundreds of documents in just a few minutes to provide answers to specific questions. The tools also cite to specific documents to support their answers, enabling attorneys to get up to speed on issues quickly.

Major players and startups are also developing tools with the potential to streamline other core discovery tasks, including coding documents using a case team’s document review protocol, predicting privilege assertions in documents based on the face of the document, mapping the relationships among participants in a conversation or email thread, and creating first drafts of privilege logs.

Written Work Product

AI tools also save time in creating work product. Automated first drafts of summaries of client calls or internal meetings can reduce the hours spent on administrative tasks, allowing lawyers to focus on strategic work and collaboration with clients.

Organizations need to weigh those benefits against potential risks. For instance, if employees are creating records of meetings, those records may become discoverable in litigation. Similarly, AI tools integrated into existing software—such as Microsoft’s Copilot—create a new risk: senior lawyers or management may not realize that employees have used AI to complete tasks. Organizations need to stay vigilant to make sure employees know when it is appropriate to use AI.

Legal Research and Analysis

Generative AI also enables attorneys to conduct legal research more efficiently. Industry-specific tools exist to summarize case law, statutes, and regulations, reducing the time it takes to filter results in legal databases. General-purpose tools can also improve efficiency by, for instance, quickly summarizing a detailed article or synthesizing many articles into an explanation of a complex topic. In theory, these tools enable attorneys to quickly assess which points require further attention.

Our attorneys have had mixed results with AI for legal research. Some report that, when properly prompted, the tools give valuable answers identifying controlling authority and suggesting follow-up points. Others, however, find that even the best research tools remain prone to so-called “hallucinations,” where the tools produce plausible-sounding—but totally false or inaccurate—responses. The tools can misstate case holdings, rely on law from inapposite jurisdictions, cite nonexistent cases or laws, and rely on outdated precedents. These hallucinations occur most often in situations where the law is unsettled—for example, in reconciling diverging district court opinions on the same topic. Recent academic studies found that AI research tools hallucinate between 17% and 33% of the time.[1]

Predictive Analytics

Lastly, predictive analytics tools seek to forecast litigation outcomes. These tools work by synthesizing datapoints across many cases to predict likelihoods of success based on factors like jurisdiction, issue, and judge. They also can help predict decision-making patterns of specific judges and analyze the language most frequently used by a judge when granting or denying a motion.

The promise of predictive analytics is to automate activity our firm already performs—when we take on a case, we look closely at decisions by our judge and in our jurisdiction to understand what kinds of arguments get traction and to try to predict outcomes for particular motions and the litigation overall. This is time-consuming work, and AI may offer some efficiencies.

Conclusion

Though they show promise, AI tools simply aren’t sufficiently reliable today for high-stakes litigation. Models consistently struggle with “professional judgment” and navigating nuance—and those limitations increase as issues and cases become more complex.

That is not to say the tools are useless. We have found them helpful for getting a first cut of answers and for brainstorming arguments and new angles on issues. But human engagement remains critical. That means thoroughly training employees on the proper use and limitations of AI. Only when combined with human intellect and judgment can today’s AI programs truly deliver an edge over your adversaries.

[1] See V. Magesh et al., “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” June 6, 2024 (preprint), https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf; see also M. Dahl et al., “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models,” 16 J. Legal Analysis 64 (2024), https://arxiv.org/pdf/2401.01301.