AI in eDiscovery: Outside Counsel’s Guide to Decoding Accuracy
August 26, 2024
Summary: Decode the benefits of AI accuracy for your corporate clients with our guide. We provide a primer on the different types of AI in eDiscovery and walk through privilege review examples from real-world matters.
With the fast pace of advancements in AI, litigators and eDiscovery practitioners are facing a pivotal moment in the evolution of their practice groups.
Legal technology experts have long understood that responsibly developed AI—with proper scoping, iterative testing, and validation—can significantly improve the accuracy of analytic tasks in eDiscovery. Today's AI technologies, especially large language models (LLMs), are pushing the boundaries of that accuracy even further.
Outside counsel in eDiscovery matters often play a central role in guiding corporate counsel on their AI journey. To do so, you’ll need to understand the fundamentals of AI accuracy. For example, you’ll want to be able to advise on when it makes sense to leverage AI (and when it doesn’t), explain how LLMs differ from traditional machine learning and other review approaches when it comes to accuracy, and translate how AI-driven boosts in accuracy can help in-house counsel achieve their eDiscovery goals.
Consider this your guide to help you decode the benefits of AI accuracy to your corporate clients. We’ll do a quick primer on the different types of AI technology in eDiscovery and then look at several examples of privilege review from real-world matters.
Of course, AI's eDiscovery applications extend well beyond privilege, but we’ll zoom in on privilege review for the purposes of this article: our five-year track record of developing and using LLMs for privilege review has given us a treasure trove of empirical and comparative evidence that makes the accuracy advantages in this area easy to demonstrate and explain.
Understanding the differences in AI eDiscovery technology
In today's eDiscovery context, there are two different types of AI built with LLMs. Both can understand the nuances and peculiarities of language as people actually write and speak it by learning from vast stores of information (see our previous post for a deeper dive into LLMs and the two types of AI).
- Predictive AI leverages prior data to predict outcomes on new data. As outside counsel, you’re most likely familiar with the concept of predictive AI from using it in matters to score the likelihood that a document falls into a certain classification (responsive, privileged, etc.) based on a previously coded training set of documents.
- Generative AI creates novel content based directly on input data. A newcomer to the AI arena in the last two years, generative AI has so far proven effective for developing summaries of key documents in fact development, drafting privilege log lines, and answering questions about the facts present in key documents.
How AI accuracy is gauged
Predictive and generative AI, with their distinct goals and outcomes, necessitate different approaches to measuring accuracy.
- Predictive AI’s accuracy is measured by whether it can correctly classify documents according to established criteria (see the sketch after this list for how that is typically quantified).
- Generative AI’s accuracy is measured by how well its output addresses the prompt's requirements, which can sometimes favor creative rather than strictly factual responses. In other words, whether a response counts as accurate depends heavily on the prompt, the tuning mechanisms, and the validation applied.
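To make the predictive side concrete, here is a minimal sketch, in Python, of how classification accuracy is commonly quantified against a set of attorney-coded validation documents using precision and recall. The function and sample labels below are illustrative assumptions, not taken from any particular tool or matter.

```python
# Minimal sketch: scoring a predictive AI privilege model against
# attorney coding on a validation sample. All data here is illustrative.

def precision_recall(attorney_calls, model_calls):
    """Compare model calls to attorney calls (True = privileged)."""
    true_pos = sum(1 for a, m in zip(attorney_calls, model_calls) if a and m)
    false_pos = sum(1 for a, m in zip(attorney_calls, model_calls) if not a and m)
    false_neg = sum(1 for a, m in zip(attorney_calls, model_calls) if a and not m)

    precision = true_pos / (true_pos + false_pos)  # of docs the model flagged, how many are truly privileged
    recall = true_pos / (true_pos + false_neg)     # of truly privileged docs, how many the model caught
    return precision, recall

# Hypothetical validation sample of ten documents
attorney = [True, True, False, False, True, False, True, False, False, True]
model    = [True, True, False, True,  True, False, True, False, False, False]

p, r = precision_recall(attorney, model)
print(f"Precision: {p:.0%}, Recall: {r:.0%}")  # Precision: 80%, Recall: 80%
```

In real workflows, these metrics would be reported from a statistically valid, attorney-reviewed sample rather than a handful of documents, but the arithmetic is the same.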
Despite these differences, both predictive and generative AI have use cases that allow outside counsel to measure their accuracy and report those results back to their corporate counsel in a way that shows the tool’s impact on their bottom line and eDiscovery goals.
Showing ROI for predictive AI: AI models outperform privilege keyword searches
Our first example comes from a quick-turn government investigation of a large healthcare company. For this matter, we worked with counsel to train an AI model to identify privilege and ran it in conjunction with privilege search terms.
The privilege terms came back with 250K potentially privileged documents, but the AI model found that more than half of them (145K) were unlikely to be privileged. Attorneys reviewed a sample of the disputed docs and agreed with the AI. That gave counsel the confidence they needed to remove all 145K from privilege review—and save their client significant time and money.
We saw similar results in another fast-paced matter. Search terms identified 90K potentially privileged documents. Outside counsel wanted to reduce that number to save time, and our AI privilege model did just that. Read the full story on AI and privilege review for details.
Accuracy ROI: So, in the above cases, did outside counsel have reliable metrics to show their corporate clients how accurately the AI model was classifying documents? They sure did. AI was quantifiably more accurate than search terms.
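For readers who want to see what that kind of quantifiable validation can look like, below is a rough sketch of the sampling arithmetic counsel might use before removing an AI-flagged "unlikely privileged" set from review. The sample size, hit count, and confidence method (a simple normal approximation) are hypothetical assumptions for illustration, not the actual protocol or results from the matters above.

```python
# Rough sketch: validating an AI "unlikely privileged" set with a random sample.
# The sample size, hit count, and confidence method are hypothetical.
import math

population = 145_000      # documents the AI scored as unlikely to be privileged
sample_size = 400         # randomly selected documents reviewed by attorneys
privileged_found = 2      # sample documents attorneys judged privileged after all

rate = privileged_found / sample_size
# 95% confidence interval on the privilege rate, using a simple normal approximation
margin = 1.96 * math.sqrt(rate * (1 - rate) / sample_size)
upper_bound = rate + margin

print(f"Observed privilege rate in sample: {rate:.2%}")
print(f"Upper bound (95% CI): {upper_bound:.2%}, "
      f"or roughly {int(upper_bound * population):,} documents at most")
```

The point is simply that a random sample plus a confidence interval turns "the attorneys agreed with the AI" into a defensible, reportable number.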
Showing ROI for generative AI: AI can draft more accurate priv log lines than people
Let’s move on to the newcomer, generative AI.
We have begun to use generative AI to draft privilege log descriptions. That’s an area where defining accuracy is clear-cut: How well does the log explain why the doc is privileged?
During the pilot phase of our AI priv log work, we partnered with a law firm to answer that very question. With permission from their client, the firm took privilege logs from a real matter and sent the corresponding documents through our AI solution. Counsel then compared the log lines created by our AI model against the original logs from the matter.
They found that the AI log lines were 12% more accurate than those drafted by third-party contract reviewers. They also judged the AI log lines to be more detailed and better written.
Accuracy ROI: Once again, outside counsel walked away with quantifiable metrics they could show their corporate client, establishing that generative AI was more accurate than their traditional approach of using third-party contract reviewers to write initial drafts of privilege log lines.
We have evidence from live matters as well. During one matter with a massive dataset and an urgent timeline, outside counsel used our generative AI to create privilege logs and asked reviewers to QC them. During QC, half the log lines sailed through with zero edits, while the other half needed only slight adjustments. You can see what else AI achieved in the full case study about this matter.
More accurate review equals more efficient review (with less risk) for your corporate clients
Numbers are great—but your clients will also want to know the real-world impact of AI accuracy on their eDiscovery goals and budget before agreeing to use it on their matters. So, what material benefits can you convey to show them that improving accuracy is worth the upfront investment in AI?
Here’s the impact, broken down in terms that your corporate clients will appreciate:
- Better use of attorney and reviewer time. With AI accurately identifying priv and non-priv documents, your firm’s attorneys (and third-party contract attorneys) will spend less time billing your client to review no-brainers and more time on documents that require more legally nuanced analysis. And for risk-averse clients in highly regulated industries who want eyes on every document no matter what, you can still optimize review time (and costs) for them by sending documents that are highly unlikely to be privileged to lower-cost contract resources and reserving your higher-priced reviewers for close calls. This benefit should resonate with your own team as well. It frees up valuable time to work on more strategic legal tasks that better demonstrate your value to your clients, while proving that you’re a good partner who is looking after their bottom line.
- Opportunities for culling. You can choose a cutoff at a recall level that makes sense for the matter (including even 100%) and automatically route every document scoring below that threshold out of review and straight into production (see the sketch after this list for how a cutoff can be selected). This is a crisp, no-fuss way to show your clients that you’re avoiding spending your team’s time and resources (and their money) on documents highly unlikely to be privileged. And once again, for your own team, the less time you’re spending billing on mundane tasks, the more time you can spend on the legal analysis and nuanced review work that clients appreciate.
- Lower risk of inadvertently producing privileged documents. This one is pretty straightforward for both you and your corporate clients. The better your system is for classifying privilege, the less likely you are to let privileged info slip through review.
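As a companion to the culling point above, here is a minimal sketch, in Python, of how a score cutoff might be selected to preserve a chosen recall level on a coded validation set. The scores, labels, and helper function are hypothetical assumptions, not a description of any specific tool.

```python
# Minimal sketch: choosing a score cutoff that preserves a target recall
# on a coded validation set so lower-scoring documents can be culled from review.
# Scores, labels, and the helper function are hypothetical.
import math

def cutoff_for_recall(scores, is_privileged, target_recall=0.95):
    """Return the score at or above which the validation set's privileged
    documents are captured at the target recall level."""
    priv_scores = sorted((s for s, p in zip(scores, is_privileged) if p), reverse=True)
    needed = math.ceil(target_recall * len(priv_scores))
    return priv_scores[needed - 1]

# Hypothetical validation set: model privilege scores plus attorney calls
scores =        [0.97, 0.91, 0.88, 0.74, 0.62, 0.45, 0.31, 0.22, 0.10, 0.05]
is_privileged = [True, True, False, True, True, False, False, False, False, False]

cutoff = cutoff_for_recall(scores, is_privileged, target_recall=1.0)
print(f"Review documents scoring {cutoff} or higher; cull the rest")  # cutoff: 0.62
```

In practice, the chosen cutoff and recall target would be documented as part of the matter's validation protocol so the approach is defensible if challenged.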
What does accuracy mean to you?
I hope this helps you guide your corporate clients on their AI journey in eDiscovery by clarifying accuracy metrics and explaining how to effectively demonstrate the benefits of that accuracy. If you’re interested in a deeper dive, explore AI at Lighthouse.
eBook Available Now
Discover Your AI POV
As outside counsel, how important is AI accuracy to you? Would more accurate AI results help you stand out to clients? What other ways would it benefit your legal practice? Download our interactive AI guide that covers accuracy and five other dimensions of AI to help you craft an informed, personal point of view on AI’s potential.