
Episode 112: Is Elusion Testing a Good Way to Test the Viability of Search Terms?

In Episode 112, our CEO, Kelly Twigger, discusses how Special Master Philip J. Favro encouraged the use of elusion testing following the application of search terms as a method for understanding whether the search terms selected captured relevant, responsive documents.


Introduction

Welcome to this week’s episode of our Case of the Week series brought to you by eDiscovery Assistant in partnership with ACEDS. eDiscovery Assistant is a SaaS-based platform and knowledge center that helps lawyers and legal professionals leverage the power of ESI as evidence. It is the only legal research database devoted exclusively to eDiscovery case law.

My name is Kelly Twigger. I am the CEO and founder at eDiscovery Assistant, as well as the principal at ESI Attorneys. Each week, I choose a recent decision in eDiscovery case law from our eDiscovery Assistant database and talk to you about the practical implications, what it means for you, your practice, your clients, and how to do discovery of ESI better.

The bulk of our learning in eDiscovery comes from case law. Unlike in any other substantive area of the law, the constantly evolving landscape of technology means that trial courts at both the federal and state levels are regularly issuing new opinions on parties’ obligations around ESI. Our goal today is to look at this recent decision and delve into how you can leverage what you learn from it every day for your clients.

Before we dive in, if you haven’t yet had a chance to grab our 2022 Case Law Report, download a copy for your perusal. Each of the decisions in eDiscovery Assistant is a public link, meaning that you can link to those decisions in your writing. You can also review the full text of a decision without needing a subscription or login.

Background

This week’s decision comes to us from Deal Genius, LLC v. O2Cool, LLC. It is one of eight or nine decisions from this case in our database, and we have previously covered an earlier decision in the matter. The decision here is from July 14, 2023 and comes from Special Master Philip Favro. Many of you will know Phil; he is a very prominent consultant and Special Master in our space and was part of a panel that I moderated at the Georgetown eDiscovery Conference last fall.

As always, we tag each of the decisions in our eDiscovery Assistant database with issues. This week’s issues include sampling, special master, proportionality, and search terms.

Facts

The underlying litigation here involves disputes over the validity of patents as well as infringement. And as we’ve seen in multiple decisions on the Case of the Week, IP cases are often fraught with discovery issues because they are very factually complex.

Both parties in this case make fans that hang around one’s neck on a lanyard; it’s the design of the fan as it hangs on the lanyard that is at issue here. As I mentioned, this is the second decision that we’ve covered in this case and the eighth in the database on this matter. Back in Episode 61, we covered United States Magistrate Judge Jeffrey Cole’s decision from very early in discovery, when the parties could not agree on the search terms to be run against email. That was in March 2022, when discovery was set to close in June 2022. We are now in July, a year later. The crux of this decision from Special Master Favro is whether, and which, additional relevant documents should be produced from the null set now that Deal Genius has concluded elusion testing following its production.

The discovery disputes here grew out of O2Cool’s concerns that Deal Genius had failed to make complete email productions from its five designated email custodians in response to O2Cool’s five email search queries. Those disputes led to the appointment of a special master, and this is at least the fifth decision or order from Special Master Favro since October 2022.

The Special Master worked with the parties to develop an order, likely what we would call an ESI protocol (though not a comprehensive one; it is specific to this production), that was entered on January 30, 2023 and required Deal Genius to redo its production of documents in response to O2Cool’s search queries two through five. Under the terms of that January 30, 2023 order, Deal Genius was required to conduct elusion testing after making its production of relevant emails identified in response to those search queries by reviewing a sample of documents from the null set. The null set is the subset of documents that were either coded non-responsive or that did not hit on search queries two through five.

The order also directed Deal Genius to disclose the number of documents in the null set, the null set sample size, and the precise number of documents produced from the null set sample. Any relevant documents identified during the elusion testing were to be produced to O2Cool within ten days. Once Deal Genius completed its elusion testing obligations, the parties then had seven days to meet and confer to determine whether Deal Genius should run any additional search queries to identify other relevant documents that the original search terms might have missed.
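The order, at least as recounted here, does not say how the null set sample size was to be chosen. A common approach is a statistical sample at an agreed confidence level and margin of error, with a finite population correction for the size of the null set. Here is a minimal Python sketch of that calculation; the 95% confidence level and ±2% margin of error are my illustrative assumptions, not figures from the order.

```python
import math

def null_set_sample_size(population: int, z: float = 1.96,
                         margin: float = 0.02, p: float = 0.5) -> int:
    """Sample size for a simple random sample, with a finite population
    correction. z=1.96 corresponds to a 95% confidence level; p=0.5 is
    the most conservative assumption about the share of relevant docs."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Deal Genius later reported a 590,706-document null set and a 2,397-document sample.
print(null_set_sample_size(590_706))  # 2392
```

For what it’s worth, that calculation lands close to the 2,397-document sample Deal Genius actually reviewed.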

Effectively, O2Cool could then request additional search terms suggested by the documents that Deal Genius produced from the null set. This is a supplemental way to assess whether the search terms hit on what was relevant within the collected information. It’s a novel approach to the situation, and a terrific one, I might add.

On March 3, 2023, Deal Genius produced 54 documents to O2Cool in response to the January 30, 2023 order. The sole issue now pending before the Special Master arose out of the elusion testing that Deal Genius conducted to ensure that relevant documents contained in the null set were not left unproduced.

After the elusion testing, Deal Genius produced two documents and disclosed that the null set contained 590,706 documents, that its null set sample contained 2,397 documents, and that it had identified only two additional responsive documents from the null set sample, which it produced to O2Cool. The parties were then supposed to meet and confer within ten days, or by March 20, 2023.

Pay attention to those numbers. That is an incredibly low initial production rate: 54 documents from the entire volume collected. And as we’re going to see from Special Master Favro’s calculations, the elusion rate is remarkably low as well.

The parties did not meet and confer by March 20th. Instead, at the next conference with the Special Master on April 11th, the Special Master asked whether the parties had conferred on the issue and told them to do so soon. O2Cool then requested a new search term, noting that the earlier related search term had revealed only 28 responsive documents and arguing that the new term was justified by the two documents produced from the null set. Deal Genius objected to running the additional search term.

Following that April 11 conference, the Special Master ordered Deal Genius to run the search term and share the total number of hits, the number of unique hits, and the overall size of the null set with O2Cool and the Special Master. Essentially, the Special Master said: go ahead and run that search term, we’ll see what the results are, and then I’ll determine whether production of that information is relevant and proportional.
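A quick note on terminology: “unique hits” are documents that hit on the new term but on none of the existing queries, which makes them a simple set difference. Here is a minimal sketch with made-up document IDs; the hit sets and names are mine, not anything from the order.

```python
# Hypothetical hit sets keyed by search query; the document IDs are illustrative.
hits_by_query = {
    "query_2": {"DOC-001", "DOC-002"},
    "query_3": {"DOC-002", "DOC-003"},
    "query_4": {"DOC-004"},
    "query_5": {"DOC-001", "DOC-004"},
}
new_term_hits = {"DOC-003", "DOC-005", "DOC-006"}

# Everything the existing queries already capture.
already_hit = set().union(*hits_by_query.values())

# Unique hits: documents that only the new term surfaces.
unique_hits = new_term_hits - already_hit

print(len(new_term_hits), sorted(unique_hits))  # 3 ['DOC-005', 'DOC-006']
```

The unique hit count is the number that matters for burden arguments, since those are the only documents a new term adds to the review pile.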

The second, modified search term revealed 50 hit documents (almost as many as the entire production from search queries two through five), 18 of which were unique, and showed that the size of the null set had by then grown to 662,502 documents. Deal Genius refused to review the 18 unique hits from the second search term, and that refusal is the dispute now before us.

Analysis

Let’s turn to Special Master Favro’s analysis of the situation following those facts. The Special Master acknowledges that the biggest issue with search terms is that they are almost always underinclusive. It is virtually impossible, without testing the data, to choose an exact set of words, even with wildcards or other expanders, that will identify all responsive documents. We talked about this on previous episodes of Case of the Week, including our episode just two weeks ago, in which we discussed how the iterative process of developing search terms really depends on the party with the data reviewing that data and understanding which terms will return the most responsive documents. Here, the Special Master notes:

To ameliorate the impact of under inclusive search results and other limitations with search terms, parties should adopt iterative search term development procedures and quality assurance practices that can help identify relevant information not retrieved by the original search terms. One of the most important practices that a producing party can implement at the conclusion of the process is elusion testing.

What is elusion testing? Most often used after TAR, elusion testing looks at a sample of the documents that did not hit on the search terms, together with the documents marked non-responsive from the search term hits. Together, those documents are called the null set. The producing party draws a sample of the null set documents, reviews those documents, determines whether any are relevant, and, to the extent there are, produces those relevant documents to the requesting party. The next step is for the receiving party to analyze the language of those documents and determine whether it wants to request additional search terms from the producing party. The results of the elusion testing can be evaluated statistically by calculating the elusion rate, which is the number of relevant documents identified in the null set sample divided by the null set sample size.
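To make the mechanics concrete, here is a minimal sketch of that process in Python. It assumes document IDs are held in sets, and the is_relevant callback stands in for the human review that happens in practice; none of these names come from the decision.

```python
import random

def run_elusion_test(collected: set[str], search_hits: set[str],
                     coded_non_responsive: set[str], sample_size: int,
                     is_relevant) -> tuple[set[str], float]:
    """Sample the null set, 'review' it, and return the relevant documents
    found along with the elusion rate (relevant found / sample size)."""
    # The null set: documents that never hit a search term, plus the hits
    # a reviewer coded non-responsive.
    null_set = (collected - search_hits) | coded_non_responsive
    sample = random.sample(sorted(null_set), sample_size)
    relevant = {doc for doc in sample if is_relevant(doc)}
    return relevant, len(relevant) / sample_size

# The decision's reported numbers: 2 relevant documents in a 2,397-document sample.
print(f"{2 / 2_397:.2%}")  # 0.08%
```

That last line is where the 0.08% elusion rate you’ll see below comes from.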

As the Special Master notes here (and he goes through an excellent analysis of how to conduct elusion testing for search term results):

If the elusion rate falls below a certain benchmark on which the parties have agreed (for example, 2%), the parties may decide to forego any further searches for relevant information in the null set. However, where the null set is really large and the percentage of relevant documents in the overall population is very low to begin with, the parties may consider running additional search terms against the null set despite a low elusion rate.

What the Special Master is saying is that we’ve got a null set of some 662,000 documents in a case where only about 100 documents are relevant, and when we run the elusion testing, we’re still finding relevant documents. That suggests that maybe we need to revisit the search terms and figure out whether they are in fact the best ones for finding the relevant documents here. According to the Special Master, “this approach to validation of a search and retrieval process can better safeguard against the possibility that discoverable information remains concealed in the null set due to the sheer volume of data collected.” I agree.
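To see why a low elusion rate can still matter at this scale, project the sample’s rate across the full null set. This is a back-of-the-envelope point estimate using the decision’s numbers; with only two positives in the sample, the true confidence interval around it is wide.

```python
null_set_size = 590_706   # the null set at the time of the elusion test
sample_size = 2_397       # the sample Deal Genius reviewed
relevant_found = 2        # relevant documents found in that sample

elusion_rate = relevant_found / sample_size           # ~0.08%
estimated_left_behind = null_set_size * elusion_rate  # point estimate

print(f"{elusion_rate:.2%} of {null_set_size:,} is ~{estimated_left_behind:.0f} documents")
# 0.08% of 590,706 is ~493 documents
```

Against a production of roughly 100 relevant documents, a point estimate of several hundred relevant documents still sitting in the null set is exactly the large null set, low prevalence scenario the Special Master describes.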

The Special Master then looks at whether the requested term and the production of the documents at issue are relevant and proportional to the needs of the case, as required by Rule 26. Elusion testing aside (although it raises some questions for us), at this point the Special Master wants to determine whether these 18 documents should be reviewed and produced.

The Special Master finds that relevance is clear and that there is no real dispute between the parties about it. As to proportionality, the Special Master notes that the parties did not address the six proportionality factors of Rule 26 at all, but that it is “readily apparent” that the discovery sought is proportional to the needs of the case, and that Deal Genius has not offered any evidence that reviewing and producing those 18 documents would be unduly burdensome.

Finally, the Special Master notes that the very low elusion rate of 0.08% that we discussed earlier (two relevant documents out of the 2,397-document null set sample) argues further for the production of the new hits, and that the parties should “consider running additional search terms to identify relevant information that may have eluded search queries two through five.”

Takeaways

What are our takeaways from this excellent decision from Special Master Favro?

What I love about this decision, and the Special Master’s approach, is that he really forced the parties to dig into the analysis of the data and to understand, using the numbers, whether the search terms selected were providing the relevant documents for the case. That’s one of our standard takeaways from Case of the Week: get in the data and understand what it’s telling you. Here, Special Master Favro really looked at the numbers to evaluate whether the search terms the parties had chosen were in fact hitting on relevant documents. The null set (the non-responsive documents plus those that did not hit on the search terms) is very large here, and that really suggests the search terms need to be revisited.

Using elusion testing to validate search terms is an excellent way to resolve some of the ambiguity around whether the search terms chosen cover the responsive documents. But in reality, if the initial search terms miss the mark, even elusion testing may not help the parties fix the issue, as the very low elusion rate in this case shows. It’s one tool to assist with the process, and Special Master Favro did an excellent job here of understanding the limitations of search terms and working through that issue with the parties using elusion testing.

With the numbers that Deal Genius produced, either O2Cool has no case or the search terms it chose were not great; I’m not sure elusion testing was going to help all that much. In all, the parties here are fighting about reviewing and producing a very small number of documents, and it really raises the question of why they’re fighting over this at all. Either there’s something else underlying the issue or the hostility between the parties is considerable. It’s counsel’s job to understand when the cost of fighting far exceeds the cost of simply providing the discovery, and to advise the client as such. We cannot know all of the underlying facts here because we stay within the four corners of the decision, so keep that in mind. But don’t lose the forest for the trees. Getting caught up in an argument that costs far more than reviewing and producing 18 documents, especially given the ease with which the Special Master found the responsive hits both relevant and proportional, doesn’t make a lot of sense.

Conclusion

That’s our Case of the Week for this week. Thank you so much for joining me. We’re back again next week with another decision from our eDiscovery Assistant database. As always, if you have a suggestion for a case to be covered on Case of the Week, drop me a line. If you’d like to receive the Case of the Week delivered directly to your inbox via our weekly newsletter, you can sign up on our blog. If you’re interested in doing a free trial of our case law and resource database, you can sign up to get started.

Thanks so much, and have a great week.


