
Episode 111: Another Court Answers the Question – Can a Party Use TAR after Applying Search Terms?

In Episode 111, our CEO, Kelly Twigger, discusses a recent decision addressing the ongoing argument over whether TAR can be applied after the parties have agreed to search terms to identify a pool of potentially responsive documents.


Introduction

Welcome to this week’s episode of our Case of the Week series, brought to you by eDiscovery Assistant in partnership with ACEDS. eDiscovery Assistant is a SaaS-based legal research platform and knowledge center for lawyers and legal professionals, and it is the only legal research database devoted exclusively to eDiscovery case law.

My name is Kelly Twigger. I am the CEO and founder at eDiscovery Assistant, as well as the principal at ESI Attorneys. Each week, I pick a case from our eDiscovery Assistant database and talk about the practical implications of that decision for you, your clients, and your practice.

Before we dive in, if you haven’t yet had a chance to grab our 2022 Case Law Report, download a copy for your perusal. Each of the decisions in eDiscovery Assistant is a public link, meaning that you can link to those decisions in your writing. You can also review the full text of a decision without a subscription or login.

Special note: My analysis on Case of the Week is limited to the four corners of the decisions from the court and the information that’s provided in those decisions. That’s intentional on my part. It’s intentional because the goal of our Case of the Week series is to identify how these decisions inform lawyers and legal professionals of what they need to do to prevail on electronic discovery issues. If a court does not include facts that were provided to it in filings, then counsel need to figure out how to present them more effectively to get the court’s attention and to make sure the court understands the importance of those facts on a particular issue. I welcome any of the attorneys who are involved in a case that we discuss to weigh in and advise me or our audience of what else is happening in the case that influences a ruling and that we otherwise can’t know about. Obviously, Case of the Week is Monday-morning quarterbacking. Hopefully, it’s done with an analysis that’s thoughtful and thought-provoking for our audience.

Background

This week’s decision comes to us from Garner v. Amazon.com, Inc. It’s a decision from May 19, 2023, and it is from United States District Judge Robert Lasnik. Judge Lasnik is very prolific when it comes to eDiscovery decisions. We have 59 decisions from Judge Lasnik in our database. This decision, however, is the only one on technology assisted review. In order to be able to provide appropriate context for the decision today, I want to take you back a little bit to a decision from earlier in the case.

Plaintiffs in this case alleged that the defendant’s Alexa-enabled devices were sold to consumers using unfair or deceptive trade practices in advertising and that those devices illegally record conversations in violation of state law. The plaintiffs are seeking statutory liquidated and other damages in excess of $1 billion. It’s a very large case from a value perspective.

In 2022, the parties negotiated search terms to be applied to six custodians at Amazon. And according to the court’s October 31, 2022 decision, the plaintiffs came to the court seeking an order that Amazon be required to run 38 search term strings across the custodians’ data. The iterations before the court on that motion had included at least five sets of revisions by the plaintiffs, through negotiations with defense counsel, and those revisions led to large reductions in the number of search term hits for the proposed search terms.

Nevertheless, the defendant still objected to the search terms as overbroad, disproportionate to the needs of the case, and likely to impose an undue burden on Amazon. In the October decision, the Court looked at two of the 38 search strings proposed by plaintiffs. Looking at those, the terms seemed very broad. The court even noted, in reviewing one of the strings, that “if you select the broadest term from each list that is joined by an ‘and’ in the first instance that would be user and understand and interact, you will undoubtedly return some documents that are not relevant to plaintiff’s claims.”

Any discovery professional thinking about Amazon’s data knows that, with no limiters referring directly to Alexa and no proximity constraints on those search terms, the result set from “user and understand and interact” is going to be very large and contain significant numbers of non-responsive documents. But the court noted in its October decision that defendants did not identify “any discrete alterations in the proposed search terms that would ensure that all responsive documents would be relevant and such precision cannot reasonably expected using the blunt tool of a Boolean search.” As such, in its ruling, the Court ordered Amazon to run all 38 search term strings that the plaintiffs proposed across not just the six custodians that had been identified, but a total of 38 custodians at Amazon.

Amazon then ran those search terms and two weeks later informed the plaintiffs that it would apply TAR to review the results of the search term hits. That is the groundwork for the facts that are before us on the decision we’re looking at today.

Facts

Plaintiffs objected and refused to discuss the use of TAR at all, saying that it was too late to decide to use TAR. They brought this motion to prohibit Amazon from using TAR to review the search hits and instead asked the Court to require Amazon to conduct a manual review.

This is the fourth or fifth time we’ve covered on Case of the Week the issue of whether it is appropriate to use TAR after a set of documents has been defined using search terms. The Court here heard oral argument on the motion, which is important because presenting data-related metrics is always easier to do in person. It also helps to be able to take questions from the Court and help the Court really understand and be educated on the issues at hand.

Following the use of TAR, Amazon produced only 2,564 responsive documents out of an initial universe of 2,036,172 documents. That’s a responsiveness rate of roughly 0.13% on more than 2 million documents, and it’s really why the plaintiffs are upset here. The Court’s analysis starts with essentially the two arguments that plaintiffs raise. First, the timeliness issue: that it’s too late for Amazon to raise the use of TAR because the parties already agreed on search terms, and the plaintiffs want a manual review of the documents. Second, that the responsiveness rate is so low that the plaintiffs believe there must be some sort of flaw in the TAR process that Amazon used.
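For readers who want to check the arithmetic, here is a minimal sketch in Python using only the figures stated in the decision (the variable names are mine):

```python
# Figures as reported in the decision.
total_docs = 2_036_172  # initial universe identified by the search terms
produced = 2_564        # responsive documents produced after TAR

responsiveness_rate = produced / total_docs
print(f"Responsiveness rate: {responsiveness_rate:.4%}")  # ~0.1259%
```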

Analysis

The Court starts its analysis by looking at the Model ESI Agreement in the Western District of Washington. The court notes the following regarding that document: “the Model ESI Agreement in this district clearly contemplates using TAR to filter, not just locate, documents. And the ESI Agreement, entered into in April 2022, simply directs the parties to confer to attempt to reach agreement on appropriate computer or technology aided methodologies before any such effort is undertaken. The Court finds that the use of search terms is not, standing alone, a bar to using technology to further refine the production.” So TAR is endorsed by the court. As to the timeliness objection, the Court found that Amazon’s raising of the use of TAR once the universe of documents was identified was not unreasonable, and that the plaintiffs’ failure to discuss it per the ESI Protocol was inexcusable. The Court overruled the timeliness objection.

The Court then looked at the responsiveness rate of the production following TAR, which is, as I noted, extremely low. The Court walks through the process Amazon described, noting that the recall rate Amazon initially proposed was 75%, a figure the court found too low and one that is generally on the low side from a recall perspective. As the process moved forward, Amazon continued reviewing well past the 75% recall rate and ultimately conducted a manual review of more than 1.8 million of the 2,036,172 documents. At that point, the purpose of using TAR has pretty much been eclipsed altogether.

Reviewers also sampled 1,527 of the remaining 224,900 unreviewed documents and found no responsive documents, on top of the roughly 2,500 responsive documents found in the 1.8 million that had already been reviewed. The Court came to this conclusion: “because humans reviewed the vast majority of the universe of documents, and the statistical estimate of responsive documents remaining in the unreviewed documents is 0%, the estimated recall rate approaches 100%. There is no reason to suspect that the low percentage of production is, as plaintiffs argue, the result of Amazon’s use of TAR versus human review.” With that, the Court denied the motion and required Amazon to produce those roughly 2,500 responsive documents.
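To make the Court’s math concrete, here is a minimal sketch of the recall estimate using the figures in the decision. The 95% upper bound via the “rule of three” is my own illustrative addition, not something the Court computed:

```python
# Figures as reported in the decision.
reviewed = 1_800_000      # documents manually reviewed ("more than 1.8 million")
responsive_found = 2_500  # responsive documents found in review (approx.)
unreviewed = 224_900      # documents never manually reviewed
sample_size = 1_527       # random sample drawn from the unreviewed set
sample_hits = 0           # responsive documents found in that sample

# Point estimate of responsive documents left in the unreviewed set.
est_remaining = (sample_hits / sample_size) * unreviewed  # 0.0

# Estimated recall = found / (found + estimated remaining).
est_recall = responsive_found / (responsive_found + est_remaining)
print(f"Estimated recall: {est_recall:.1%}")  # 100.0%

# Illustrative only: with zero hits in a sample of n, the "rule of three"
# puts an approximate 95% upper bound on the underlying rate at 3/n.
upper_bound_docs = (3 / sample_size) * unreviewed
print(f"~95% upper bound on missed responsive docs: ~{upper_bound_docs:.0f}")  # ~442
```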

Takeaways

Most of us who’ve used TAR effectively in review know that a very low responsiveness rate generally means that the initial data set was overbroad. For those of you who are not familiar with TAR, it’s very important to understand that recall is a measure of what percentage of the responsive documents in a data set have been classified correctly by the TAR algorithm. It’s one of the measurements used to validate the results of TAR. When recall is 100%, the algorithm has correctly identified all of the responsive documents in the collection.
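In standard information retrieval terms (my formulation, not the Court’s), recall reduces to a simple ratio:

```python
def recall(found_responsive: int, missed_responsive: int) -> float:
    """Fraction of all truly responsive documents that the review found."""
    return found_responsive / (found_responsive + missed_responsive)

# Example from this case: ~2,500 found, ~0 estimated missed -> recall of 1.0
print(recall(2_500, 0))  # 1.0
```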

The Court initially balked at Amazon’s proposed recall rate of 75% as too low, but that really became a non-issue as the Court reviewed the actual process that Amazon undertook, which put recall closer to 100%. And that’s the real takeaway here. You have to clearly demonstrate the facts and numbers in your review process. Someone needs to keep track of what was done and how, so that you can provide the data to the court to make your argument, or to opposing counsel if you can avoid motion practice. Simple assertions of the overbreadth of the search terms were not sufficient for Amazon to carry the day in the October decision. But if Amazon had done some review of the proposed search term hits and shown the Court the responsiveness rate of those documents, it may have had more success in imposing limitations on the plaintiffs’ proposed search terms. That’s one way to have demonstrated the overbreadth. Instead, Amazon had to review all the documents to provide that information, which is a really costly alternative.

Get into the data early. When you’re looking at proposed search term strings, particularly in a class action, run them against your data and be able to provide some iterative alternatives. On Case of the Week we’re only looking at what’s within the four corners of the decision, but in that October decision, the Court said that Amazon didn’t provide any proposed iterations that would have narrowed the search term set.

Search terms are hard. There’s no question that search terms are very difficult, and when plaintiffs are proposing search terms to defendants with no knowledge of the data, they’re absolutely going to be overbroad. Here it looks like that was definitely the case.

What is interesting in this case is that there is no discussion of precision in the TAR process. Precision is a measure of how often an algorithm accurately predicts a document to be responsive; in other words, what percentage of the produced documents are actually responsive. A low precision score tells us that many documents were produced that were not actually responsive, which is potentially an indication of over-delivery. Precision was not discussed at all here. Roughly 2,500 documents out of a 2-million-document set is probably one of the lowest numbers I’ve ever seen.
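By analogy to recall above (again my formulation, not the Court’s), precision is the complementary ratio:

```python
def precision(true_responsive: int, false_responsive: int) -> float:
    """Fraction of the documents produced that are actually responsive."""
    return true_responsive / (true_responsive + false_responsive)

# Hypothetical illustration only; this split is not in the decision.
# If 2,400 of the 2,564 produced documents were truly responsive:
print(f"{precision(2_400, 164):.1%}")  # 93.6%
```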

As we’ve discussed multiple times, your ESI Protocol and the language you put in it is going to control. If you agree in your protocol to discuss the use of TAR, you have to discuss it. There was nothing in this protocol, at least in the language that’s cited by the Court, that says that TAR could not be used following the application of search terms that the parties agreed upon. The plaintiffs refused to discuss it here, despite the plain language of the ESI Protocol, and the Court didn’t like it.

As Judge Lasnik points out, the Western District of Washington has already endorsed the use of TAR in its model order. Know that and be prepared for it. If you’re an eDiscovery Assistant user, you know that we have an entire rules database of all the federal district court and state rules across the country, and they’re easily accessible. The Western District of Washington Model ESI Order is included in there. You’ve got those at your fingertips if you want to stay on top of what’s happening in a jurisdiction.

In addition to being knowledgeable and prepared for the TAR discussion, have the discussion and figure out how TAR can be used effectively. In many instances, using TAR is beneficial to both parties. It allows you to get documents faster, which lets you leverage them, prepare for depositions more effectively, and follow up on those documents to see what additional requests for production you need to propound. There are no disadvantages to using TAR if there is transparency in the process. And the Court here, citing case law, does discuss that transparency in the TAR process is critical, and that could have been achieved here.

Not all courts are ordering TAR, but the distinctions in the case law really depend on the perspective of the judge you are in front of and the jurisdiction you are in, as well as the language of your ESI protocol, so be prepared to deal with TAR. Refusing to cooperate will never put you in a good light with a judge on any ESI issue, but particularly with TAR, as is demonstrated here.

Overall, it really doesn’t sound like Amazon was able to capture savings from using TAR, since it ended up doing a manual review of 89% of the documents anyway. That’s a consideration, and one that we need to handle more effectively in these decisions in front of the courts. Was it really necessary for Amazon to do that? I think that additional evidence from Amazon on the responsiveness of those search terms, back during the October argument, might have given the court additional grounds to allow Amazon to rely on its TAR analysis rather than having to do so much manual review.

Finally, if you’re following our conversations about Case of the Week on LinkedIn, Dave Lewis did a great statistical analysis breaking down why plaintiffs’ claim here didn’t really ring true. It’s interesting.

Conclusion

That’s our Case of the Week for this week. Thank you so much for joining me. We’re back again next week with another decision from our eDiscovery Assistant database. As always, if you have a suggestion for a case to be covered on Case of the Week, drop me a line. If you’d like to receive the Case of the Week delivered directly to your inbox via our weekly newsletter, you can sign up on our blog. If you’re interested in doing a free trial of our case law and resource database, you can sign up to get started.

Thanks so much, and have a great week.


