Dancing with the Stars of eDiscovery:
Case Studies in eDiscovery Powered by Analytics
Legaltech NY 2016 – February 4, 2016
Sponsored by Content Analyst Company
Scheduled panelists included: Ari Kaplan, Jacob Cross, Iram Arras, Michelle Drucker, Drew Lewis, Hunter McMahon, Mike Schubert, Alison Silverstein and Mark G. Walker
Not surprisingly, given that it was Legaltech, the case studies presented by this panel all noted that technology-assisted review (TAR) was critical to both cutting costs and increasing efficiency, but they made a good case, with some very big numbers being thrown around. In one project, linear review of the 450,000 documents that TAR classified as not relevant would have cost $1.4M; as it was, the team spent $90,000 reviewing documents that TAR would have excluded. Citing a case involving 2M documents, in which the team made a first production in 10 days and produced all responsive documents within 6 weeks, the panelist from kCura (maker of Relativity) put it this way: “cost savings and speed are givens now.”
But another point was made about something I consider far more important: selectivity. A participant described a project in which documents were reviewed at a rate of 200 – 250 per day per reviewer, which he said was “not bad,” although it involved so much “hand to hand combat” with the data that it became too labor intensive: the reviewing attorneys overwhelmed the supervising attorneys with questions and close calls on the relevance of minor documents. Using TAR, he argued, would not only save the client time and money but also let the litigation attorneys find and concentrate on the KEY DOCUMENTS – the only ones that are going to make a difference in the case. Yes, finding the responsive documents is important, but finding the key documents is critical.
Another participant noted that if you don’t use analytics, the recipient of your production is increasingly likely to, “and will find things you don’t.” A very persuasive example was given in a case thought to involve bribery. As was pointed out, no one writes in an email, “I’m going to bribe the authorities so that we can obtain a monopoly and rig prices,” so a tough search was anticipated. But after someone thought to teach the machine the concept of a bribe, the machine came back with a fistful of birthday notices. Given how common birthday chatter is in office culture, a simple keyword search might well have culled them all out, but it turned out “birthday” was the code word for the illegal conduct under investigation (as one panelist put it, “we are past the point of throwing search terms against the wall to see what sticks”).
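To make the contrast concrete, here is a minimal sketch, in Python with scikit-learn, of the difference between a literal keyword search and one common concept-search technique, latent semantic analysis. The tiny corpus, the seed text, and all the names below are invented for illustration; a real matter would involve a trained engine and a far larger collection, and this is not the panel's actual tooling.

```python
# Minimal sketch: keyword search vs. latent-concept (LSA-style) search.
# The four-message corpus and the seed text are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Happy birthday! The payment for the exclusive license cleared with the officials.",
    "Quarterly sales report attached for your review.",
    "Lunch meeting moved to Friday at noon.",
    "Reminder: submit travel expenses by end of month.",
]
seed = "payment to officials in exchange for an exclusive license"

# Keyword search: finds nothing, because no document uses the word "bribe".
keyword_hits = [d for d in docs if "bribe" in d.lower()]

# Concept search: project the documents and the seed into a low-rank
# latent space, then rank by similarity to the seed. Documents sharing
# the seed's context terms score high even with no agreed keyword.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs + [seed])
latent = TruncatedSVD(n_components=2).fit_transform(tfidf)
scores = cosine_similarity(latent[-1:], latent[:-1])[0]

for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc[:60]}")
```

Run on this toy corpus, the keyword search returns nothing while the concept ranking puts the “birthday” message first, which is the point of the anecdote: the coded email surfaces because of the company it keeps, not the words it uses.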
With concept clustering, it is possible to handcraft examples of the documents you want to find in your data, as well as examples of documents you hope you won’t find, feed them to the machine, and have it look for actual documents reflecting your hopes and fears, more quickly and accurately than linear review allows; a rough sketch of that exemplar-driven workflow follows below. At the very least, clustering can help identify not only the key custodians who should receive high priority, but also the people those custodians have been talking to (with or without the company’s knowledge) about the subject matter of the litigation. One of the concluding remarks, and a common theme, was that we are “woefully undermanned with people who understand the process,” and that it is these people who will get the business going forward.
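Here is that sketch of the exemplar-driven workflow, not any particular platform's method: train a simple text classifier on handcrafted positive and negative examples, then use it to rank the live collection. The exemplars, corpus, and variable names are all hypothetical.

```python
# Minimal sketch of exemplar-driven ranking: seed with handcrafted
# examples of what you hope to find (label 1) and hope not to find
# (label 0), then score the live collection. All text is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seeds = [
    ("payment routed through the consultant to win the tender", 1),
    ("gift for the minister before the license decision", 1),
    ("agenda for the audit committee offsite", 0),
    ("cafeteria menu for next week", 0),
]
corpus = [
    "wire the consultant his fee once the tender is awarded",
    "minutes from the audit committee meeting",
    "new cafeteria hours start Monday",
]

texts, labels = zip(*seeds)
vec = TfidfVectorizer().fit(list(texts) + corpus)  # shared vocabulary
clf = LogisticRegression().fit(vec.transform(texts), labels)

# Rank the collection by modeled relevance; reviewers start at the top.
scores = clf.predict_proba(vec.transform(corpus))[:, 1]
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")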
Time to get aboard, if you’re not already.
Richard Neidinger, J.D.