You Could Save 97% by Switching to Technology Assisted Review


With the tremendous growth of eDiscovery over the past few years, law firms now have access to vast amounts of data. However, most are still learning how best to analyze all of that data in a way that improves review rates and reduces costs. Some firms use techniques such as Keyword Search to identify “Privileged” or “Confidential” documents, but Technology Assisted Review (using software to search and categorize documents that are relevant for the purposes of eDiscovery) remains a buzzword for many firms.

For those just getting started with Technology Assisted Review (TAR), there are two key elements that can help you maximize its value: work with a Subject Matter Expert (SME) who knows both the project details and TAR, and follow an efficient workflow, such as the one visualized below.

Let’s use an example. An attorney at Van Ness Feldman LLP had to review a collection of 11,881 documents in a week, by himself, while already tied up with other tasks. The attorney served as the SME, and with LightSpeed’s TAR workflow and expertise, he reviewed 100 documents in the training round and performed five rounds of validation at 100 documents per round. That’s it: 97.3% accuracy was achieved during the optional certification round, and the responsiveness of all 11,881 documents was determined in three hours.

Here is how it works:

As shown in the figure above, a TAR project includes three review phases between TAR Project Set Up and TAR Project Complete:

a) Training: The training set, a representative sample of the overall collection, is built from an Ipro Intelligence search on the pool of documents selected for the TAR project. This search finds the documents that best represent all of the concepts present in the TAR project.

Our SME assessed 200+ documents from this Intelligence search and tagged them with the TAR document tags (Responsive, Non-Responsive, and Do Not Use as Example).
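To make the tagging step concrete, here is a minimal sketch (not Ipro’s actual data model) of how those three TAR document tags might be represented, and how documents tagged “Do Not Use as Example” would be kept out of the training examples. The document IDs and the `training_examples` helper are hypothetical, used only for illustration.

```python
from enum import Enum

class TarTag(Enum):
    RESPONSIVE = "Responsive"
    NON_RESPONSIVE = "Non-Responsive"
    DO_NOT_USE = "Do Not Use as Example"

def training_examples(tagged_docs):
    """Keep only the documents that should teach the system.

    `tagged_docs` is a hypothetical mapping of document ID -> TarTag
    assigned by the SME during the training round.
    """
    return {
        doc_id: tag
        for doc_id, tag in tagged_docs.items()
        if tag is not TarTag.DO_NOT_USE
    }

# Example: three SME-tagged documents, one excluded from training
sme_tags = {"DOC-001": TarTag.RESPONSIVE,
            "DOC-002": TarTag.NON_RESPONSIVE,
            "DOC-003": TarTag.DO_NOT_USE}
print(training_examples(sme_tags))   # DOC-003 is left out
```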

b) Validation: After the SME evaluated the TAR project’s training-set documents, several random-sample sets of the remaining documents were batched and reviewed by the SME over multiple rounds to verify the system’s existing decisions and to add further examples of each category.

The number of validation rounds depends on how long it takes to teach Eclipse TAR. When the human expert (the SME in this case) agrees with the decisions the Eclipse TAR system makes about document responsiveness, it’s time to move on to the certification round.
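As a rough illustration of that iterative loop (not Ipro’s actual implementation), the sketch below assumes a hypothetical `categorize` callable standing in for the Eclipse TAR engine, a hypothetical `sme_review` callable standing in for the human expert, and an assumed agreement threshold for stopping:

```python
import random

AGREEMENT_TARGET = 0.95   # assumed stopping threshold, not an Ipro default
ROUND_SIZE = 100          # documents validated by the SME per round

def validation_rounds(unreviewed_docs, categorize, sme_review):
    """Run validation rounds until the SME agrees with the system's calls."""
    round_num = 0
    while unreviewed_docs:
        round_num += 1
        batch = random.sample(unreviewed_docs, min(ROUND_SIZE, len(unreviewed_docs)))
        agreements = 0
        for doc in batch:
            system_call = categorize(doc)   # system's existing decision
            sme_call = sme_review(doc)      # SME verifies or corrects it
            if system_call == sme_call:
                agreements += 1
            unreviewed_docs.remove(doc)
        agreement_rate = agreements / len(batch)
        print(f"Round {round_num}: agreement {agreement_rate:.1%}")
        if agreement_rate >= AGREEMENT_TARGET:
            break   # SME and system agree; move on to certification
    return round_num
```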

The graph below shows that as the SME validated documents in each round, TAR’s accuracy in identifying truly Responsive documents increased.

c) Certification: During this phase, a final review, such as a prioritization review, is performed on the remaining unreviewed documents. When the certification round is complete, TAR finishes categorizing the remaining documents.

In summary, TAR enabled Van Ness Feldman to complete the review of 11,881 documents in approximately seven hours. If a contract reviewer had conducted the same review at the average rates of 50 documents per hour and $50 per hour, the review would have taken 237 hours and cost $11,881. That’s a total savings of 230 hours and $11,500 with TAR!
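For readers who want to check the math, here is the back-of-the-envelope calculation behind those figures, using the 7-hour TAR review time and the average reviewer rates quoted above; it works out to roughly the 230 hours and $11,500 in savings cited in this post.

```python
docs = 11_881
docs_per_hour = 50       # average contract-reviewer throughput
rate_per_hour = 50       # average contract-reviewer billing rate, USD
tar_hours = 7            # approximate time the TAR-assisted review took

manual_hours = docs / docs_per_hour           # ~237.6 hours
manual_cost = manual_hours * rate_per_hour    # ~$11,881

hours_saved = manual_hours - tar_hours                   # ~230 hours
cost_saved = manual_cost - tar_hours * rate_per_hour     # ~$11,500

print(f"Manual review: {manual_hours:.1f} hours, ${manual_cost:,.0f}")
print(f"Savings with TAR: {hours_saved:.1f} hours, ${cost_saved:,.0f}")
```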

If you’re interested in learning more about how TAR can save you time and money during your next review process, contact us for a complimentary consultation.

Additional Detail for Data Junkies

If you’d like additional insight into how the TAR process learns over time, you’ll be interested in the following chart and definitions:

The table below shows how TAR performed as the SME validated 100 documents in each validation round.

| Round | Responsive – True | Non-Responsive – True | Responsive – False | Non-Responsive – False | Precision | Recall | F-Measure |
|-------|-------------------|-----------------------|--------------------|------------------------|-----------|--------|-----------|
| 1     | 17                | 62                    | 2                  | 19                     | 0.8947    | 0.4722 | 0.6182    |
| 2     | 18                | 66                    | 4                  | 11                     | 0.8182    | 0.6207 | 0.7059    |
| 3     | 28                | 68                    | 2                  | 2                      | 0.9333    | 0.9333 | 0.9333    |
| 4     | 28                | 65                    | 2                  | 5                      | 0.9333    | 0.8485 | 0.8889    |
| 5     | 36                | 60                    | 3                  | 1                      | 0.9231    | 0.9730 | 0.9474    |

Precision, Recall, and F-Measure are the measurable metrics for determining TAR accuracy. Ipro’s definitions for these terms are:

Precision: A measure of the accuracy of the identification of document responsiveness.

Recall: A measure of completeness that is based on the number of documents tagged as Responsive as compared to the total number of truly responsive documents.

F Measure: A method for factoring both precision and recall to indicate how well the TAR process has identified truly Responsive documents.
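As a quick sanity check, the values in the table can be reproduced from the counts using the standard precision, recall, and F-measure formulas, interpreting “Responsive – True” as true positives, “Responsive – False” as false positives, and “Non-Responsive – False” as false negatives. The `tar_metrics` helper below is illustrative only; Round 5 is shown as the example.

```python
def tar_metrics(resp_true, nonresp_true, resp_false, nonresp_false):
    """Compute precision, recall, and F-measure from the table's counts.

    resp_true     = documents correctly tagged Responsive (true positives)
    nonresp_true  = documents correctly tagged Non-Responsive (true negatives)
    resp_false    = documents wrongly tagged Responsive (false positives)
    nonresp_false = responsive documents the system missed (false negatives)
    """
    precision = resp_true / (resp_true + resp_false)
    recall = resp_true / (resp_true + nonresp_false)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Round 5 counts from the table above
p, r, f = tar_metrics(resp_true=36, nonresp_true=60, resp_false=3, nonresp_false=1)
print(f"Precision {p:.4f}, Recall {r:.4f}, F-Measure {f:.4f}")
# -> Precision 0.9231, Recall 0.9730, F-Measure 0.9474
```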
