AI and ME2 – Machines and Expertise in Everything

December 05, 2019

Starting in 2014 with the introduction of Watson from IBM, AI has become a significant topic for lawyers. A recent ABA Journal article quoted one lawyer as saying “Once we have fully artificial intelligence enhanced programs like LegalZoom, there will be no need for lawyers, aside from the highly specialized and expensive large law firm variety.”

Given lawyers’ natural skepticism and frequent resistance to new technologies, is this just #PeakHype?

The leading legal technologist and futurist, UK-based Richard Susskind, first worked on AI systems in the late 1980s and has been writing about AI for the last 30 years. Dan Katz, of Chicago-Kent Law School in Illinois, has recently emerged as the leading legal technologist in the US and has differentiated between “rules-based AI” and “data-driven AI.” So while the attention to AI may be new, the development has been a long time coming.

So what will AI mean for lawyers when all is said and done?

AI will be a feature. Or as we say at Elevate, ME2, which stands for Machines and Expertise in Everything. Our ME2 (“me too”) initiative is focused on weaving AI into legal work – not to replace human lawyers, but to augment them, as other kinds of software have done for decades.

Two decades ago, Scott McNealy of Sun predicted that computing would become a utility, and today we have sophisticated web services from companies like Amazon and IBM. In the same way, AI functionality will become a feature or utility for most aspects of legal work.

For some lawyers or legal service providers, AI will be a feature that enhances their competitive position and professional satisfaction; for other lawyers or legal service providers, AI will be a feature that diminishes their competitive position and professional satisfaction.

To understand why, let’s explore three questions:

  • What is AI and how does it relate to lawyers’ work?
  • How do new technologies like AI typically get introduced in established fields?
  • What is a prudent approach for lawyers in thinking about how to adopt AI?

1. What is AI and how does it relate to lawyers’ work?

The term Artificial Intelligence was popularized at MIT in the 1950s and 1960s to describe how a computer might mimic the functions of the human brain, but as both our understanding of human cognition and the development of AI have advanced, the definition has changed regularly. As Kurt Keutzer, a former colleague and now Professor of Computer Science at UC Berkeley, once said, “it’s only AI when you don’t know how it works; once you know how it works, it’s just software.”

Today, when people refer to AI, they most typically mean Machine Learning (“ML”): the ability of a computer to look at large amounts of data and see patterns in it, like whether someone with a particular credit profile is more likely to default on credit card debts. The most powerful AI applications are those embedded in large-scale consumer offerings, like Waze for advising on traffic options, Siri and Alexa for voice recognition, or Amazon for making book recommendations. Per Dan Katz, none of these are based primarily on “rules” that are codified into a form of computer intelligence; rather, they are data-driven, discerning the patterns of previous decisions to predict or advise on future decisions. The most compelling example is self-driving car technology, which is not based on getting driving experts in a room and codifying all the rules of driving (e.g., “if on a three-lane interstate going faster than 72 miles per hour, it is OK to pass in the right lane if the middle lane has six or more visible cars in front of you, provided that there is (i) no merge, (ii) no car on the shoulder, and (iii) no large exit less than 2 miles ahead”), but rather on accumulating hundreds of millions of hours of footage of actual driving circumstances and decisions, and then predicting or advising on driving decisions based on those patterns. The machine enables Human X to do what Humans A, B, and C have collectively already done.
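To make the “patterns, not rules” point concrete, here is a minimal sketch in Python of the credit-default example above. It is an illustration only: the numbers are invented and the widely used scikit-learn library is assumed to be available. No rule about creditworthiness is written down anywhere; the model infers a pattern from past outcomes and applies it to a new applicant.

    # Hypothetical sketch: predicting credit card default from past outcomes.
    # Assumes scikit-learn is installed; all data below is invented.
    from sklearn.linear_model import LogisticRegression

    # Each row describes a past borrower: [credit utilization, late payments, years of history]
    past_profiles = [
        [0.95, 4, 1],
        [0.20, 0, 12],
        [0.80, 2, 3],
        [0.10, 0, 20],
        [0.70, 3, 2],
        [0.30, 1, 8],
    ]
    defaulted = [1, 0, 1, 0, 1, 0]  # observed outcomes for those borrowers

    # No rules about creditworthiness are coded; the model infers the pattern from the data.
    model = LogisticRegression()
    model.fit(past_profiles, defaulted)

    # Apply the learned pattern to a new applicant.
    new_applicant = [[0.85, 3, 2]]
    print(model.predict_proba(new_applicant)[0][1])  # estimated probability of default

The same shape (past examples in, learned pattern out, prediction on a new case) underlies the consumer applications above and the legal applications discussed below.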

For our purposes, we should probably think of AI for lawyers as:

  • The aggregation of large amounts of data about lawyer decision-making, or data that feeds into lawyer decision-making;
  • A machine that discerns the patterns in those previous decisions; and
  • A recommendation or prediction that the machine makes based on those patterns.

Since discerning patterns is something lawyers have always done, the advent of AI can both strengthen and supplant different kinds of lawyer work. To think about how lawyers are likely to apply AI, it is useful to distinguish between subjective and objective legal work.

Historically law schools suggested, based on the Langdellian method, that all legal reasoning was “objective,” but that notion has long since been discarded. So, in truth, most legal reasoning is subjective, i.e., one person may think Roe v. Wade was correctly decided, and another may think it was wrongly decided. Both can marshal internally consistent legal arguments or cite lots of precedents, but in the end, both views are subjective. So AI can help you construct arguments for or against Roe, or make predictions on whether a particular judge will be more persuaded by one argument or another, but cannot tell you which choice is “correct.”

Conversely, if a due diligence exercise requires me to review 1,200 contracts and determine whether a change of control provision operates in this particular deal, that is an objective exercise, even though some of the language may be ambiguous or arguable. And as I classify different language as a ‘yea’ or ‘nay’ on the change of control question, the machine can get smarter each time it sees a new variant of that language.
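As an illustration of that feedback loop (a sketch only, not a description of any particular product), here is a minimal Python example using the scikit-learn library and invented clause language. The reviewer’s ‘yea’/‘nay’ classifications become training labels, and each corrected variant improves the next prediction.

    # Hypothetical sketch: reviewer-in-the-loop classification of change of control clauses.
    # Assumes scikit-learn is installed; the clauses and labels below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Clauses a lawyer has already reviewed, labeled 1 if an operative change of control provision.
    clauses = [
        "Upon a change of control, the counterparty may terminate this agreement.",
        "This agreement may be assigned without the consent of the other party.",
        "Any merger or sale of substantially all assets requires prior written consent.",
        "Either party may terminate for convenience on thirty days' notice.",
    ]
    labels = [1, 0, 1, 0]

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(clauses, labels)

    # Score an unreviewed clause from the remaining contracts and flag it for the lawyer.
    candidate = ["A change in ownership of more than 50% of the voting shares permits termination."]
    print(classifier.predict(candidate))

    # When the lawyer confirms or corrects the flag, the clause and its label are appended to
    # the training set and the model is refit, so each pass over the contracts gets smarter.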

2. How do new technologies like AI typically get introduced to established fields?

Until recently, AI-style technologies were often introduced as a substitute for expert reasoners, such as expert-system diagnostic tools in medicine. But Expert Substitution applications have rarely succeeded, because they rarely outperform established experts initially, and they tend to be resisted by experts and misused by non-experts. Expert Substitution is particularly problematic in law because:

  • The role of a lawyer is protected by regulation, including attorney-client privilege;
  • Most legal judgments are subjective, so there is no way to “test” whether the machine judgment was correct; and
  • Clients typically don’t just want an answer, they want the ability to “rely” on the expert judgment in varying respects.

But that doesn’t mean that AI has no place in law; it just means we should look primarily to Expert Augmentation, especially in those areas where lawyers are grappling with large amounts of data, which itself might have been created by, or is best understood by, machines. Some examples where we see AI getting traction today include:

  • The aforementioned contract analysis or due diligence;
  • Analyzing legal bills; and
  • Various aspects of e-discovery.

So the best way to use AI initially will typically be to supplement or augment a large project or a repetitive style of work. To go back to our general examples around the introduction of AI, new AI technologies succeed most often when they are embedded in an overall services offering (like Waze and the others mentioned above), not when they seek to supplant or displace an expert. In other words, ME2.

3. What is a prudent approach for lawyers in thinking about how to adopt AI?

(Specifically, what should lawyers do about AI over the next year or two?)

As noted above, lawyers tend to be skeptical about the introduction of new forms of technology. One important consequence of this skepticism is that lawyers can miss out on the opportunity to learn from using a new technology, not just to learn about it. What do we mean by this?

Anytime a new tool or method is introduced into a field, it forces practitioners to evaluate how they do work and how they assess work. Pilots don’t fly a jet plane the same way they flew propeller planes, and air-to-air combat strategies don’t stay constant either. So when introducing AI into their work, lawyers have a valuable opportunity to ask some important, fundamental questions:

  • How is the way we’re doing work aligned with what really matters to the client and other stakeholders?
  • If we use AI, what aspects of the work can the AI improve, and what does it risk making worse? How do we measure that?
  • How can we learn from every stage of our use of a new tool to continue to improve our work?

Once we conclude that AI is a learning opportunity and not a practice risk, then the logical next step is to look for projects where we can introduce AI to augment the work of existing experts and accelerate the learning of non-experts:

  • What types of projects are other people using AI on?
  • What types of projects or matters are most important to our firm or legal department that we should be sure to improve on by using AI?
  • How can we track the progress/evolution of AI to make sure we are learning and improving?
  • On what types of projects or matters can we build cumulative expertise and competitive advantage by using AI?
  • As AI becomes more generally available (e.g., “smart-searching”), which areas of practice, traditionally viewed as requiring more sophisticated lawyers, could actually be handled by less sophisticated lawyers?

For law firms, the primary impediments to embarking on AI projects may be (i) the segmentation of their client data, (ii) difficulty in funding activities that cost money in the current year but do not yield benefits until future years, and (iii) a lack of the specialized personnel to run such projects. For legal departments, the primary impediment may be a lack of specialized expertise or internal technology support. In both cases, Elevate can augment the resources of the law firm or client and help ensure a successful project, especially in projects that combine the efforts of firms and clients.

Conclusion

In an upcoming post, we’ll discuss a specific example of using AI and the benefits for all the stakeholders. For now, I’ll just encourage you to look for examples of ME2 – Machines and Expertise in Everything – like Siri, Amazon, Waze, and others, and start thinking about where ME2 could help the profession.

And don’t worry about AI replacing lawyers – just think about how you can make sure that you’re one of the lawyers for whom AI enhances your competitive position and professional satisfaction.
