Smart Law for Intelligent Machines

Artificial intelligence is changing how we live and work — and it’s changing the practice of law. 

“It is being applied so broadly. It is removing decisions from humans and putting them into the software. We don’t understand much of the legal implications around it just yet,” says Henry Greenidge, J.D. ’10, until recently an executive in the autonomous vehicle industry.

We do know some things already. Privacy issues come to the fore in a data-driven world. Liability law changes when robots make decisions. Automation impacts labor law. The legal profession is changing, too, as AI algorithms replace human labor in the vetting of contracts and other labor-intensive tasks.

To be effective advocates, tomorrow’s lawyers will need some understanding of how these emerging technologies operate. How do machines make decisions? How does bias creep into the system? “We have an ethical obligation under our rules of professional conduct to be familiar with the tools that we are using,” says James Denvil, J.D. ’12. 

The point is more than merely theoretical. Though still in its infancy, AI is already raising real-world questions that existing jurisprudence does not address.

Michele Gilman

AI in Action

UB Venable Professor of Law Michele Gilman has witnessed this phenomenon firsthand. As director of the Saul Ewing Civil Advocacy Clinic, she saw an elderly client’s medical benefits get cut when the state began using an AI algorithm to evaluate eligibility.

“When we got before a judge, the witness for the state could not explain what factors went into the algorithm, how they are weighed, what outcomes the algorithm was programmed to find,” she says.
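To make concrete what “factors” and “weights” mean in a case like this, here is a minimal sketch, in Python, of the kind of weighted scoring model such eligibility systems often use. Every factor name, weight, and cutoff below is invented for illustration; as Gilman notes, the actual state system’s internals were never disclosed.

```python
# A hypothetical eligibility score: weighted factors summed and
# compared to a cutoff. All names and numbers here are invented.
WEIGHTS = {
    "hours_of_care_needed": 0.5,
    "mobility_limitation": 0.3,
    "lives_alone": 0.2,
}
CUTOFF = 6.0

def eligible(client: dict) -> bool:
    """Return True if the client's weighted score clears the cutoff."""
    score = sum(weight * client[factor] for factor, weight in WEIGHTS.items())
    return score >= CUTOFF

# This client scores 5.8 against a cutoff of 6.0 and loses benefits --
# and without the weights and cutoff in hand, no witness can say why.
print(eligible({"hours_of_care_needed": 10, "mobility_limitation": 2, "lives_alone": 1}))
```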

Greenidge has seen it, too. While serving as public affairs director for the autonomous-driving company Cruise Automation, he rode through New York in a prototype driverless car and came upon a construction site where a flagman was directing cars across the double-yellow line to avoid the workers. What’s an AI-driven car to do?

“The cars have to assess that situation and they may have to break the law — to go through a red light, to cross a double yellow line,” Greenidge says. “We as drivers do that all the time, but it’s different when you are asking the car to make those decisions.”

These examples demonstrate the breadth and complexity of the legal issues that may arise as new technologies come into play.

Privacy, for example, will be a front-line battleground in the coming years. Data lies at the heart of most emerging technologies, and yet the law is often far from clear on how that data may be collected and utilized.

“We are seeing increasing regulation and legislation around personal data. That’s a trend that will continue,” Gilman says. “Personal data has been the Wild West for a long time, in terms of how it is collected and used by corporations and government, without individuals consenting to the use of that data. Now that tide is starting to shift.”

Consider again the driverless car, in many ways the epitome of the emerging AI vision. It needs sensors and cameras in order to navigate safely.

“Now suppose the car sees a child and uses facial recognition to identify that child as someone on a missing-persons list,” says Denvil, a senior associate in the privacy and cybersecurity group at Hogan Lovells. A human in this situation would call the police, but what should a car do? “Do we want these tools to become government-surveillance devices? Now all those ‘slippery-slope’ arguments you heard in a moral philosophy class are not just theoretical,” he says.

Then there are issues of liability. Fundamentally, who is at fault when AI is in charge?

“If I am in my autonomous vehicle and an accident happens, new issues arise,” Denvil says. “Who is responsible for that at the end of the day, and how do we allocate responsibility when an autonomous system crashes? Is it the owner, the manufacturer, the person who developed the code?”

The very uncertainty inherent in those questions suggests a massive shift may be pending. “One of the ways we manage risk is by allocating liability and having some certainty around that,” Denvil says. Today that certainty is lacking.

While privacy and liability rank high, emerging machine-driven technologies raise a host of other legal concerns that are no less pressing.

Gilman points to the disproportionate impact these tools may have on minority communities, and the consequences for poverty law. “People are being digitally profiled, then sold to marketers and insurers and other industries,” she says. “The consequences of that can be more harsh for low-income people. These profiles are the digital gatekeepers for whether you get a job, whether you get into college, whether you can buy a car. That can be self-reinforcing, trapping people in a digital loop of hardship.”

Criminal law already is seeing the impacts. “There is a lot of algorithmic decision-making around whether someone is likely to reoffend, whether someone should be granted bail,” says UB Law Professor Colin Starger, who is associate director of the Center for the Law of Intellectual Property and Technology (CLIPT). “There are a lot of discussions about whether these algorithms have biases baked in that will just exacerbate inequalities.”
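A toy example of how such bias can get “baked in”: if a risk model is trained on arrest records, it learns the patterns of past enforcement rather than of actual reoffending. The sketch below, in Python with entirely invented data, shows a naive model assigning higher risk to the more heavily policed group.

```python
# Invented training data: (policing level in the person's neighborhood,
# whether they were rearrested). Heavier policing produces more arrests,
# so the label itself encodes enforcement patterns, not behavior.
history = [
    ("heavy", 1), ("heavy", 1), ("heavy", 0), ("heavy", 1),
    ("light", 0), ("light", 1), ("light", 0), ("light", 0),
]

def risk_score(policing_level: str) -> float:
    """Group base rate of the label -- what a naive model learns."""
    labels = [label for level, label in history if level == policing_level]
    return sum(labels) / len(labels)

print(risk_score("heavy"))  # 0.75: more enforcement, "higher risk"
print(risk_score("light"))  # 0.25: similar behavior can score far lower
```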

James Denvil

Legal Practice

Even as machine intelligence promises to reshape our understanding of the law, the same digital tools are reshaping legal practice.

In the most immediate sense, AI promises to lessen the grunt work, automating a range of routine tasks and freeing human labor for higher and better pursuits. “A lot of the commodified tasks that lawyers have done — basic contracts, simple wills, anything that is fairly standard — those can be automated,” Denvil says.

But that same evolution comes with a caveat. “I have an ethical obligation to know how this tool is being used,” he continues. “Can unauthorized people steal the information I am processing? If I am using a contract-drafting tool, does it impose biases that I don’t want in my work product?”

Even those eager to see how automation might improve the workflow in law offices raise questions about the practical details.

Starger, for instance, sees computer-assisted lawyering as analogous to computer-aided design, or CAD, in the architectural world: machines can do some things faster and more efficiently. The details get tricky, though. A lawyer may know a rule as fact: we have so many days to file. But applying that rule takes judgment. What was the true start date?

His point is that not everything in the law can be reduced to a formula.
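Starger’s filing-deadline example can be sketched in a few lines of Python (the dates and deadline length below are invented): the arithmetic is trivially automatable, but the input that drives it is a judgment call.

```python
from datetime import date, timedelta

def filing_deadline(trigger_date: date, days_allowed: int) -> date:
    """The part a machine does perfectly: add N days to a date."""
    return trigger_date + timedelta(days=days_allowed)

# The part that takes a lawyer: which date starts the clock?
# Signed, docketed, and served can all differ -- and so do the deadlines.
for label, start in [("signed", date(2024, 3, 1)),
                     ("docketed", date(2024, 3, 4)),
                     ("served", date(2024, 3, 8))]:
    print(label, filing_deadline(start, 30))
```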

From a bottom-line point of view, automation in legal practice could raise profound questions for those who make their living doing relatively routine work. There’s a very real possibility that demand for such services may diminish. “The challenge becomes: Where can I as an attorney add value? I need to leverage these automated tools, without driving myself out of business,” Denvil says.

Specific questions also swirl around patent law. What happens when a company uses AI to help develop a new process or a piece of software?

“A person didn’t quite ‘invent’ what the AI came up with, but a person came up with the algorithm that the AI used,” says Nick Mattingly, J.D. ’12, a patent attorney at Mattingly & Malur. “Who is the true inventor? Is it the person who came up with the algorithm, or the person who came up with the data to train the algorithm, or the person who applied the algorithm?”

AI already is being leveraged to make patent searches easier and cheaper, but even here, the picture is hardly clear. “In principle, you just put in some keywords and the algorithm spits it out, and because it’s AI it gets better and better every time,” Mattingly says. That’s the theory, at least. In practice, we’re not quite there. “A human is still so much better,” he says. “The technology is not yet meeting the level of what a seasoned patent searcher can do.”

JAMES DENVIL, J.D. ’12

Senior associate, privacy and cybersecurity group, Hogan Lovells

How did you get interested in AI?

“First, I had developed a general interest in artificial intelligence, partly through science fiction. Then, in the late 1990s, I was at the University of Arizona studying for an advanced degree in philosophy. I became interested in the work of one philosopher there who was trying to develop an artificial intelligence in order to develop insights into how humans think. That was my initial introduction.”

What makes it interesting?

“The core of the challenge — culturally, legally, philosophically — is that we are building things that are both extraordinarily familiar and absolutely strange and unknown. Every company has an HR department trying to find the right resources, trying to support people in doing certain tasks and making corrections when things go wrong. How do we import the controls that we’ve developed over how we manage people, and apply them to machines?

“How do I know that this AI is good for the job? How can I help it to do the job better and address it when things go wrong? It’s the same questions we’ve asked about people, just on a larger scale and in a different language.”

Advice for others?

“I would encourage people who want to get involved in the legal issues surrounding AI to read as much as they can on the technological side, as much as they can dig into. Really get familiar with what the technology is doing, and also read all the material from the AI skeptics, the people who have grave concerns about all of this. This is a really unfamiliar landscape, and you need a broad set of knowledge and some critical-thinking tools if you are going to help others to manage the risk.”

Colin Starger

Facing the Future

UB Law is taking steps to ensure the next generation of lawyers is ready to face the complexities of this rapidly changing landscape. Faculty are tackling the tough questions, and adapting course materials to meet the challenges. Earlier this year, for example, Starger launched a new clinic, Legal Data & Design, to train students in how to use data in the practice of law.

“This is an area of the curriculum where we need to provide students with solid grounding. There are a lot of professional opportunities, and it’s going to be a growing area in the private bar. It’s good that UB Law is ahead of the curve,” Gilman says, noting course offerings such as Cyberspace Law and a seminar in Privacy and the Law.

She points to efforts aimed at helping students to establish not just a firm legal foundation, but also a solid technological understanding.

“There are ‘value’ questions that computer programmers shouldn’t be resolving on their own,” she says. “We need attorneys who understand how data sets are cleaned and trained. We are seeing more classes in the law school where students use computer applications and data tools to benefit clients from a legal perspective. And we will see more of that, as the technical and legal sides work together to drive improvements.”

In fact, tomorrow’s lawyers will have not just a professional duty but also an ethical obligation to develop at least a basic fluency in the emerging technologies.

“Technological literacy is a requirement,” Starger says. “You don’t have to be the first adopter, but you do have to recognize that this is happening, and you have to be involved with people who understand it and can talk about it.”

Ultimately, attorneys need to embrace the emerging technologies as a means to an end. Despite the challenges and potential pitfalls, automation and machine-driven intelligence can help lawyers to deliver a better end result for their clients and for society as a whole.

“Maybe you are passionate about making the tax code better, or you’re passionate about making family law better,” Denvil says. “We’re not just there to manage tasks. We are part of the government in certain ways. As officers of the court it is our job not just to enforce the law, but to guide it and make it better. We’re never going to automate that.” 

Henry Greenidge

HENRY GREENIDGE, J.D. ’10

Fellow-In-Residence at NYU McSilver Institute for Poverty Policy and Research; former regional director for public affairs at Cruise Automation, a division of General Motors

How did you get interested in AI?

“UB Law was instrumental in getting me to the FCC, where we dealt with new technology, talking about broadband and access issues. It was all about the next generation of technology. Then I left the FCC and went to the Department of Transportation, where I also focused on new technology.

“While I was there, the Google self-driving car project brought in one of their cars. It was really impressive to see what they were able to do. It looked like the future to me. When Cruise Automation launched a driverless pilot program in New York, they hired me, and I’ve been involved with this ever since.”

What makes it interesting?

“This will fundamentally change the way we live and work. This is going to change life as we know it. There are so many benefits to the technology, but there are also costs. I want to be at the table as we talk about how this could be difficult for some people.

“We have to talk about what it will do to jobs and the workforce. As a person of color, I am interested in how this impacts people of color, who could be disproportionately impacted by this technology. We have to talk about things like equity and access.”

Advice for others?

“In law school you need to cover privacy, intellectual property, cybersecurity — anything you can cover while in school. Then you have to recognize that this is an ever-evolving field. You’ll never know everything, just because it is changing so rapidly. You have to be able to look at a problem and figure out ways to solve it.

“In many cases, you will be covering new ground. There isn’t a lot of case law to help you with that. But that’s really the essence of being a lawyer.”

Adam Stone is a writer based in Baltimore.
