The University of Baltimore School of Law

“Google Eyes and Big Brother Lies: Data, Privacy and Algorithmic Justice” Webinar Recap

Headshots of each panelist for the March 9 webinar.

On March 9, UB Law in Focus held a live webinar entitled “Google Eyes and Big Brother Lies: Data, Privacy and Algorithmic Justice” as part of an ongoing discussion series. The webinar was moderated by the LDDC’s own Professor Starger and included a panel of experts in technology and law: Professor Michele Gilman of UB Law’s Civil Advocacy Clinic; Mutale Nkonde, CEO of AI for the People; and Matthew Stubenberg, Associate Director of Legal Technology at Harvard Law School’s Access to Justice Lab.

A conversation on big data and algorithmic justice couldn’t have come at a better time. The issue has grown only more pressing in recent months, with social media platforms conducting “selective bans,” debates over policing reform, and questions of access to technology during a global pandemic. Technology is an ever-increasing constant in our daily lives, and it comes with significant costs. As the panelists discussed in depth, our digital identities and the algorithms that process them now act as gatekeepers over where we can live, whether we will be arrested, how we are sentenced, our immigration status, and more. We rely heavily on technology, and in turn technology watches us. The panel explored the implications of this dynamic and examined how algorithms and technology are not “agnostic,” as many people assume, but can further harm marginalized communities.

A troubling implication of technology in the criminal sphere is that surveillance and other advanced technologies, such as biometrics (facial recognition, gait recognition, etc.), are serving as a means to give police more probable cause for arrest. Ms. Nkonde provided an alarming statistic: facial recognition software has a 40% error rate. Misidentifications by facial recognition systems in criminal matters are common and have very real consequences. In a case Ms. Nkonde discussed, an African American man was misidentified based on an image captured in a New Jersey retail store, was arrested, and was held in jail for 10 days because he couldn’t make bail. Though technology can be a useful tool for exoneration, it is also enabling greater criminalization of marginalized communities and entangling them in multiple systems at once.

A particularly interesting question posed to the panel was whether it is easier to change bias in algorithms than in our legal structure. The ensuing discussion reached to the heart of the matter: open data is a nuanced issue and a double-edged sword. On one hand, sentencing algorithms can provide structure and a formula that prevents individual judges from going rogue and handing out overly harsh sentences. On the other hand, algorithms are made by humans with inherent biases and are trained on historical data from a biased system, and the algorithm picks up those biases. Some panelists expressed interest in greater government regulation and oversight of data and privacy, while others raised concerns about ill-informed legislators exercising control over something they understand poorly.

Ms. Nkonde suggested that we are now all “technical citizens,” and the panelists closed on a shared point: algorithmic justice depends on studying technologies with a close eye on equity, to ensure they are used to empower people rather than further harm marginalized communities. The discussion provided much food for thought, especially about what algorithmic justice actually looks like and what it means for us as lawyers and “technical citizens.”
