In the Black Mirror

What Artificial Intelligence Means for Race, Art and the Apocalypse

Introduction

When you hear the words "Artificial Intelligence" or A.I., you might think of the latest "Black Mirror" episode or that re-run of "The Terminator" you watched when you couldn’t find anything better. Popular media does an amazing job creating elaborate storylines about how A.I. will take over the world (and we're not necessarily saying it won't). But the thing is, A.I. is already working all the time, on the very device you’re using this minute. Whenever your phone or computer thinks and performs like a human — when it predicts your tastes, recognizes your face, knows where you’re going — that’s Artificial Intelligence. And it’s everywhere.

To find out how A.I. is changing the world, for better and worse, we talked to four experts: the conceptual Internet artist Sam Lavigne, tech writer Alexis Madrigal, engineer/analyst Deb Raji, and data ethics advocate Rachel Thomas. They all see us becoming increasingly reliant on A.I. systems in ways that impact our autonomy, creativity, society and the very experiences and identities that make us human.

Section 1

Apocalypse

You’ve seen the scenario play out again and again in sci-fi: a brilliant geek builds a robot that can walk, talk, think and even feel like a human. It’s all very exciting until the bot gets smarter than its maker. Dystopia ensues. How close is this fantasy to real life?

We have good news and bad news. The good news is, we were able to ask Sam, Rachel and Deb this question for you. The bad news is, the problem isn’t the bot. It’s us. Find out what we’re getting wrong by worrying about an A.I. apocalypse in our future (hint: time to tune into the present and to take a hard look at ourselves).

Bio
Sam Lavigne is an artist and educator whose work deals with data, surveillance, cops, natural language processing and automation. He has exhibited work at Lincoln Center, SFMOMA, Pioneer Works, DIS, Ars Electronica, The New Museum and the Smithsonian American Art Museum. His work has been covered in the New Yorker, Washington Post, the Guardian, Motherboard, Wired, The Atlantic, Forbes, NPR, the San Francisco Chronicle, the World Almanac, the Ellen DeGeneres Show and elsewhere.
I think the apocalypse, in some way, has already happened. We're already at the service of automation, and I think the question that always, always, always comes up with automation is not if automation is good or bad, but who is the person or the group of people that are benefitting from automation.
Sam Lavigne
Bio
Rachel Thomas directs the University of San Francisco Center for Applied Data Ethics and co-founded fast.ai, which has been featured in The Economist, MIT Tech Review and Forbes. She was selected by Forbes as one of 20 Incredible Women in A.I., earned her math Ph.D. at Duke, and was an early engineer at Uber. Rachel is a popular writer and keynote speaker. In her TEDx talk, “Artificial Intelligence needs us all,” she shares what scares her about A.I., why it needs to be accessible, and why we need people from all backgrounds creating it.
I think that too many popular movies have done us a disservice in thinking about what the real risks are. And when I talk about A.I., I am not talking about humanoid robots.
Rachel Thomas
And in the United States, we often have these very murky public-corporate partnerships of private corporations selling technology to police or to government entities. One case that really worries me is in Baltimore. During the protests over Freddie Gray's death, Baltimore police used facial recognition to identify protesters. And I think that's a very, very chilling application.
Rachel Thomas
Bio
Deb Raji has worked closely with the Algorithmic Justice League initiative, founded by Joy Buolamwini of the MIT Media Lab, on several projects to highlight cases of bias in computer vision. Her first-author work with Joy has been featured in the New York Times, Washington Post, The Verge, VentureBeat, National Post, Engadget and Toronto Star, and won the Best Student Paper Award at the AAAI/ACM Conference on A.I., Ethics and Society.
There's some sci-fi that's interesting around A.I., you know: a super-sentient, intelligent robot taking over and destroying things. I think that is less likely to happen than us training a model that has a bug or flaw that ends up killing somebody. That's definitely the real-world dystopia.
Deb Raji
Young people are growing up online in this world where they're being fed a lot of information from their friends, from their family, from these websites. And it's A.I. that's controlling what they see.
Deb Raji

ICYMI: Your social feeds are using your data to teach machines to act like humans. Is that cool with you?

Section 2

Art

Whether it’s musical compositions, paintings, poetry or prose, we aren’t too sure how we feel about a machine creating works of art. And if we’re honest, some of us wonder if machine-made art is actually art at all or something completely different. We get that A.I. can be good at suggesting songs and making playlists, but when a machine starts to make the music itself, is the computer crossing a line?

Bio
Alexis Madrigal is a staff writer at The Atlantic. Previously, he’s been the editor-in-chief of Fusion, a staff writer at Wired, and affiliated with the University of California, Berkeley’s Center for the Study of Technology, Science, and Medicine and Harvard's Berkman Klein Center for Internet & Society. He curates the 5it newsletter, which covers emerging technologies and social dynamics through history that suggest the future will be as weird as the present. With Robinson Meyer and Jeff Hammerbacher, in March 2020 Madrigal established the COVID Tracking Project, which quickly became a go-to source for information on coronavirus testing from all 50 states.
There are all these kinds of intelligence: the intelligence in our bodies, the intelligence that's expressed when we dance or play sports, or when you have a conversation with somebody and you look at them and you go, "That person's probably thinking that." No machine can do that. And yet we humans all do that instantly, easily, perfectly.
Alexis Madrigal
I don't think we'll ever see a truly beautiful work of art made by a machine. I don't think we'll ever see such a thing as even a neutral or a just machine.
Sam Lavigne
We think about music and art and we invent things. And it's beautiful. It's awesome. And I think we have an appreciation for art and literature and poetry. And I think that's a very uniquely human thing.
Deb Raji
There are these interesting projects where the A.I. model will sort of pick the next note, so you can compose an entire jazz tune through an A.I. system. And I think it's just super fun, and it comes out with these tunes that are super nonsensical musically. A composer wouldn't think of setting things up that way. But you're like, "Hey, this is kind of a jam. I'm into this." So A.I.-generated music is definitely on the horizon.
Deb Raji
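
For the curious, here's what "pick the next note" can look like under the hood. This is a toy Python sketch, not any real product's method: it learns a simple Markov chain from one made-up melody, where the tools Deb describes use trained neural networks, but the note-by-note generation loop is the same idea.

import random

def learn_transitions(melody):
    # Count which note tends to follow which in the training melody.
    transitions = {}
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length=16):
    # Build a tune by repeatedly sampling a plausible next note.
    tune = [start]
    for _ in range(length - 1):
        options = transitions.get(tune[-1]) or list(transitions)
        tune.append(random.choice(options))
    return tune

# A made-up training melody; any note sequence works.
training_melody = ["C", "E", "G", "E", "C", "D", "F", "D", "G", "E", "C"]
model = learn_transitions(training_melody)
print(" ".join(generate(model, start="C")))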

Bots are out there composing poetry and writing songs. Do you believe that A.I. can make original works of art?

Section 3

Race + Bias

There are many things Artificial Intelligence is good for: helping doctors make diagnoses, reducing human error (shout-out to autocorrect...sort of), and simplifying everyday tasks with assistants like Siri or Alexa.

But automation comes at a cost, in part because A.I. does not work the same for everyone. Its performance varies based on race, gender and other factors. This doesn't just mean that if you're not white, you could have a hard time using facial recognition to unlock your phone (although you sure might). Faulty algorithms increasingly deployed by our public systems — like courts, schools and law enforcement — mean A.I. can make devastatingly biased decisions that deepen inequities and harm individuals and communities. Which is in part why it's so important that the people most affected by the technology are also the ones making it.

Non-white people actually don't show up as well in the facial recognition database, which also means that you've developed this technology that is biased in this very obvious way. The computer will literally not know anything that isn't in the dataset that's been presented to it. So that data becomes extremely important.
Alexis Madrigal
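
Alexis's point is checkable in a few lines of code. This hypothetical Python sketch audits who is represented in a training set before any model is trained; the group labels and counts are invented for illustration.

from collections import Counter

# Invented example: a face dataset that heavily overrepresents one group.
training_examples = (
    [{"group": "lighter-skinned"}] * 900 +
    [{"group": "darker-skinned"}] * 100
)

counts = Counter(example["group"] for example in training_examples)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} examples ({n / total:.0%} of training data)")
# A model trained on this split simply has less to learn from for the
# smaller group, which is one way the bias Alexis describes gets built in.
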
There was a study released where we evaluated the commercial facial recognition systems that were deployed. And we said, "How well does this system work for different intersectional demographics?" So, how well does it work for darker skinned women versus lighter skinned women, versus darker skinned men and lighter skinned men? And it turns out that there was a 30 percent performance gap between lighter skinned men and darker skinned men, which is insane. For reference, usually you don't deploy a system that's performing at less than 95 percent accuracy.
Deb Raji
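
The audit Deb is describing boils down to scoring a system separately for each subgroup instead of reporting one overall accuracy number. Here's a minimal Python sketch of that idea; the records are invented stand-ins, not data from the study.

from collections import defaultdict

# Invented examples of (subgroup, whether the system's prediction was correct).
results = [
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("lighter-skinned woman", True), ("lighter-skinned woman", False),
    ("darker-skinned man", True), ("darker-skinned man", False),
    ("darker-skinned woman", False), ("darker-skinned woman", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for subgroup, was_correct in results:
    totals[subgroup] += 1
    correct[subgroup] += was_correct  # True counts as 1, False as 0

for subgroup in totals:
    print(f"{subgroup}: {correct[subgroup] / totals[subgroup]:.0%} accuracy")
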
The way that these models perform actually changes based on who you are, and that's super problematic. You know, if you're a darker skinned person, you're actually more at risk of being misclassified. And for certain products that are used by law enforcement, that are used by immigration, that are used in military situations, it becomes a safety risk to be a darker skinned person because you're less likely to be classified properly.
Deb Raji
Another example of bias comes from some software that's used in many U.S. courtrooms. It gives people a rating of how likely they are to commit another crime. And it was found that this software has twice as high a false positive rate on black defendants compared to white defendants. So that means it was predicting that people were high risk even though they were not being rearrested. And so this is something that's really impacting people's lives because it was being used in sentencing decisions and bail decisions.
Rachel Thomas
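
"False positive rate" has a precise meaning here: of the defendants who were not rearrested, what fraction did the software still label high risk? This Python sketch shows the per-group comparison Rachel describes; the counts are invented for illustration, not the study's actual numbers.

def false_positive_rate(false_positives, true_negatives):
    # Among people who did NOT reoffend, the share wrongly labeled high risk.
    return false_positives / (false_positives + true_negatives)

# Invented counts of not-rearrested defendants:
# (labeled high risk, labeled low risk).
groups = {
    "black defendants": (45, 55),
    "white defendants": (23, 77),
}

for group, (fp, tn) in groups.items():
    print(f"{group}: false positive rate {false_positive_rate(fp, tn):.0%}")
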
I think all data is biased, and to me, some of the most promising work around this is proposals around different ways to make those biases clearer, to include information about how data was collected — who's included, who's not — so that people aren't blindsided by these biases.
Rachel Thomas
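
One concrete form this work takes is attaching a "datasheet" to every dataset, recording how it was collected and who is or isn't in it. The Python sketch below shows what such a record might contain; every field name and value here is hypothetical.

# A hypothetical datasheet shipped alongside a dataset.
datasheet = {
    "name": "example_faces_v1",
    "collected_by": "Example Lab",
    "collection_method": "scraped from public photo sites, 2018-2019",
    "demographics_included": {"lighter-skinned": 0.80, "darker-skinned": 0.20},
    "known_gaps": ["underrepresents darker-skinned faces", "adults only"],
    "intended_use": "research benchmarking, not law enforcement",
}

for field, value in datasheet.items():
    print(f"{field}: {value}")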

What is more racist?

Section 4

In Dialogue

with Deb Raji

To hear more about A.I. ethics, we went deeper into the topic with Deb Raji, whose research with colleagues at M.I.T. revealed major racial and gender bias in facial recognition systems used by tech giants. YR Media’s Ariel Tang wanted to understand from Deb what to do about the data we give away, sometimes even willingly, by using our own devices.

Ariel Tang

So I know I should, but I never read the terms of service when I make an account on an app. As someone who's working on A.I., do you? And how do you decide which A.I.-powered products to give your data to?

Deb Raji

Ariel, you need to start reading the terms of service! I do read the terms of service like a nerd, but it's because I'm more attentive to where I put my data than I was before I got involved in this space. You're not alone in not reading these agreements; they're purposefully made impossible to read. I think it is important for all of us to start thinking about where our data goes. That being said, even though I do take a glance at the terms of service, I probably click yes just as often as you do. So I probably end up in the same boat as you. I'm just more aware of my own doom.

Ariel

So A.I. gets a bad rap, especially when people make comparisons to “Black Mirror” and shows like that. How do you make sure you stay on the good side of A.I.?

Deb

We're all the bad guys.

Ariel

What kinds of problems is A.I. good at solving? What kinds of problems is A.I. terrible at solving?

Deb

I love talking about this. This is everything that I talk about, to be honest. I am really passionate about the fact that we shouldn't be worried about A.I. or robots taking over, mostly because robots are very good at certain things and humans are very good at certain things. We experience rich emotions. We have an emotional intelligence that a machine does not have. We can read and detect each other's emotions very easily. Our perceptive abilities are unparalleled, and that's a very uniquely human skill.

Computers, though, have much more memory than we do. Think about rote memorization: if a computer has to take a biology test that's pure recall, it can store all that information and combine it logically in a way that humans might struggle with. So computers are very good at complex calculations. They're very good at rote memorization and standardized tasks that don't change or don't have a lot of variance, but might be difficult for a human to consistently do well.

Think about music and art. We invent things. And it's beautiful. It's awesome. And I think we have an appreciation for art and literature and poetry. And I think that's a very uniquely human thing.

Section 5

In Dialogue

with Sam Lavigne

As a self-described “conceptual Internet artist,” Sam Lavigne uses technology to create projects that explore topics such as criminal justice, surveillance, commodification and automation to educate people on the dangers of putting public infrastructure into the hands of private tech companies. YR Media reporter Sydney Livingston sat down with him to discuss.

Sydney Livingston

So what do humans do that machine learning and technology could never accomplish?

Sam Lavigne

Well, I think one of the things that we're really seeing with a lot of this machine learning stuff is that it's only ever able to more or less reproduce the kind of data that you've given it. It can't really do new things at the moment; it can only sort of regurgitate and duplicate what you've told it to do. I don't think we'll ever see a truly beautiful work of art made by a machine. I don't think we'll ever see such a thing as even a neutral or a just machine. So I think that's what we have to remember. These are tools and we should use them as tools; that's what they're good at. We should also recognize that there are certain spheres of social, political and economic tasks that we should never assign to machines. I would never want bail recommendations, prison sentences or suspicions about where to send the police to be determined by a machine. There are just certain things that we should never do. Not because the computer is going to take over; it's because the computer is going to reproduce the biases that you give it. And then it's going to make what is effectively a bad decision.

Sydney

So in most sci-fi movies, A.I. is seen as a horrifying thing that will take over the world. How accurate are these movies?

Sam

I'd like to reframe that question, you know, or reframe that whole scenario, because I think the apocalypse, in some way, has already happened. We're already at the service of automation, and I think the question that we need to be asking is not "Is automation good or bad?" but "Who is the person or the group of people that are benefitting from automation?" And the most likely answer is that people who are already at the bottom are going to lose out more. People who are already at the top are going to gain more.

Sydney

What impact has A.I. made on everyday human interactions?

Sam

The way that we interact with other people is being mediated by automated systems. And this is not a metaphor. Imagine you follow a thousand people on Instagram. Why is it that you see certain things first, and certain things from certain people a lot, and maybe less from other people? It's because there's an automated system that's trying to figure out what you should look at. It means that the way we connect with other people might actually have to go through an automated system, and that our social relationships might be determined in some sense by these systems. If you're on Tinder or something, why do you see certain people more than others? These are very consequential things in our lives. And they're in what we might call a "black box." There's no way for you to know, as a user of those systems, why you see certain things, why you get matched with certain people, why certain messages come before others. I guess the issue here, among many other issues, is there's no accountability. When I'm using a hammer, I want the hammer to do the thing I expect it to do. I don't want it to stop me mid-hammer and be like, "You should try hammering something else." When you're using a computer, you want it to be reliable. You want to be able to expect and understand how it responds to you. To me, that's what a good tool is. If anything, I think we should make them as unhuman as possible.
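
To make the "black box" concrete, here's a toy Python sketch of the kind of ranking system Sam describes: it scores every post by predicted engagement and recency, then sorts your feed. The weights and fields are invented; a real platform uses far more signals, and users never get to see any of them, which is exactly the accountability problem.

# Invented posts with invented signals.
posts = [
    {"author": "close friend", "predicted_clicks": 0.20, "recency": 0.9},
    {"author": "acquaintance", "predicted_clicks": 0.70, "recency": 0.4},
    {"author": "family", "predicted_clicks": 0.10, "recency": 1.0},
]

def score(post):
    # The platform, not the user, picks these weights.
    return 0.8 * post["predicted_clicks"] + 0.2 * post["recency"]

# Whoever the system predicts you'll engage with most comes first.
for post in sorted(posts, key=score, reverse=True):
    print(post["author"], round(score(post), 2))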

About this Project

A.I. is a shaping force of the 21st century, with the potential to help us conserve energy, treat disease, foster self-awareness and inspire new forms of creativity. It can also exacerbate some of humanity’s worst traits: bias, greed, violence. We are the next generation of A.I. creators and users. We need to understand the technology, embrace its potential, insist on its ethical application, expose its harms and tell its stories.

With support from the National Science Foundation and in partnership with stellar university and industry collaborators including the App Inventor team at M.I.T., we’re spending three years digging into all things A.I., and for each one of those years, we’ll produce a series of dialogues between young people in our newsroom and big players in A.I. from a range of fields. “In the Black Mirror: What Artificial Intelligence Means for Race, Art and the Apocalypse” is the first set of dialogues. If you’re looking to get students in your life thinking critically about and playing with A.I., check out our primer for lots of great activities and learning tools.

We even used A.I. to produce the image at the top, with a machine learning platform for creatives called RunwayML. We fed photos of our four interviewees into pre-existing neural network models accessible within the application. By manipulating visual features like hue, saturation, brightness and level of blur, we created 100+ abstract images that we compiled into a stacking GIF inspired by Adam Ferriss' piece for the New York Times.
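
For readers who want to try something similar, here's a rough Python sketch of the same kind of manipulation using the Pillow imaging library rather than RunwayML. The file names are placeholders, and it varies saturation, brightness and blur (a true hue shift needs an extra colorspace conversion we skip here).

import random
from PIL import Image, ImageEnhance, ImageFilter

def random_variant(image):
    # Randomly vary saturation, brightness and blur to get an abstract frame.
    out = ImageEnhance.Color(image).enhance(random.uniform(0.3, 2.0))
    out = ImageEnhance.Brightness(out).enhance(random.uniform(0.6, 1.4))
    return out.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 4)))

source = Image.open("portrait.jpg")  # placeholder input photo
frames = [random_variant(source) for _ in range(100)]
# Stack the variants into a looping GIF.
frames[0].save("stacked.gif", save_all=True, append_images=frames[1:],
               duration=80, loop=0)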

Credits
Reporters: Sydney Livingston, Ariel Tang, Zoe Harwood
Editors/Producers: Nimah Gobir, Marjerrie Masicat, Lissa Soep
Designer: Marjerrie Masicat
Developers: Radamés Ajna, Devin Glover
Audio Editors: Galnadjee Joe-Johnson, Jacob Armenta
Researcher: Rainier Harris
Project Lead: Nimah Gobir
