Erase Your Face

by YR Media's Interactive Team with Stanford d.school

October 20, 2020

In the grand scheme of frustrations related to the pandemic, this one's small: you hold your phone up so Face ID can unlock it. You sit there for a few seconds before realizing, oh right, you’ve got a mask on! Your phone doesn't recognize you.

The reason? For facial recognition to work, it has to be able to detect what makes your face unique. The technology turns that information into numbers. Yep, hate to break it to you, but each of our lovely faces can be represented through something like a massive spreadsheet filled with digits. Meanwhile, long before you’re sitting there trying in vain to get past your phone’s pesky password screen, tech companies have already converted millions and millions of other faces into number-sets to create a humongous database for comparison. If your mask blocks enough of your distinctive facial information, even the best algorithms in the world can’t figure out who you are.

How Facial Recognition Works

1. A camera detects a face in an image or video.
2. A photo is captured and analyzed using distinguishable landmarks, like the outer corners of your eyes or the corners of your lips.
3. Using these landmarks, the system aligns the photo to generate a “forward-facing” image of the person.
4. This “forward-facing” image is converted into a numerical representation of the face.
5. That numerical representation is compared against a database of numerical representations extracted from many other faces until the system finds the closest thing to a match.
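If you’re curious what those steps look like as actual code, here’s a minimal sketch using the open-source face_recognition Python library. It’s not the software inside your phone or on Amazon’s servers, just an illustration of the same pipeline, and the image file names are placeholders.

```python
# A minimal sketch of the pipeline above, using the open-source
# face_recognition library (pip install face_recognition).
# Commercial systems differ in detail, but the steps have the same shape.
import face_recognition

# Steps 1-2: detect a face and its landmarks in a photo.
image = face_recognition.load_image_file("someone.jpg")   # placeholder file
locations = face_recognition.face_locations(image)        # face bounding boxes
landmarks = face_recognition.face_landmarks(image)        # eyes, nose, lips...

# Steps 3-4: the library aligns the face and converts it into a
# numerical representation: a vector of 128 numbers per face.
encodings = face_recognition.face_encodings(image)

# Step 5: compare that vector against vectors from known faces.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encodings = face_recognition.face_encodings(known_image)

if encodings and known_encodings:
    print(len(encodings[0]))  # 128: your face, reduced to a list of numbers
    # Smaller distance = more similar face; compare_faces applies a cutoff.
    distances = face_recognition.face_distance(known_encodings, encodings[0])
    matches = face_recognition.compare_faces(known_encodings, encodings[0])
    print(distances, matches)
else:
    print("No face detected, so no numbers to extract and no match to make.")
```

A mask (or the drawing tool below) interferes with exactly those landmarks and numbers: cover enough of them and the distance to every face in the database grows too large to count as a match.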

WISH I WERE HERE

Here's the thing, though. Some of us are only now realizing the role that face-tracking apps play in our everyday lives because our features are suddenly hidden behind masks all the time. But the state of being "unseen" is hardly new for communities that algorithms consistently fail to recognize. The seeming miracle of unlocking digital and sometimes physical doors depends on algorithms with documented bias and inaccuracies along lines of race and gender.

Facial recognition takes many forms, ranging from the seemingly harmless to the dystopian. Some say the technology just automates processes that are already in place, but there’s mounting evidence that recognition systems can heighten existing biases and perpetuate systemic racism.

Here at YR Media, we’ve been looking into creative ways to dodge and trick facial detection and surveillance for some time now. Using projectors, dramatic make-up and all sorts of other hacks, activists have paved the way with anti-recognition techniques that also make a broader statement about technology in our data-driven world. And if you’re looking to just post your selfies in peace, University of Chicago computer engineers are developing a tool to imperceptibly “cloak” online images from facial recognition systems.

Try It Yourself

We wanted to give you a chance to try defying recognition yourself. Using your cursor or finger, select a color and draw over the provided image. We then run your drawing through a service called Amazon Rekognition, testing it against a sea of other A.I.-generated faces to see if the software can still make a match despite your attempt to go incognito. Find out how much you need to cover up to hide from those nosy algorithms:

[Interactive: “Draw Over Me.” Adjust the opacity of your drawing, then check the results panel to see which faces Rekognition still matches.]

HOW IT WORKS

In most cases, Amazon recommends setting its software to only ID photos that yield at least an 80% match, so that’s where we set our cut-off too. If you see multiple matching faces in your results, check out the percentage listed. That’s a “similarity score,” and it shows how confident the software is that each face is a match. “For many law enforcement use cases, we recommend using a high threshold value of 99% or above to reduce accidental misidentification,” states Amazon Rekognition’s developer guide. And as of June 2020, the company announced a one-year moratorium on police use of its facial recognition technology (more on that below). While these recommendations seem to acknowledge that the technology is fallible, guidelines aren’t consistently enforced. Also, keep in mind what happens when facial recognition is deployed at massive scale, say across an entire country’s population: even a tiny error rate adds up fast. A system that misidentifies just 1% of faces, applied to 10 million people, gets roughly 100,000 people wrong.
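If you’d like to see where a number like that 80% lives in code, here’s a rough sketch using boto3, Amazon’s Python SDK, which is one common way to call Rekognition. The collection name and file name are placeholders, and this illustrates the shape of the API, not our production code.

```python
import boto3

# Rough sketch: search a pre-built collection of face vectors for
# matches to a new image, keeping only results Rekognition scores at
# 80% similarity or higher (the cut-off this page uses).
client = boto3.client("rekognition")

with open("drawn_over_face.jpg", "rb") as f:   # placeholder image
    image_bytes = f.read()

response = client.search_faces_by_image(
    CollectionId="demo-faces",        # hypothetical face collection
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=80,            # our cut-off; Amazon suggests 99
                                      # or above for law enforcement
    MaxFaces=4,
)

for match in response["FaceMatches"]:
    # "Similarity" is the percentage shown next to each matched face.
    print(match["Face"]["FaceId"], round(match["Similarity"], 1), "%")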

You may have noticed we’ve given you pre-uploaded photos to see how much info facial recognition technologies need to work well. That’s because if our goal is to get you thinking critically about facial recognition, the last thing we want is to force you to upload your likeness to get the point!

YOU ARE BEING WATCHED

The reality is that these technologies aren’t neutral, and neither are the people or institutions that make and use them. Cities that allow facial recognition have used surveillance systems to monitor and track suspects against databases of mugshots and driver’s license photos. In fact, an investigation by the Center on Privacy & Technology at Georgetown Law showed that half of American adults are in facial recognition databases that can be accessed by law enforcement agencies. And with multiple studies showing that facial recognition is more likely to misidentify darker-skinned people, and only a handful of cities banning public surveillance programs, we run the risk of making serious mistakes that disproportionately harm Black communities.

Rollbacks from tech giants further emphasize the need for accountability and responsibility when using facial recognition technologies. In the wake of Black Lives Matter protests following George Floyd’s murder, even before Amazon announced its moratorium, IBM declared that it would “sunset” facial recognition and analysis products. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values,” said IBM CEO Arvind Krishna in an open letter to Congress.

We asked Dr. Simone Browne, a professor at the University of Texas at Austin and author of Dark Matters: On the Surveillance of Blackness, a few of our lingering questions about the state of facial recognition today.

Photo: Dr. Simone Browne

What does your work primarily focus on?

I am a researcher, an educator and writer. My research is mainly on surveillance. I approach looking at surveillance, which is quite interdisciplinary, from a Black studies perspective. I try to draw some connections between the past and how it can allow us to ask some important questions about our present. And those important questions could be: What are ways that people resist these technologies? In what ways were these technologies ways of identifying people, but also attempting to make people into objects that could be controlled in a particular way? Do we want to be included in these technologies that are often used in very repressive ways?

A lot of research and development is still under a frame of scientific racism. Those long histories are not behind us. They still inform the social conditions and logics of how these technologies are framed.

There are all of these ways that it might be designed to have a certain type of body in mind, and that body is often a light body. It's often a male body. It makes certain bodies legible and other parts and pieces of bodies illegible. Gender or race are taken as key ways of identifying people, upholding the idea that gender is binary, which is not the case. Or that race can be quantified. It doesn't recognize many Black people. It recognizes white men at a higher degree.

What makes facial recognition better than the other security technologies we use?

I guess the question is what do we need to be secure from? Who is the “we”? Who is being policed in particular ways? Who is being jailed or incarcerated? And will facial recognition change that when it's tied to policing? Or will it allow police then to say, “The algorithm was the one that did this, not me”? There are all of these ways of knowing who somebody is, regardless of their face, whether or not it's masked up. What happens when we identify the bag that you have, the shoes that you have, your tattoos? All of these things are part and parcel of policing. Think of all the data that you're putting about yourself from your LinkedIn profile to your Etsy purchases to your Yelp comments to your Facebook. All of that is a parcel of who you are digitally in the world. There are still ways in which we are captured by the things we do that might be beyond facial recognition technology, but other types of biometrics that are tied in with the digital data footprint we leave in the online world.

Who gains from using facial recognition technologies?

Through popular culture, people develop a certain type of consent to these technologies because they seem to make our lives a bit easier. And for some people, there’s the idea that they're always under threat, always in need of protection and that surveillance technology will save them. That's an easy tradeoff to say, “Sure, I'm going to have this Ring doorbell because I want to be nosy, but also I'm okay with the police having access to it at all times or Amazon selling that data.”

We’re starting to see companies like IBM and Amazon put limits around how facial recognition technologies are being used. What do you make of this?

I think those are important steps. We have to understand the past of this company to be a bit skeptical of their pauses. And for me, it's like, “OK, they're pausing now. What are they working on?” Amazon and IBM are not going to stop being capitalists. They are not just going to stop making money. I think this pause points to a lot of work that's being done by activists, by artists, by researchers, by educators, by people that have been really pushing back at the municipal level to abolish these technologies. So, while there is this pause, the pressure has to stay on.

Credits

"Erase Your Face" was produced by YR Media's Interactive team in collaboration with the Stanford d.school's K12 Lab.

  • Creators:

    Xion Abiodun, Valeria Araujo, Victoria Balla, Zoe Harwood, Dante Ruberto, Bayani Salgado, Ariel Tang

  • Reporters:

    Noah Villarreal, Ifalola Amin-McCoy

  • Producer:

    Nimah Gobir

  • Designer:

    Marjerrie Masicat

  • Developer:

    Radamés Ajna

  • Fellow:

    Devin Glover

  • Editors:

    Lissa Soep, Renato Russo, Ariam Mogos

  • Special Thanks:

    Kyle McDonald, Lo Bénichou

YR Media has joined forces with the App Inventor team at M.I.T. to make stories, apps and learning resources about A.I. through an equity lens. Stay tuned for more. We are grateful for support in this work from the National Science Foundation. The opinions, findings, and conclusions or recommendations expressed are those of the makers of Erase Your Face and do not necessarily reflect the views of the NSF.