Curbing Conspiracies With Artificial Intelligence

Movers & Shakers: Q&A With Ezinne Nwankwo

06.01.22

The most surprising conspiracy I heard about during quarantine – the false belief that rubbing cow poop on yourself will ward off COVID-19 – got me wondering about what technology is available to better understand misinformation, so I interviewed Ezinne Nwankwo. 

Nwankwo is a Ph.D. candidate in computer science at the University of California, Berkeley. She is currently working on a project that uses AI to address the spread of online misinformation about COVID-19. 

We talked to Nwankwo about her work, educational journey, and how she learned more about AI.  

This interview has been edited for clarity and length.

Cyrus Candia: Which life experience helped you better understand artificial intelligence? 

Ezinne Nwankwo: The most helpful experience was a class I took my senior year of college, taught by Dr. Latanya Sweeney.

She was the first Black woman to graduate from MIT with a Ph.D. in computer science. She's best known for uncovering harms caused by automated decision-making tools, as well as her work on privacy and data. 

She did this study where she looked up her own name on Google's search engine and then looked up the name of one of her colleagues, who was white. Her results included ads suggesting she might have an arrest record, things like mug shots, while no such ads came up for her colleague. She was able to uncover these hidden biases within search engine tools.

CC: I imagine that’s not all you learned at UC Berkeley…

EN: The class “Politics of Personal Data” wasn't directly teaching us about A.I. specifically, but more broadly about the harms that surround the use of data and automated tools. That class exposed me to a lot of things that some of my other classes didn't teach me. 

They teach you about the math underneath some of these tools. They don't teach you how these tools behave when humans interact with them and when they're deployed in the real world. Do they behave justly? Equitably? And how do we uncover those things? The class set me on a path to try and learn more, to pursue a Ph.D. in something computational. And hopefully, when I get my Ph.D., I can teach a similar class!

CC: Tell us about the project you are working on now related to misinformation and COVID-19 in Africa. 

EN: We were specifically looking at Nigeria and trying to understand how people were talking about misinformation and COVID-19, whether there was a lot of misinformation going around or not, and what the dynamics were like. Were people mainly talking about how the government was responding to the pandemic? Or was it the federal government versus the local government, within the towns and villages? 

We wanted to understand what people were talking about, how people were feeling about the pandemic, and what they were dealing with on social media. People tweet a lot and they post a lot of their feelings and thoughts on these platforms. We focused on understanding and decoding what people were saying.

CC: Sounds like a lot of work! How many tweets did you have to go through? 

EN: I can't remember the number now, but it was way over 10,000 tweets! 

I had a group of two other friends and colleagues. We wanted to work on a project together, and it happened that we are all Nigerians. We felt connected to this project, and we were like, “Let’s go for it!” 

CC: How did you and your team structure your COVID-19 misinformation project? 

EN: We took a subset of the tweets, read through all of them, and tried to label each one with the topic it was talking about. We also used more computational tools. There is a technique called “topic modeling,” an algorithm that's able to cluster tweets into specific topics.  

You can run topic modeling with, say, 50 topics. And if it looks like some of them are really the same topic, you can merge them or lower the number of topics until you get the right amount.  

CC: How should AI and machine learning be used in our society today?

EN: I do believe that there are roles that machine learning and AI can play for the betterment of society, rather than some of the harms that we know of today. I think AI could be used to help understand and diagnose social problems. With the advances that we have in language modeling, computer vision, and machine learning, we can use these tools to characterize some of today's most pressing social issues related to the spread of misinformation on online platforms, racial bias in automated speech recognition and facial recognition, as well as discrimination in policing, healthcare, etc. 

AI alone will not solve any of these social issues but shedding light on them can help point us towards better solutions and potential paths forward.

Support the Next Generation of Content Creators
Invest in the diverse voices that will shape and lead the future of journalism and art.
donate now