With AI becoming an integral component of most tech platforms, the world is beginning to run the risk of increased exposure to fake images, videos and news headlines that are quickly changing the way we interact online.
The rise of AI has led to the creation of deepfakes, which are synthetic forms of media that have been digitally manipulated to replace one person’s likeness convincingly with that of another.
It is the tech-fueled equivalent of being in two places at once, at times without the consent of the person being portrayed. Public figures such as President Joe Biden and Taylor Swift have fallen victim to deepfakes featuring false images and inauthentic AI-generated voice content.
Beyond the household names of the world, deepfakes have begun to infiltrate school systems. Just last year, a New Jersey high schooler came under fire for allegedly creating AI-generated pornographic images of female students.
From the rich and famous down to the everyday Joes, no one is exempt from falling victim to deepfakes.
At SXSW, this topic was explored in depth by Ben Colman, co-founder and CEO of Reality Defender, a startup that can identify deepfakes across all mediums, and Laura Plunkett, Executive Director of Comcast NBCUniversal LIFT Labs, a startup accelerator program that helps founders and Comcast NBCUniversal test products, develop solutions and launch partnerships that aim to transform the customer and employee experience.
As a leader in deepfake detection, Colman highlighted the need for increased awareness of just how convincing deepfakes can be.
In this wave of AI technology, many consumers are so excited to test the tech that they forget the risks at hand.
The more we use AI, the more our words, voices and likenesses can be reused by other AI consumers as the underlying models learn more about us.
There are currently few regulations on AI. Last year, the European Union passed the first comprehensive AI law, designed to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly, and that they are overseen by people, rather than by automation, to prevent harmful outcomes.
While Colman applauds the EU for this measure, he also notes that laws are nothing without actionable consequences and methods of detecting lawbreaking.
This is where platforms like Reality Defender can come in to help.
The startup recently participated in the Comcast NBCUniversal LIFT Labs Generative AI Accelerator. The six-week program, according to a company statement, allowed Reality Defender to gain connections with executives across Comcast NBCUniversal and Sky. Additional benefits included access to programming on topics such as enterprise sales, the future of generative AI technology and media training.
Since then, Reality Defender has been piloting its technology with NBCUniversal to explore solutions for the public.
Keeping it gee: as AI begins to run rampant, it will be imperative for deepfake detection technology to advance and for regulations to be passed. Failure to do so runs the risk of increased misinformation and, possibly, much worse.
Miranda Perez (she/her/hers) is a Jersey City, NJ-based journalist who covers the tech industry. Follow her on X and Instagram: @mimithegee.
Edited by NaTyshca Pickett