Artificial Intelligence (AI) has increasingly blurred the line between truth and fiction.
For example, deepfake technology uses machine learning to create video, audio, or images that convincingly depict a real person, as the U.S. Government Accountability Office explains. One example is a fake Kamala Harris presidential campaign video that Elon Musk shared with his millions of followers on X in July 2024. The footage depicted Harris calling herself a “DEI hire” and making disparaging remarks about President Joe Biden.
“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” she said in the altered clip.
The video amassed 131 million views, according to NBC News. Notably, it violated the policies of X, the platform Musk acquired in 2022, which state: “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
Deepfake technologies have also been exploited to deceive individuals into believing their loved ones or trusted parties have been harmed; scammers fabricate realistic phone calls to issue threats and extract information or money.
More recently, AI-generated images were used to spread misinformation about the California wildfires, which burned at least 40,000 acres of land in the Palisades and Eaton fires and left many without homes, with more than 12,300 structures destroyed. Images circulated on social media showing the Hollywood sign engulfed in flames and firefighters using women’s handbags to extinguish the fires. The images were confirmed to have been altered using AI, as AFROTECH™ previously reported.
These incidents underscore growing concerns around the use of deepfake technology, which became a point of discussion at CES 2025 in Las Vegas, NV, during a panel titled “Fighting Deepfakes, Disinformation, and Misinformation” on Thursday, Jan. 9. Experts delved into the implications of deepfake and disinformation technology.
“Even a year ago, it wasn’t necessarily possible to make deeply photorealistic, video-realistic, and audio-realistic fakes,” said Andy Parsons, senior director of content authenticity at Adobe. “Now you can do that for free. Even a couple of years ago, in my field, we talked about the democratization of deepfake tools and AI. And I think we’re beyond democratization. These things are free. They’re open-source models. They’re trivial to use. Right here at CES, you see commodity devices that are tiny, inexpensive, and low power that are capable of running massive models with billions of parameters. That’s another reason we should have increasing concern.”
Cautionary Measures
What steps can be taken to mitigate the negative implications of deepfake technology misuse? According to German Lancioni, chief AI scientist at McAfee, the general consensus among panelists centers on provenance-based models: systems that establish trust by tracking the history of a piece of media and the modifications made to it.
“That benefits good actors who are trying to define and declare, ‘Hey, I’m using GenAI for good purposes.’ And that automatically removes a big chunk of the universe. Then again, bad actors and cybercriminals are not going to play by the same game rules, and they will find ways of removing or stripping out that embedded information,” Lancioni explained.
“Therefore, you no longer can tell if that was created by AI or if it’s real. That’s where detection comes into the picture by providing the means to detect those artifacts that are kind of hidden and impossible to tell from the human eye. And, at the end, the benefit is for the consumer who will be able to tell, ‘Okay, I’m dealing with GenAI content because I have provenance information or because at least I have detection technology as a fallback mechanism.'”
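Lancioni’s two-layer approach, provenance first with detection as a fallback, can be illustrated with a minimal sketch. The code below is purely illustrative and is not any panelist’s implementation: `read_provenance_manifest` and `deepfake_score` are hypothetical stubs, and a real system would rely on a standard such as C2PA Content Credentials for provenance and a trained classifier for detection.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Verdict(Enum):
    DECLARED_AI = "provenance manifest declares GenAI use"
    VERIFIED_ORIGIN = "provenance intact; origin verified"
    LIKELY_AI = "no provenance; detector flags AI artifacts"
    UNKNOWN = "no provenance; detector inconclusive"


@dataclass
class Provenance:
    issuer: str            # who signed the manifest (e.g., a camera or editing app)
    ai_generated: bool     # creator declared generative-AI involvement
    signature_valid: bool  # result of verifying the embedded signature


def read_provenance_manifest(media: bytes) -> Optional[Provenance]:
    """Hypothetical stub: parse a signed provenance manifest embedded in the file.

    A real implementation would follow a standard such as C2PA Content
    Credentials. Returns None when the manifest is missing or stripped out.
    """
    return None  # stub: pretend the manifest was stripped by a bad actor


def deepfake_score(media: bytes) -> float:
    """Hypothetical stub for a trained detector; higher = more likely synthetic."""
    return 0.0  # stub: a real detector would analyze the media itself


def assess_media(media: bytes, threshold: float = 0.8) -> Verdict:
    # Step 1: provenance first. Good actors declare GenAI use up front,
    # which, as Lancioni puts it, "removes a big chunk of the universe."
    manifest = read_provenance_manifest(media)
    if manifest is not None and manifest.signature_valid:
        return Verdict.DECLARED_AI if manifest.ai_generated else Verdict.VERIFIED_ORIGIN

    # Step 2: detection as a fallback. Bad actors strip embedded provenance,
    # so look for generation artifacts that are invisible to the human eye.
    if deepfake_score(media) >= threshold:
        return Verdict.LIKELY_AI
    return Verdict.UNKNOWN


print(assess_media(b"...image bytes..."))  # -> Verdict.UNKNOWN with these stubs
```

With these stubs the check always falls through to the detector, mirroring the consumer-facing outcome Lancioni describes: provenance information when it survives, and detection technology as the fallback mechanism when it does not.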