4: The rise of deepfakes

Deepfakes are AI-generated synthetic media: hyper-realistic videos and audio clips that can mimic real people so convincingly that it is becoming increasingly hard to separate fact from fiction.

The technology has already been exploited in ways that go well beyond nonconsensual fake imagery. In finance, deepfakes have been used to impersonate reputable industry leaders and promote fraudulent investment schemes on social media platforms such as Facebook and Instagram. These incidents have led to calls for stronger regulation and for closer collaboration between financial institutions and technology companies to stop AI-enabled fraud.

Fraudulent investment schemes, however, are only one facet of the problem; deepfake technology also poses a broader threat to public trust. Its mere existence creates a “liar’s dividend”: because any recording could plausibly be synthetic, wrongdoers can dismiss entirely legitimate content as fake and escape accountability scot-free. This erosion of trust now extends to media organizations themselves, and even audiences who have never seen a deepfake of a particular person grow more suspicious of everything they encounter online. Studies have shown a dramatic decline in trust in audio-visual media following exposure to deepfakes.

In response to these challenges, legislation is under consideration. The bipartisan NO FAKES Act is proposed legislation that would protect individuals from unauthorized AI-generated deepfakes and voice clones, hold abusers accountable, and require online platforms to remove the offending content upon notice.

Deepfakes also threaten individual reputations and the health of political systems and democratic processes. During election periods, deepfake videos can be weaponized as misinformation: a candidate may be forced to spend time rebutting a fabrication instead of addressing real issues, while voters are confused and faith in opponents or in government institutions is undermined. Fabricated videos of politicians saying or doing things they never did can spread very quickly through social media, leaving the public confused and divided (brookings.edu). This risk has made deepfake regulation and cybersecurity a priority for governments and international organizations.

Deepfakes also have implications for journalism. Maintaining credibility now requires news organizations to invest in verification tools and to train journalists to detect synthetic content before it undermines their reporting. Speed matters: the longer it takes a newsroom to identify and debunk a deepfake after it begins circulating, the harder the resulting misinformation spiral becomes to contain. These concerns are prompting fact-checking groups to expand their digital-forensics work specifically around deepfakes and to consult with technology companies on ways to prioritize transparency and accuracy (poynter.org).
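
To make that verification workflow a little more concrete, here is a minimal sketch of how a newsroom triage tool might screen a suspect clip: it samples frames from a video and scores each with a detector. This is an illustrative assumption, not a description of any real product; the score_frame function is a placeholder for whatever forensic classifier a newsroom actually licenses, and the sampling rate and threshold are arbitrary example values.

```python
# Sketch: triage a suspect video by sampling frames and scoring each one.
# Assumes OpenCV (pip install opencv-python numpy). score_frame is a stub
# standing in for a real deepfake classifier, which this sketch does not provide.
import cv2
import numpy as np


def score_frame(frame: np.ndarray) -> float:
    """Placeholder detector: return a fake-likelihood score in [0, 1].

    A real tool would call a trained forensic model here; this stub only
    marks where that call would go.
    """
    return 0.0  # stub value


def triage_video(path: str, every_nth: int = 30, threshold: float = 0.7) -> bool:
    """Return True if the sampled frames look synthetic on average."""
    cap = cv2.VideoCapture(path)
    scores = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:  # roughly one frame per second at 30 fps
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    if not scores:
        raise ValueError(f"no frames decoded from {path}")
    return float(np.mean(scores)) > threshold


if __name__ == "__main__":
    flagged = triage_video("suspect_clip.mp4")  # example file name
    print("needs human review" if flagged else "no automated red flags")
```

Automated scoring like this can only prioritize clips for review; the final call still rests with human fact-checkers.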

There is some good news, however. Some experts contend that the deepfake crisis could spur innovation in verification technology and cultivate a media audience that is more skeptical and discerning. Concern about deepfakes and misinformation could raise demand for trusted news sources and better digital literacy, ultimately producing a more informed public that is harder to manipulate.
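
As one illustration of what such verification technology can look like, the sketch below signs a media file’s hash with a publisher’s private key so that anyone holding the matching public key can later check the file has not been altered. It is a deliberately simplified stand-in for content-credential standards such as C2PA, not an implementation of them; the key handling and file name are assumptions made for the example.

```python
# Sketch: sign and verify a media file's hash so consumers can check provenance.
# Uses the `cryptography` package (pip install cryptography); a simplified
# illustration of the idea behind content credentials, not a C2PA implementation.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def file_digest(path: str) -> bytes:
    """Hash the file in chunks so large videos need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


def sign_media(path: str, private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the content hash at publication time."""
    return private_key.sign(file_digest(path))


def verify_media(path: str, signature: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    """Consumer side: check the file still matches what the publisher signed."""
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()  # in practice, a protected publisher key
    sig = sign_media("broadcast.mp4", key)      # example file name
    ok = verify_media("broadcast.mp4", sig, key.public_key())
    print("matches publisher signature" if ok else "tampered or unknown origin")
```

A scheme like this proves only that a file is unchanged since a known party signed it; judging whether the original footage was truthful remains an editorial question.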