Rapid spread of deepfakes and fake content threatens online safety, prompting urgent calls for regulation
A person scrolls through social media on a smartphone, where AI-generated hate content is increasingly prevalent.
Artificial intelligence is fueling a surge in hateful videos and deepfakes on social media, and experts warn that this rapidly spreading content poses serious dangers.
From fabricated political speeches to inflammatory fake videos, AI-generated material is flooding platforms like TikTok and Instagram, often designed to provoke outrage and division.
The Rise of AI-Generated Hate
Deepfakes and Misinformation
Experts, including those from the Canadian Anti-Hate Network, report a sharp increase in AI-generated content that promotes hate, racism, and misinformation.
Tools like Lumen5 and Synthesia enable users to create realistic videos with minimal effort, often depicting public figures in compromising or inflammatory scenarios.
“The ease of creating deepfakes has democratized misinformation,” said Dr. Emily Chen, a digital ethics researcher at TechFuture Institute.
A recent example includes a viral video falsely showing a Canadian politician endorsing extremist views, which garnered millions of views before being flagged.
Scale and Speed
The accessibility of AI tools has accelerated the spread of hate content. Open-source platforms and low-cost software allow anyone to produce convincing videos in minutes, overwhelming content moderation systems.
The Canadian Anti-Hate Network notes that such content often bypasses automated filters due to its sophisticated nature, with one study estimating a 300% rise in AI-generated hate videos on major platforms since 2024.
“The volume is unprecedented,” said Sarah Patel, a network spokesperson.
Impact on Society
Eroding Trust
AI-generated hate videos erode public trust in media and institutions.
When viewers cannot distinguish real from fake, misinformation spreads unchecked, fueling polarization.
A 2025 MIT study (referenced in recent discussions) suggests over-reliance on AI tools may reduce critical thinking, exacerbating vulnerability to such content.
On platforms like X, users have expressed frustration over the difficulty in spotting deepfakes, with some calling for stronger verification tools.
Vulnerable Communities
Marginalized groups, including racial and religious minorities, are disproportionately targeted by AI-generated hate.
Videos promoting xenophobia or antisemitism, for instance, have surged, often tailored to exploit local tensions.
In Canada, the Anti-Hate Network documented a spike in anti-immigrant deepfakes, which experts link to rising online harassment and real-world hate incidents.
Regulatory Challenges
Outdated Frameworks
Current regulations struggle to keep pace with AI’s rapid evolution.
Canada’s Online Harms Act, still under debate in 2025, aims to address harmful content but lacks specific provisions for AI-generated material.
“We’re playing catch-up,” said Patel. Globally, the EU’s Digital Services Act imposes fines for unchecked hate speech, but enforcement against deepfakes remains inconsistent.
Experts urge governments to collaborate with tech firms to develop AI-specific laws.
Tech Industry Response
Social media platforms are under pressure to act. TikTok and Instagram have bolstered AI-driven moderation tools, but these often lag behind the latest generative AI models.
Google and Meta are investing in watermarking technologies to identify AI-generated content, yet adoption is slow.
On X, users debate the balance between free speech and moderation, with some advocating for blockchain-based verification to ensure authenticity.
The Path Forward
Education and Awareness
Experts emphasize the need for digital literacy to combat AI-generated hate.
Public campaigns, like those run by the Canadian Anti-Hate Network, aim to teach users how to spot deepfakes, such as checking for unnatural facial movements or inconsistent audio.
Schools are also integrating media literacy into curricula to equip younger generations with critical thinking skills, echoing concerns from the MIT study about AI’s cognitive impact.
Technological Solutions
Innovations like AI-powered detection tools and decentralized identity systems are emerging as potential countermeasures.
Startups are developing software to flag deepfakes in real-time, while blockchain solutions, similar to those explored by Bank of America for stablecoin verification, could authenticate video sources.
However, scaling these technologies remains a challenge, as does ensuring they don’t infringe on privacy.
Why It Matters
A Growing Threat
The proliferation of AI-generated hate content threatens democratic discourse and social cohesion, particularly in polarized climates.
As Canada approaches its next federal election, experts warn that deepfakes could manipulate voter perceptions, a concern echoed globally after similar incidents in the U.S. and Europe.
The issue also ties into broader 2025 tech trends, such as the Musk-Altman AI feud, which highlights the ethical dilemmas of unchecked AI development.
Global Implications
The fight against AI-generated hate is a global challenge, requiring cooperation between governments, tech companies, and civil society.
While Canada’s Anti-Hate Network advocates for stronger policies, international efforts like the EU’s AI Act could set a precedent.
Addressing this issue will shape the future of online safety, ensuring platforms remain spaces for connection rather than division.
This article is based on a report by Ahmar Khan, published by Global News on August 13, 2025. Additional context was drawn from posts on X discussing AI-generated content and its societal impacts.