As we approach the U.S. election, a new and dangerous wave of AI-generated misinformation is sweeping across the digital landscape, raising the stakes higher than ever before.
In an age where information shapes public opinion, a question that reigns supreme is: Can we trust the information shaping our reality?
What is truly insidious about fake news and deepfakes is their exploitation of a key vulnerability in human psychology: people’s emotions. Studies show that when people are emotionally charged — whether positively or negatively — they are more likely to share content without critically evaluating it.
Imagine a viral video of a public figure giving a divisive speech that later turns out to be fake. By the time the truth comes to light, the damage is done — the emotional response has already entrenched divisions, misled the public, or sparked support for a false cause.
The rapid pace of social media consumption exacerbates the problem: the interactive nature of the platforms themselves means deepfakes are viewed and shared by users in near real time.
As deepfakes become more sophisticated, they will inevitably grow harder to detect and control, and falsehoods will continue to spread faster than corrections can be made.
So, what can we do to protect ourselves from the growing threat of deepfakes?
One promising solution lies in emotionally intelligent algorithms — AI systems designed to detect and down-rank manipulative content. These systems would learn to flag content aimed at misleading or emotionally manipulating users before it goes viral. While platforms like Facebook and X are making strides in this direction, the technology still lags behind the rapidly evolving world of deepfakes. What is needed are AI systems that can work in real-time, learning from user engagement patterns and detecting deepfakes as soon as they appear.
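To make the idea concrete, here is a minimal sketch of emotion-aware down-ranking. It uses a tiny hand-made lexicon of emotionally charged words as a stand-in for the trained models a real platform would use; the lexicon, scores, and threshold are all illustrative assumptions, not any platform's actual system.

```python
# Toy emotion-aware ranking: posts whose emotional intensity exceeds a
# threshold are down-ranked so they surface less prominently.
# The lexicon and weights below are invented for illustration only.
EMOTION_LEXICON = {"outrage": 0.9, "shocking": 0.8, "disgrace": 0.85,
                   "traitor": 0.9, "amazing": 0.4, "update": 0.05}

def emotional_intensity(text: str) -> float:
    """Score a post by its most emotionally charged word (0.0 to 1.0)."""
    words = text.lower().split()
    scores = [EMOTION_LEXICON.get(w.strip(".,!?"), 0.0) for w in words]
    return max(scores) if scores else 0.0

def rank_feed(posts: list[str], penalty: float = 0.5) -> list[str]:
    """Order posts for a feed, penalizing highly charged content."""
    def score(post: str) -> float:
        intensity = emotional_intensity(post)
        # Apply the penalty only above a manipulation threshold,
        # so ordinary emotional expression is left alone.
        return 1.0 - penalty * intensity if intensity > 0.6 else 1.0
    return sorted(posts, key=score, reverse=True)
```

Even this toy version shows the core trade-off: the threshold decides where legitimate passion ends and manipulation begins, which is exactly the judgment real systems must learn from engagement patterns rather than a fixed word list.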
Another approach is blockchain technology, which could provide a way to verify the authenticity of videos and images by creating an immutable record of their origins. Platforms could use this technology to ensure that users can trace content back to its source. While still under development, blockchain verification could play a role in distinguishing authentic media from AI-generated fakes.
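The mechanism can be sketched in a few lines: each piece of media is fingerprinted with a cryptographic hash and registered in an append-only chain of records, so anyone can later check whether a file matches a registered original. This is a simplified illustration, not any production provenance standard; real systems are far more elaborate.

```python
import hashlib
import json

def fingerprint(media_bytes: bytes) -> str:
    """Cryptographic fingerprint of a media file's exact bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each block links to the previous one
    by hash, so past records cannot be silently altered."""

    def __init__(self):
        self.blocks = []

    def register(self, media_bytes: bytes, source: str) -> str:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        record = {"content_hash": fingerprint(media_bytes),
                  "source": source, "prev": prev_hash}
        record["block_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)
        return record["block_hash"]

    def verify(self, media_bytes: bytes):
        """Return the registered source if this exact file is on the
        ledger, or None if it is unregistered (or has been altered)."""
        h = fingerprint(media_bytes)
        for block in self.blocks:
            if block["content_hash"] == h:
                return block["source"]
        return None
```

The key property is that even a one-bit edit to a video changes its fingerprint entirely, so a tampered or AI-generated copy can never pass as the registered original.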
Deepfakes pose a real threat to democratic processes. Emotionally intelligent algorithms and blockchain technology offer hope, but the solution ultimately lies in a combination of technology, education and regulation.
We must all remain vigilant about the content we consume and act now to safeguard our democratic systems.
Rana Ali Adeeb is a doctoral candidate and a 2024-2025 public scholar at Concordia University.