TL;DR
A technology writer tested multiple AI video tools and argues they consistently produce content that is either directly harmful or erodes trust in visual media. He says these systems are being exploited to spread misinformation and that efforts to debunk viral clips are failing, especially among older audiences.
What happened
Ibrahim Diallo, writing from personal experience, evaluated early and updated versions of OpenAI's Sora along with tools such as Runway ML and Veo while trying to turn a short story into a film. Although later releases produced more realistic output, Diallo found that generated footage tended toward a recognizable, subtly wrong aesthetic that undermines narrative specificity and believability. He reports that AI-generated clips are already being deployed at scale by bad actors — spammers, scammers and political manipulators — to spread sensational falsehoods and impersonations. These videos commonly circulate through messaging apps and social platforms, where they reach and mislead older adults who then share them in group chats. Diallo describes trying to debunk many such items but says corrective efforts cannot match their pace and reach. He acknowledges that niche creative uses, such as VFX or carefully produced art, exist in principle, but argues that in practice the medium currently does more harm than good.
Why it matters
- AI video tools are producing content that can be easily weaponized for misinformation and impersonation.
- Misleading clips spread rapidly via social platforms and messaging apps, outpacing attempts to verify or debunk them.
- Vulnerable groups, particularly older adults, are frequently targeted and convinced by fabricated videos.
- Widespread acceptance of synthetic-looking footage risks eroding trust in all visual media.
- Platforms may be altering or processing authentic videos in ways that make real content appear synthetic, complicating verification.
Key facts
- The author tested OpenAI's Sora (including Sora 2), Runway ML, and Veo while trying to produce a short film from sketches and a script.
- Outputs tended to be technically polished but generic, lacking the intention and specificity needed for coherent storytelling.
- AI-generated videos have developed a distinct aesthetic 'fingerprint' that the author describes as a new uncanny valley.
- These tools are being used now to create fabricated clips featuring public figures and sensational claims (health misinformation, false political statements, fabricated events).
- Older adults are a primary audience for many of these fabricated videos, often seeing and resharing them via WhatsApp and group chats.
- Debunking and media literacy interventions described by the author (spotting watermarks, searching for verification) have limited success against rapid spread.
- The author identifies some legitimate applications (VFX, postproduction fixes, carefully crafted art), but believes harmful uses currently dominate.
- A small visual watermark (described in the article as a cloud icon with eyes) can be a telltale sign of Sora-generated content, according to the author.
What to watch next
- The volume of AI-generated misinformation circulating on messaging apps and social platforms and its impact on older demographics.
- Claims that platforms are automatically altering uploaded videos in ways that make authentic footage look synthetic.
Quick glossary
- AI-generated video: Video content produced or synthesized by machine-learning models rather than captured directly by cameras.
- Uncanny valley: A perceptual phenomenon where almost-realistic synthetic representations provoke unease because of subtle imperfections.
- VFX (visual effects): Digital techniques used in postproduction to alter, enhance, or create imagery that would be difficult or impossible to film practically.
- Watermark: A visible or invisible marker embedded in media to indicate origin, authorship, or synthetic generation.
Reader FAQ
Does the author believe every AI video is harmful?
The author states that every AI video he encounters is harmful, either directly through misinformation and impersonation or indirectly by eroding trust.
Are there useful, legitimate applications for AI video tools?
The article acknowledges potential uses such as VFX, video editing fixes and experimental art, but says harmful outputs currently dominate in practice.
Who is most affected by AI-generated video misinformation?
According to the piece, older adults are frequently targeted and often share fabricated clips in group chats and social networks.

All AI Videos Are Harmful, No Exception. By Ibrahim Diallo, published Dec 9, 2025 (~7 minute read).
Sources
- All AI Videos Are Harmful (2025)
- AI content supercharges confusion and spreads …
- Moderating Synthetic Content: the Challenge of Generative AI
- How people evaluate AI-generated video accuracy with warnings