Sora 2 launch sparks deepfake fears: should you set a secret family password to stay safe?


Viral videos made with OpenAI’s new text-to-video tool have pushed people online to rethink how they verify identity. A TikTok creator went viral after urging friends and family to adopt private passwords to blunt emerging deepfake scams.

OpenAI Sora and the rise of convincing deepfake clips

OpenAI’s Sora 2 is a text-to-video generator currently available by invite only. Early users have already produced striking clips that spread quickly on social platforms.

Some creations are clearly tongue-in-cheek, like historical figures performing modern gestures. Others mimic influencers with unsettling realism. As access widens and output quality improves, the telltale artifacts viewers rely on to spot fakes may become harder to find.

Security experts warn that realistic video could be used to trick relatives, business partners, or employees into sharing money or sensitive data.

Why a TikToker says a private password might help

One TikTok post, viewed hundreds of thousands of times, proposed a simple defense: share a secret word or phrase only close contacts know.

The idea is low-tech but practical. If someone receives a suspicious video or message from a loved one, they can ask for the prearranged password before taking action.

That extra step could make it harder for scammers to exploit realistic but fake videos.

How people responded online

  • Many commenters voiced alarm about how quickly AI tools can produce believable content.
  • IT and security professionals urged proactive measures to protect vulnerable targets.
  • Some users reported already using similar verification systems in business settings.
  • Others recommended turning a password into a short story to make it memorable and harder to guess.
  • A number of viewers worried the technology might already be misused in ways we don’t know about yet.

Practical steps to reduce the risk of deepfake scams

Beyond a shared password, there are several simple habits families and organizations can adopt.

  1. Create a verification routine. Agree on a secret phrase or brief story and use it for any unexpected money requests.
  2. Use out-of-band checks. Call on a known number or use a video call to confirm identity before acting.
  3. Limit sensitive requests to secure channels. Avoid sending account details through casual messaging apps.
  4. Educate relatives. Explain how AI can mimic voices and faces, and practice verification drills.
  5. Ask for personal proof. If in doubt, request a gesture or a detail only the real person would know.

What companies and leaders are doing

Some executives already require internal verification steps for financial transfers. Security teams recommend strict policies wherever money moves.

Tools that detect manipulated media are improving, but detection is a cat-and-mouse game. In the meantime, human-centered safeguards remain crucial.

Balancing innovation and risk as AI video spreads

Text-to-video advances open creative doors while creating new trust challenges. Platforms, creators, and users all share responsibility for preventing misuse.

Simple social protocols like private passwords, paired with technical safeguards, can buy time as detection improves.





chronik.fr is an independent media outlet.
