Understanding AI’s Role in Identity Theft and Scams

Friday morning I sat down with futurist and Ubertrends Academy founder Michael Tchong to unpack the recent impersonation of Secretary of State Marco Rubio using AI-generated voice cloning. What he revealed was far more alarming than any single headline. The scam wasn’t a fluke—it was a signal. We’re now standing at the edge of a new global fraud era, and most people don’t even know it started.

A hacker used synthetic voice technology to impersonate Rubio in what Tchong calls a “proof of concept” for the next wave of hyper-realistic deception. He warns that this is not a distant threat. It’s already here. And it’s scaling faster than regulators, businesses, or the public can keep up. AI isn’t just changing fraud—it’s turning it into an unstoppable, invisible force. These scams don’t just steal money. They erode trust in government, news, family, and even our senses. When reality can be faked that easily, we lose the ability to tell what’s real—and who we can trust.

AI turns identity into a weapon

According to Tchong, voice cloning scams have already surpassed traditional fraud tactics in effectiveness. Public data, combined with AI training tools, lets bad actors mimic anyone in minutes. Your voice, your tone, your words—all can now be duplicated without consent. AI-driven fraud is no longer a threat—it’s an active crisis. The Secretary of State incident proved that. If government leaders can be targeted, anyone can.

Vulnerable communities are being hit hardest

Tchong emphasized that the elderly, women, and non-English speakers face the greatest danger. These populations may be less familiar with the latest technologies and may struggle with language barriers, making it harder to verify unusual calls or requests. Scammers know this. They exploit it using cloned voices in emergency scenarios to create panic. It’s no longer about broken-English phishing emails. These are sophisticated psychological attacks, customized in real time. The average senior victim now loses over $27,000 per incident. Those numbers are rising fast.

Political deepfakes are already in use

When I asked whether politics had been spared, Tchong didn’t hesitate. One case involved a robocall mimicking President Biden, urging voters not to show up on election day. The audio was convincing. The damage was real. These deepfakes are no longer science fiction. They’re tools now used to suppress votes, impersonate leaders, and destabilize democratic institutions. And they’ll only get more common during the next election cycle.

Companies are completely unprepared

Tchong shared that the corporate sector is reacting too slowly. Many companies have no policies in place for detecting or defending against AI-driven fraud. Most still train employees to spot outdated phishing scams. Meanwhile, AI systems can now generate personalized voicemails, texts, and video messages that are nearly impossible to verify. Even financial institutions, he said, are relying on outdated defenses.

The public isn’t ready either

We talked about awareness—and it’s low. According to a 2023 McAfee study, 69% of Americans couldn’t distinguish a cloned voice from a real one. One in four had already been targeted. Still, most people assume scams are obvious. They’re not. That’s why Tchong believes we need immediate digital literacy training. He recommends code words for family members, secondary confirmation on all urgent calls, and widespread public campaigns focused on detection.

Emotional engineering is the scammer’s secret weapon

Tchong explained that today’s AI scammers are no longer relying on technical backdoors—they’re using emotional ones. The voice of a loved one saying, “I’m in trouble” or “Can you send money?” creates immediate urgency. Victims don’t take time to verify because the sound is so real. The emotional manipulation is now engineered by algorithms. Tchong calls it “empathy hijacking”—when AI uses our natural human instincts against us.

He warns that without stronger public training, many people won’t detect these traps. “We’re hardwired to help someone in danger,” he said. “The problem is that danger might be synthetic now.”

The tech gap is growing faster than awareness

When asked whether tools exist to detect deepfakes, Tchong was direct: “Not for the average person.” While high-end forensic tools can analyze inconsistencies, most consumers lack access to that tech—or the time to use it. “This isn’t just about software. It’s about behavior,” he said. He stressed the need for a cultural shift in how we interpret audio and video. Instead of trusting what we hear, we must begin treating all urgent communication as suspect until verified.

The generational gap is also making the problem worse. Younger users may be slightly more skeptical, but seniors often rely on landlines and trust a familiar-sounding voice over caller ID. Tchong emphasized that scammers know how to exploit these behaviors.

The silver lining is human resilience

Although the outlook sounds grim, Tchong remains hopeful. He believes that public awareness can evolve just as quickly as the scams. “Once people understand that this tech exists, they adjust their filters,” he said. “It’s like learning to spot a phishing email. At first, it fooled everyone. Now, we know what to look for.” He thinks we’re at the beginning of that learning curve for AI-driven deception.

Ubertrends Academy is already building this future. The platform offers media literacy training, deepfake detection tips, and scenario-based workshops for families, schools, and businesses. “It’s not about fear—it’s about readiness,” he told me.

Ubertrends Academy tracks the Scam Surge

Tchong created Ubertrends Academy specifically to train people on how exponential technologies are reshaping society. The Scam Surge, as he defines it, is one of the most disruptive forces of our time. It’s not just affecting wallets—it’s damaging our ability to believe. His academy helps professionals, seniors, students, and policymakers recognize the early signs of tech abuse before it spreads.

The takeaway is simple: verify everything

What happened to Marco Rubio was only the beginning. AI will be used to mimic more leaders, more families, more banks. People will get hurt. But we can push back. That starts with education, updated defenses, and a new social contract around truth. We must adapt now—before AI-driven fraud becomes permanent.

About the Author

Editor-at-Large Alan Merritt


Alan Merritt is an international journalist and editor with over 12 years of experience across global news, television, and magazine media. Based in Las Vegas, with ties to New York and Paris, he serves as Editor-at-Large at Just Now News, a leading platform recognized for its Unscripted, Unfiltered, Unmissable coverage. In this role, he contributes a wide range of stories spanning human interest, culture, business, technology, and global affairs, bringing depth, clarity, and a global perspective to every piece.

