Californian Deepfakes

Opinion By Heidi Boghosian Special to The Sacramento Bee October 24, 2025

California faces a surge in AI deepfake scams. Learn how social engineering exploits trust and practical steps individuals can take to become human firewalls.

The email seemed genuine: a Yale Law School student — and a fellow Armenian from California — was looking for part-time research work. When I asked how he found me, he said the Armenian Bar Association. That checked out, as my email address was listed on their directory. His LinkedIn profile showed Yale but little else. He mentioned a school deadline, so we scheduled a Zoom for later that day.

On screen, he looked real enough: early-20s, polite, slightly formal and ambitious. I agreed to send a list of tasks before confirming. Within an hour, the texts began. “Send the list tonight. My parents just died and I need the money.” Then came the ask: “Could you advance the first week’s pay?” Classic scam sequence: credentials, crisis, urgency.

When I reread his message, the tone felt robotic: short, repetitive and oddly flat. The longer I looked, the less law-school-level it seemed. What nearly fooled me wasn’t a stolen identity, it was a fabricated one — a digital phantom. Deepfakes can now conjure a believable person — face, voice and backstory — from scraps of public data. Generative AI stitches stock clips and synthetic speech into avatars that appear convincingly human on screen.

Across California, scams like this are multiplying, powered by the same technology that nearly duped me. The state now leads the nation in reported cybercrime losses — over $2.5 billion in 2024. In Los Angeles, a father sent $25,000 during a supposed emergency after an AI-cloned voice imitated his son. In Southern California, a woman lost more than $80,000 to a fake romantic partner after a deepfake mimicked soap-opera actor Steve Burton. Even major firms are vulnerable: in August 2025, cloud-software giant Workday confirmed a social-engineering breach that exposed customer-support data through a compromised vendor.

The strategy behind all these attacks is persuasion. Scammers impersonate information technology or human resources staff, invent emergencies and pressure people into surrendering access. AI supercharges those tricks: crafting flawless phishing emails, cloned voices and convincing deepfakes. Spoofed messages urge users to “verify your account.” Texts pose as banks or employers. Proofpoint reports that three-quarters of organizations were hit last year, while voice-cloning drove a 442% spike in phone scams, many mimicking family members in distress.

The best defense? It isn’t another firewall — it’s us.

Most breaches begin with social engineering: manipulating trust and fear to make ordinary people act against their own interests. Verizon’s 2025 Data Breach Investigations Report found that 60% of breaches involved the human element.

Technology evolves faster than instinct. California invests billions of dollars in cybersecurity infrastructure, yet the easiest entry point remains a helpful employee clicking the wrong link. Digital safety must become second nature: pause before sharing information, hang up and call back if a “bank” calls, never share login codes and report suspicious messages quickly. Workplaces should treat phishing drills like fire drills.

Policy still lags behind risk. Early drafts of the 2024 Budget Reconciliation Bill included a proposed 10-year freeze on new state AI laws, a move that drew such widespread criticism that the Senate stripped the language before passage. Meanwhile, California has moved forward: Senate Bill 53, the Transparency in Frontier AI Act, was signed into law in September 2025. It requires leading AI developers to publish safety frameworks, report critical incidents and protect whistleblowers.

But technology can only go so far. Cybersecurity isn’t just an IT issue, it’s a civic duty. In a state that leads the world in innovation — from Silicon Valley’s labs to UC Berkeley’s data-science classrooms — California’s strength lies not in its machines, but in its people. If residents treat digital vigilance like earthquake preparedness — a habit of community resilience — we can blunt the next wave of AI deception. The first line of defense doesn’t sit in a server rack. It’s each of us.

Everyday Americans — the people who depend on digital systems for work, learning and connection — must become human firewalls, defending the integrity of our personal data, our finances and our shared trust in one another.
