A Shocking Case in California
On August 27, 2025, ABC7 Los Angeles reported the story of Abigail Ruvalcaba, a South L.A. woman who was allegedly deceived by scammers using AI-generated deepfake videos and a cloned voice to impersonate General Hospital star Steve Burton. The impostor first contacted her on Facebook Messenger, then moved the conversation to WhatsApp, professing love and a shared future. Experts who reviewed a saved clip told ABC7 the video was likely produced with AI, and Burton, after viewing it, said it “sounds like my voice for sure, 100%.”
According to a Los Angeles Police Department incident report cited by ABC7, Ruvalcaba sent at least $81,304 in gift cards, cash, and bitcoin. Under the scammer’s influence, she then sold her Harbor City condo for $350,000, proceeds she was preparing to send when her daughter intervened. The family has filed suit to reverse the home sale, arguing lack of capacity and fraudulent inducement; opposing counsel disputes that claim.
Entertainment Weekly and People further note that the family is seeking relief while Ruvalcaba faces bankruptcy and possible eviction—a reminder of how quickly AI-powered romance fraud can strip a victim of savings, housing, and stability.
The Tools of Deception
One saved clip shows the fake “Burton” saying, “Hello, Abigail. I love you so much, darling… my love.” That level of personalization—delivered via synthetic video and voice—is what makes today’s scams so persuasive. Burton told ABC7 that he knows of “hundreds” of instances where people have lost money to impostors using his likeness and warned fans he would never ask for money.
Sidebar: Red Flags of Deepfake Romance Scams
- Unsolicited celebrity contact (especially moving quickly to WhatsApp/Signal).
- Rapid love-bombing and future-planning after minimal interaction.
- Requests for secrecy and to avoid family/friends.
- Payment via gift cards/crypto/wires—hard to claw back.
- Sudden asset moves (selling a home, cashing out retirement).
These patterns match the Ruvalcaba case as reported by ABC7.
Not Just One Case
- San Diego (2024): A woman lost her life savings to a fraudster impersonating Keanu Reeves, who used AI-aided content to build trust, illustrating the same playbook: impersonation, migration to private chats, and escalating requests for money.
- France (2023–2025): A 53-year-old woman, “Anne,” lost €830,000 after scammers used doctored/AI media to pose as Brad Pitt; the episode prompted warnings from Pitt’s representatives and widespread coverage by major outlets.
The Mental Health Factor
Ruvalcaba’s daughter said her mother lives with bipolar I disorder, a vulnerability exploited by the scammer’s narrative (e.g., claims of property losses in wildfires, promises of a shared “beach house”). AI didn’t create the vulnerability—but it magnified it by making the illusion feel “real.”
Sidebar: What Families Can Do
- Talk early and often about AI scams—normalize asking for a second opinion.
- Watch for financial anomalies: new crypto apps, gift-card purchases, sudden home sales.
- Insist on verification (celebs don’t DM fans for money).
- Report quickly to banks, local police, and IC3.gov; speed matters for any chance of recovery.
(These steps align with patterns and warnings highlighted in the ABC7 report and national coverage.)
Legal and Ethical Challenges
This case raises cutting-edge questions: Can a real-estate transaction be unwound when a seller was deceived through AI impersonation? How should courts weigh capacity and undue influence when synthetic media is used to manufacture a relationship? The pending lawsuit will test those boundaries, and outcomes could shape future remedies for AI-assisted romance fraud.
Why It Matters to Spousal-Fraud.com
“Spousal fraud” no longer requires a spouse—or even a real person. An AI-fabricated partner can foster emotional dependence, trigger asset liquidation, and leave victims in bankruptcy or homelessness, all while staying physically anonymous. Cases like Ruvalcaba’s underscore the need for platform safeguards, public education, and legal modernization to meet AI-era abuse.

