AI and the New Face of Fraud: How to Protect Your Identity and Finances in 2026
Submitted by JMB Financial Managers on January 23rd, 2026
Artificial intelligence (AI) may be the most revolutionary technology of our time, with industries scrambling to embrace its possibilities. AI’s early influence seems similar to the positive disruptions brought about by past innovations such as the personal computer, the internet, and the smartphone.
However, as AI reshapes everything from business to education to healthcare, criminals are also exploiting it in innovative, creative, and often undetectable ways. Last year saw a surge in AI-generated scams, impersonation fraud, and synthetic identity theft that shows no signs of abating as bad actors refine their use of the evolving technology.1 For high-net-worth individuals and families, these risks are especially acute because of their public visibility and complex digital footprints.
Although we are not cybersecurity or AI professionals, we want to help you understand what’s changing, why it matters, and practical steps you can take to help manage threats to yourself and your family. To paraphrase the old car ad, this is not your father’s identity theft!
Executive Summary:
- AI is transforming how fraud happens: Criminals now use artificial intelligence to conduct scams that are more convincing and harder to detect than ever before.1
- Losses continue to rise: Consumers reported more than $12.5 billion in losses to fraud in 2024, a 25 percent increase over 2023.2
- Impersonation is skyrocketing: The Identity Theft Resource Center reports a 148 percent increase in impersonation scams in 2024, overtaking every other form of identity crime.3
- Deepfakes and voice cloning are spreading fast: Global deepfake fraud rose 700 percent in Q1 2025, and synthetic identity document fraud jumped 378 percent.4
- Real-world cases prove the danger: Scammers have cloned voices from three seconds of audio, fooling victims into sending thousands of dollars.5
- Affluent families face heightened exposure: Public visibility and large digital footprints make them preferred targets for AI-driven phishing, ransomware, and social-engineering attacks.6
- Awareness and action matter: Using family passwords, verifying urgent requests, limiting social media exposure, enabling multifactor authentication, and engaging cybersecurity professionals can help you manage risk.
Why Is AI-Driven Fraud Increasing So Quickly?
Artificial intelligence (AI) has become both an accelerator for innovation and a new weapon for criminals. The technology’s ability to create text, audio, images, and video that appear authentic has changed the nature of fraud. According to the Identity Theft Resource Center’s 2025 Trends in Identity Report, impersonation scams surged 148 percent last year, the largest increase on record.3
AI allows criminals to mass-produce realistic content in seconds. Using generative tools, they can create fake websites, write persuasive messages, or mimic voices and faces. Automation eliminates many of the human errors that once betrayed scams. Law-enforcement agencies acknowledge that AI now enables global fraud networks to operate with professional-grade quality and speed.1
Traditional identity crimes such as credit-card theft or account takeovers have expanded into housing, healthcare, education, and digital services.1 Any activity that relies on verifying identity is now more vulnerable. Fraudsters use AI to create documents that can deceive even advanced verification systems. Globally, deepfake fraud increased by 700 percent in Q1 2025 compared to the same period a year earlier, and synthetic identity document fraud rose by 378 percent.4
For more on how to increase your financial security, read here.
How Does AI Fraud Actually Work?
AI tools are inexpensive, accessible, and constantly improving. They cut the cost of deception while increasing scale and believability.7 Criminals use them to impersonate people, companies, and institutions across every communication channel.
AI-Generated Text: Scammers use generative text tools to create credible emails, social media posts, and chat messages. Language models correct spelling and grammar, eliminating the linguistic errors that once warned victims. Fraudulent websites now include AI-powered chatbots that guide users toward malicious links.7
AI-Generated Images: Fraudsters create realistic headshots and identification documents that support false identities. Generative-image tools produce photos of people who do not exist but look authentic enough to bypass casual verification.
AI-Generated Audio: Voice-cloning software can replicate a person’s voice using just a few seconds of audio.5 Criminals impersonate relatives, executives, or public figures to elicit payments or sensitive data. The emotional realism of these calls makes them especially effective.
AI-Generated Video: Deepfake video tools create lifelike clips of public officials, business leaders, or family members. Some scammers even simulate real-time video calls to add legitimacy to fraudulent schemes.
These applications have made it nearly impossible for the average person to distinguish between authentic and fabricated digital interactions.5
For more information on how to protect your identity, read here.
What Are Some Real-World Examples of AI Scams?
A striking example appeared in The Wall Street Journal in April 2025.5 A Colorado woman received a panicked call from someone who sounded exactly like her adult daughter. The caller claimed she had been abducted and demanded $2,000. Convinced by the perfect voice match, the mother wired the money. Only later did she discover her daughter had been home all along. Professionals later confirmed the voice had been cloned from short online clips.
Security researchers estimate that as little as three seconds of recorded audio is sufficient to produce an 85 percent accurate clone.5 This level of realism demonstrates how emotional manipulation can override logic when a victim believes they are hearing a loved one in distress.
How Large Is the Broader Fraud Problem?
The Federal Trade Commission (FTC) reported that consumers lost more than $12.5 billion to fraud in 2024, a 25 percent increase over 2023.2
- Investment scams caused $5.7 billion in losses, up 24 percent.2
- Imposter scams accounted for $2.95 billion in losses.2
- Government-imposter scams rose to $789 million.2
- Imposter scams were the most commonly reported category overall, with the FTC receiving fraud reports from 2.6 million consumers. Online shopping fraud ranked second, followed by scams related to business opportunities and investments.2
The FTC notes that tactics evolve constantly, and AI is now accelerating that change.2
Who Is Most at Risk from AI-Enabled Fraud?
Wealthier families face a disproportionate risk due to their visibility and access to liquid assets.6 Cybercriminals often harvest publicly available information to craft highly personalized attacks. They may:
- Create synthetic identities that mirror a victim’s professional or family background.
- Launch phishing campaigns targeting private bankers or assistants.
- Use ransomware to hold digital assets or confidential data hostage.
- Deploy social-engineering techniques to trick victims into voluntary transfers.7
High-net-worth individuals are more likely to pay extortion demands, making them prime targets. Criminals also exploit the extensive online footprints of affluent families, using images, posts, and voice clips to generate convincing deepfakes.6
If you own a business, you have even more to watch out for and protect. To learn how to protect small businesses from scammers, read more here.
What Can Individuals and Families Do to Protect Themselves?
Concern about AI-driven fraud is widespread, with more than 83 percent of respondents in a 2025 survey stating that they fear AI-powered scams.8 Fortunately, there are simple defensive steps you can consider.
Create a Family Code Word: Establish a shared word or phrase to use in emergencies or for financial requests. Using it during calls or messages confirms identity and helps counter voice-cloning schemes.5
Be Observant: Look closely for signs of manipulation in photos or videos, such as blurred edges, inconsistent lighting, or unnatural movement. In phone calls, pay attention to pacing and word choice; cloned voices often repeat phrases or pause unnaturally.5
Limit Personal Exposure: Restrict social media visibility, manage the sharing of personal information, and try to avoid posting recognizable backgrounds. Managing available data limits what criminals can replicate.5
Verify Urgent Requests: Hang up and call back using verified contact numbers before sending money or personal details. Treat any request for immediate payment with caution.5
Be Cautious of Urgency: Scammers exploit fear and pressure. Take time to confirm facts before acting, even when a request seems emotional or dire.9
Trust Your Instincts: If a video call or message feels wrong, test it. Ask the person to perform a spontaneous action, such as waving or turning on a light. Glitches or delays can expose deepfakes.9
Use Multifactor Authentication: Strengthen key accounts with an additional verification step. Choose long passphrases instead of short passwords, and consider a password manager tool.10
Stay Secure While Traveling: Public Wi-Fi networks in airports or hotels are common hacking points. Back up your data before traveling, use a virtual private network (VPN), and avoid accessing financial accounts on public or shared networks.10
Engage Professional Cybersecurity Support: Some affluent families hire specialists to monitor digital exposure, conduct penetration testing, and train household members.10
Freeze Your Credit: Consider requesting a credit freeze from all three major bureaus (Equifax, Experian, and TransUnion) to block the opening of unauthorized new accounts. A freeze does not affect your credit scores.11
Why Does Awareness Remain the Most Powerful Defense?
AI has altered the mechanics of fraud, but the fundamentals of prevention remain unchanged. Awareness, skepticism, and careful verification are still the best defenses. Discussing family security protocols, reviewing online exposure, and practicing restraint when sharing information can help you manage what technology alone cannot stop.
Even the most sophisticated fraud relies on an emotional reaction. Slowing down, verifying identity, and following basic security routines can help neutralize many of these schemes before they succeed.
We’re Here to Help
While we are not cybersecurity professionals, we guide clients to resources that can help them integrate digital safety practices into their broader financial strategies. If you would like to review strategies to manage your family’s finances and privacy in an age of AI-driven scams, we are here to help.
--
About the Author
Jack Brkich III is the president and founder of JMB Financial Managers. A CERTIFIED FINANCIAL PLANNER™ professional, Jack is a trusted advisor and resource for business owners, individuals, and families. His advice on wealth creation and preservation techniques has appeared in publications including The Los Angeles Times, NASDAQ, Investopedia, and The Wall Street Journal. To learn more, visit https://www.jmbfinmgrs.com/.
Connect with Jack on LinkedIn or follow him on Twitter.
Sources:
1. Biometric Update, July 14, 2025
https://www.biometricupdate.com/202507/impersonation-scams-surge-as-ai-f...
2. Federal Trade Commission, March 10, 2025
https://www.ftc.gov/news-events/news/press-releases/2025/03/new-ftc-data...
3. CBS News, June 25, 2025
https://www.cbs8.com/article/news/local/impersonation-scams-surge-148-pe...
4. Forbes, October 17, 2025
https://www.forbes.com/councils/forbestechcouncil/2025/10/17/restoring-t...
5. Wall Street Journal, April 5, 2025
https://www.wsj.com/tech/personal-tech/the-panicked-voice-on-the-phone-s...
6. Financial Times, March 22, 2024
https://www.ft.com/content/169179ed-cc1f-467c-be1c-6668781604d6?utm_sour...
7. Federal Bureau of Investigation, December 3, 2024
https://www.ic3.gov/PSA/2024/PSA241203
8. Abrigo, June 24, 2025
https://www.businesswire.com/news/home/20250624085614/en/83-of-Americans...
9. Forbes, December 16, 2024
https://www.forbes.com/sites/frankmckenna/2024/12/16/5-ai-scams-set-to-s...
10. RBC, October 2025
https://www.rbcwealthmanagement.com/en-us/insights/how-high-net-worth-in...
11. U.S. News & World Report, May 4, 2024
https://www.usnews.com/360-reviews/privacy/identity-theft-protection/10-...
