Financial Guardians, LLC

2026 is Coming in HOT! These Top Four Concerns Have Easy Fixes

With the holidays behind us and filing season on the horizon, here are the four biggest cybersecurity concerns for 2026 - and how we can guard against them.

Jan 22, 2026

Well, now that the holidays are behind us and we are quickly working through year-end statements and reports ahead of the upcoming filing season, it is time for a few last-minute preparations.

Varonis, a global cybersecurity firm specializing in data security and analytics, recently released its report on the top four cybercrimes to expect in 2026, and the list might surprise you.

We are going to outline the four biggest areas to watch in 2026 - and be sure to stick around for ways to combat these threats before they hit.

One: Hyper-Personalized Social Engineering

Phishing attacks are up over 700%, and countermeasures have been improving in response. However, when AI is coupled with a phishing campaign, it can research the target’s online presence, internal data, social media accounts, and any other available data store to build a profile of the victim. Now, instead of a generic email or text that appears to come from the USPS, the target receives a message that appears to come from somebody they know, chatting about their recent trip to Bali and asking for a hotel suggestion.

For years, we have relied on increased education and basic scanning tools. These are no longer sufficient. In fact, Gemini was recently tricked through a social engineering campaign into leaking data about its user to an external system via a calendar invite.

Detection efforts have proved less successful, dropping from 4.2 out of 5 to 2.75 out of 5.
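Content scanners alone will not catch a well-written AI phish, but checking whether a message actually came from the domain it claims to is still cheap and effective. Here is a minimal sketch in Python, assuming your mail provider stamps an Authentication-Results header on inbound mail (most major providers do); the file name is just for illustration:

```python
# Minimal sketch: flag messages whose claimed sender fails basic
# email authentication (SPF/DKIM/DMARC). Assumes the mail provider
# stamps an Authentication-Results header on inbound mail.
import email
from email import policy

def auth_failures(raw_message: bytes) -> list[str]:
    """Return the authentication checks that did not pass."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    results = msg.get_all("Authentication-Results") or []
    combined = " ".join(results).lower()
    return [check for check in ("spf", "dkim", "dmarc")
            if f"{check}=pass" not in combined]

# Hypothetical usage on a saved message:
with open("suspicious.eml", "rb") as f:
    failed = auth_failures(f.read())
if failed:
    print("Treat with caution - failed checks:", ", ".join(failed))
```

This will not stop a phish sent from a legitimate but compromised account, but it cheaply filters out the spoofed-sender variety before a human ever reads it.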

Two: Deepfakes Going Deeper

The idea of deepfakes - AI-generated content (pictures, audio, video, etc.) - has been around for a while. As far back as 2020, concerns were being raised about deepfakes generated from online content such as TikTok videos. Now, years later, with the power of AI platforms in anyone’s hands, creating a hyper-realistic ‘copy’ of somebody is well within reach. Heck, I created a super-realistic deepfake of myself, and it cost me less than $4.

With this lower cost and greater realism, the risk of somebody creating a deepfake with malicious intent is higher than ever. Picture deepfake audio impersonating a client and calling your office: at the moment, such a fake has roughly an 85% chance of passing identity verification.

This level of technology is not just for infiltrating large firms; here are a few examples that firms of any size may experience:

  • A disgruntled ex-spouse using a deepfake to request income documents or contact information to use in a support dispute.

  • A child using a deepfake to obtain parental tax information for a scholarship or grant; while not malicious in intent, still a breach concern.

  • A competitor using a deepfake to try and steal a client (or just client data) for multiple purposes.
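One low-tech defense still beats deepfake audio: never verify a caller on the call itself. Send a one-time code to the contact information already on file and have the caller read it back. Below is a minimal sketch of that workflow, where send_to_number() is a hypothetical stand-in for whatever SMS or client-portal messaging service your firm already uses:

```python
# Minimal sketch of out-of-band caller verification: generate a
# one-time code, deliver it over a channel already on file (never
# one the caller supplies), and confirm it before discussing
# anything sensitive. send_to_number() is a hypothetical stand-in
# for the firm's messaging service.
import secrets
import time

CODE_TTL_SECONDS = 300  # codes expire after five minutes

def issue_code(phone_on_file: str, send_to_number) -> tuple[str, float]:
    code = f"{secrets.randbelow(1_000_000):06d}"  # unpredictable 6 digits
    send_to_number(phone_on_file, f"Your verification code: {code}")
    return code, time.monotonic()

def verify(expected_code: str, issued_at: float, supplied_code: str) -> bool:
    if time.monotonic() - issued_at > CODE_TTL_SECONDS:
        return False  # expired; reissue instead of accepting stale codes
    return secrets.compare_digest(expected_code, supplied_code)
```

The point is the channel, not the code: a deepfake voice cannot read back a code delivered to the real client's phone.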

Three: Overprivileged AI Platforms

More firms are deploying AI platforms in their environments, and many are opting for higher-tier plans that promise greater security and data separation. Newer platforms and models even allow integration directly with internally managed data sources, including databases, drive shares, and more.

Typically, these environments are built and configured by individuals who are ‘trying out’ the tools and do not have a deep understanding of how they truly operate. As such, small holes and data exposures seep in and grow into larger complications.

Unfortunately, between these loosely configured environments and the rapid pace of development, breach rates continue to climb. One of the largest AI platforms on the market announced six breaches in 2025.

These threats are multi-tiered: data can be exposed accidentally through poor coding or testing practices, or a malicious actor can breach the system from outside. AI platforms are currently targeted at several times the rate of other cloud platforms.
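The fix here is old-fashioned least privilege: never point an AI integration at the whole drive. Here is a minimal sketch of the idea, where fetch_for_model() is a hypothetical hook standing in for however your platform loads documents, restricted to an explicit allow-list:

```python
# Minimal sketch of a least-privilege file gate for an AI integration:
# the model's file tool can only read paths under an explicit
# allow-list, so a misconfigured or compromised agent cannot wander
# the rest of the drive. Paths and the fetch_for_model() hook are
# illustrative assumptions, not any specific platform's API.
from pathlib import Path

ALLOWED_ROOTS = [
    Path("/data/ai-share/newsletters").resolve(),
    Path("/data/ai-share/templates").resolve(),
]

def fetch_for_model(requested: str) -> str:
    target = Path(requested).resolve()  # collapses "../" traversal tricks
    if not any(target.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise PermissionError(f"{requested} is outside the AI allow-list")
    return target.read_text()
```

Copy the documents the AI genuinely needs into a dedicated share and grant access to that share only; client source data never enters the integration at all.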

Four: Abusing AI Agents and LLMs

This threat is a bit different than the others as it is less about what you can do and more about what the ‘bad guys’ can do.

Instead of using pre-built systems with guardrails, such as Gemini, Perplexity, or ChatGPT, malicious actors can download many of the same core AI models for free and deploy them locally in their own environments. There, they strip out the ethical rails, leaving a powerful platform available for nefarious purposes.

For example, a hacker can deploy a local model across multiple AI agents, scaling their operation from one attack at a time to several hundred running in parallel. The number of random or targeted attacks a business defends against on a daily basis has drastically increased as a result, stretching the resources of a small firm to its limits.
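On the defensive side, per-source rate limiting is one way to keep a flood of automated agents from exhausting a small firm's systems. A minimal token-bucket sketch follows; the numbers are illustrative, and wiring it into your web framework or firewall is up to you:

```python
# Minimal token-bucket rate limiter: each source IP may burst up to
# CAPACITY requests, refilled at RATE tokens per second. Illustrative
# drop-in logic for a login page or client-portal endpoint.
import time
from collections import defaultdict

RATE, CAPACITY = 1.0, 10.0  # 1 request/sec sustained, bursts of 10

# per-IP state: [tokens remaining, timestamp of last update]
buckets: dict[str, list[float]] = defaultdict(
    lambda: [CAPACITY, time.monotonic()]
)

def allow_request(source_ip: str) -> bool:
    tokens, last = buckets[source_ip]
    now = time.monotonic()
    tokens = min(CAPACITY, tokens + (now - last) * RATE)  # refill
    if tokens < 1.0:
        buckets[source_ip] = [tokens, now]
        return False  # over the limit; reject or challenge the request
    buckets[source_ip] = [tokens - 1.0, now]
    return True
```

A hundred parallel agents hammering the same endpoint all hit the same ceiling, while legitimate clients at normal request rates never notice it.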

I guess we could say that the AI Revolution has directly hit cybersecurity. With all of these AI-powered threats, it is more important than ever to stay aware and put safeguards in place.

For Guardian Lite (Paid) Members, stick around for actions to take to protect yourself, your firm, and your client data.
