When asked to define itself, artificial intelligence had a ready answer: “the simulation of human intelligence in machines that are programmed to think, learn, and make decisions.”
That definition, courtesy of Microsoft Copilot, is both impressive and unsettling, because it perfectly captures the double-edged nature of AI. The same technology driving innovation, automating workflows, and accelerating business growth is also being weaponized to deceive, impersonate, and manipulate.
AI isn’t just reshaping how organizations operate. It’s rewriting the rules of cybersecurity, giving threat actors the power to think faster, adapt smarter, and strike harder than ever before.
Fooled by a fake
In early 2024, a finance employee in Hong Kong was tricked into wiring $25 million after joining a video call with what appeared to be their company’s CFO, later revealed to be an AI-generated deepfake.

The ultimate social engineer.

AI is quickly becoming the scammer’s new best friend. By training on massive amounts of text, images, audio, and video, it can now mimic people with uncanny accuracy. Voices, writing styles, even facial expressions. That realism has completely changed the game for social engineering.
Gone are the days when the telltale sign of a phishing email was improper grammar. AI-driven phishing messages are polished, personal, and often indistinguishable from the real thing. In fact, AI-generated spear-phishing emails now fool more than 50% of recipients, compared to about 12% for traditional phishing attempts.
Even more chilling is the ability to create deepfake audio and video of real people. What used to require hours of recordings now takes less than 10 minutes of voice data to clone someone convincingly.
The next line of defense isn’t just technology; it’s awareness. Teams need to slow down, verify through secondary channels, and learn to question what they see and hear. Because when it comes to AI, even the most convincing message might not be coming from who you think.
Using AI without losing control.
AI is changing how we work. From drafting emails to analyzing data, these tools can boost productivity. But without proper oversight, your organization can be exposed to unnecessary risk.
Hallucinations: AI can sound confident even when it’s wrong. Always verify outputs before relying on them to make business decisions.
Copyright & Content Ownership: Some AI tools may generate text or visuals pulled from copyrighted sources. If you reuse that content publicly, you could face legal exposure.
Data Privacy: Anything entered into a public AI platform can potentially be stored or reused to train future models. That means proprietary data, client information, or health records should never be entered into free AI tools.
Stay protected.
Use only approved AI tools vetted by your organization, and make sure employees know to double-check outputs before relying on them. Update your Acceptable Use Policy to define when and how AI can be used, and what should never be shared.
The fix isn’t to stop using AI; it’s to use it responsibly. With the right awareness and oversight, it can be a powerful accelerator for your organization, not a new entry point for risk.
Market impact.
So far, we’ve seen limited insurance claims directly tied to AI—few losses have been traced solely to AI-enabled social engineering or data exposure from public tools. That’s likely to change as adoption grows. As organizations and employees rely more on AI, related incidents and claim activity are expected to rise.
Reputation risk is also on the rise. AI has made it easier to create convincing but false stories, images, or videos about organizations. Monitoring how your brand appears online and having a clear communications plan in place can help mitigate damage if misinformation spreads. A quick response, backed by prepared templates and defined roles, can make all the difference.
On the insurance side, most cyber policies remain silent on AI-related exposures. For companies developing or selling AI tools, liability coverage typically falls under a Technology Errors & Omissions (Tech E&O) policy. Because AI risks are still difficult to define, carriers are cautious about adding specific coverage language. As real-world claims emerge, expect clearer terms and exclusions to take shape across the market.

Regulation.
Several states, including California, Colorado, Texas, and Utah, have already introduced early AI legislation focused on transparency and accountability, and we expect broader regulation to follow.
Like every major technology shift before it, AI brings both opportunity and risk. It can streamline operations, improve decision-making, and unlock new levels of efficiency, but it can also be used to mislead, manipulate, or expose sensitive information if left unchecked.
The difference between gaining an edge and creating exposure comes down to how you manage it. Setting clear boundaries, training your team, and aligning cybersecurity with responsible AI use can help your organization take advantage of the technology without opening the door to new risks.
Yes/And: Our Take
Yes, AI brings incredible potential to streamline operations and drive innovation, and it also demands new levels of vigilance. From updating acceptable use policies to training employees on deepfake awareness, organizations need to approach AI as both a business tool and a risk factor.
At M3, we help clients navigate this dual reality, strengthening cybersecurity strategies, aligning policies with evolving technologies, and ensuring your insurance program keeps pace with emerging AI-driven threats.
The AI threat landscape is changing fast. Connect with your M3 Client Executive to make sure your organization is ready for what’s next.
Defend against AI-driven cyber attacks.
All month long, follow along with insights on today’s biggest cyber risks, from third-party exposures to AI, litigation trends, and more. Each week builds toward our feature event: a one-hour webinar with Arctic Wolf and M3 Insurance on AI-driven attacks. Learn how AI is transforming cyber threats, and how to protect your organization.