AI adoption in enterprise organizations has been gathering momentum for a couple of years now. In fact, in a recent study conducted by the International Data Corporation (IDC) of more than 4,000 business leaders responsible for AI decisions, 68% of respondents say their organization is using AI today. The pack-leading ‘Frontier Firms’ claim they’re achieving returns 3x higher than slow adopters of AI. The question is no longer whether you’re using AI for productivity benefits. Personally, I think the bigger (unsolved) question is whether AI is being adopted responsibly. There are security challenges that CISOs must address at ‘Frontier Firms’. Let’s walk through the main ones.
First Things First… What’s a ‘Frontier Firm’?
Basically, a Frontier Firm is an organization leading an internal transformation with an AI-first mindset. These organizations span geographies, industries, and sizes. Their push to move beyond experimentation to enterprise-scale transformation is intended to create competitive advantages in the era of AI. Frontier Firms share three attributes:
- They integrate AI seamlessly into the flow of human ambition, amplifying creativity and accelerating decision-making through everyday workflows. Translation: redesign clunky processes to leverage as much AI as possible, wherever possible, whenever possible.
- They foster innovation through AI-driven solutions, empowering everyone from frontline employees to executives to build agentic solutions that address real business challenges. Translation: every employee has an AI assistant and is encouraged to build more agentic ‘helpers’.
- They prioritize observability at every layer, embedding governance, security, and compliance into all AI systems to ensure visibility, control, and trust as they scale. Translation: we know this stuff sounds scary. We realize you need the tools to know what’s happening, where, and how, so you can prevent some truly bad S!&# from happening.
It’s the third attribute where I believe the entire promise of AI within modern enterprise organizations will either make it or magnificently implode after a security or privacy breach forces the C-suite to completely reassess whether their ROI math truly makes sense.
The Security Challenges within Frontier Firms
Frontier Firms that continue to drive AI adoption, rapid scaling, or high-tech innovation face a unique set of security challenges for which traditional, perimeter-based defenses are insufficient. The core security challenges and concerns CISOs have include:
1. Expansion of the attack surface.
The explosive growth of agentic AI means an explosive growth in non-human identities. By itself, this isn’t a security concern. Think of it as doubling your user base: not really an issue if you have good systems that onboard new users with the same level of vigilance that helps them be good players in this security “team sport”. Where non-human identities do raise security concerns for CISOs is that they tend to inherit broad permissions from their human owners, leaving them with excessive privileges, and most organizations are simply nowhere close to implementing a formal strategy for managing identities for autonomous agents, bots, or machine-based actors. The sketch below shows what auditing for that inheritance pattern could look like.
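To make this concrete, here’s a minimal Python sketch of the kind of check a formal non-human identity strategy would automate: compare each agent’s granted permissions against a least-privilege baseline and flag the excess. The identity names, permission strings, and baseline are hypothetical; in practice the inventory would be exported from your IdP or cloud IAM.

```python
# Minimal sketch: flag non-human identities whose granted permissions
# exceed a least-privilege baseline. All names and permission strings
# below are hypothetical; export the real inventory from your IdP/IAM.
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    is_human: bool
    permissions: set[str] = field(default_factory=set)

# Baseline: the only permissions this class of agent should need.
AGENT_BASELINE = {"mail.read", "calendar.read"}

inventory = [
    Identity("jane.doe", is_human=True,
             permissions={"mail.read", "mail.send", "files.readwrite"}),
    # The risky pattern: the agent inherited its owner's broad permissions.
    Identity("jane-scheduling-agent", is_human=False,
             permissions={"mail.read", "mail.send", "files.readwrite"}),
]

for ident in inventory:
    if ident.is_human:
        continue  # this audit targets non-human identities only
    excess = ident.permissions - AGENT_BASELINE
    if excess:
        print(f"Over-privileged agent {ident.name}: revoke {sorted(excess)}")
```

Running this flags the scheduling agent for `mail.send` and `files.readwrite`: permissions its human owner legitimately holds, but the agent never needed.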
2. Weaponization of AI by threat actors.
Is your Frontier Firm busy adopting AI tools to completely revolutionize how it operates? Awesome. Guess what? So are threat actors. When threat actors also leverage AI capabilities, your Frontier Firm will face many of the same security challenges, but on steroids. Expect everything from supercharged phishing (highly personalized, high-quality lures in any language) that has evolved beyond most organizations’ social engineering awareness campaigns, to deepfakes in Business Email Compromise (BEC) that rely on audio or video impersonation of executives. Threat actors will also use AI to find and exploit weaknesses at a pace that likely outpaces what human teams can patch.
3. Insufficient AI Governance and the Rise of Shadow AI.
By now, the baseline level of AI awareness means employees at Frontier Firms have not only explored a bunch of AI tools, they’ve developed a preference for which one they’d rather use. These employees are therefore more likely to use unsanctioned public AI tools (e.g., consumer ChatGPT or a personal subscription to Google Gemini or Anthropic’s Claude) and expose your intellectual property or sensitive corporate information. Lines of business frustrated with security red tape, and under pressure to aggressively implement AI tooling within their business processes, could procure and implement their own AI technologies without appropriate oversight. A legitimate concern for CISOs is that when either scenario occurs, their SecOps teams will be forced to play reactive whack-a-mole, rather than being backed by guardrails and a proactive AI governance posture that mitigates these risks. One lightweight guardrail is network-level awareness of AI endpoints, sketched below.
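As one illustration of a proactive guardrail, here’s a minimal sketch of an allowlist-based egress check that distinguishes a sanctioned internal AI endpoint from known public ones. The domains and policy actions are illustrative assumptions, not a complete inventory or an endorsement; real enforcement would typically live in a secure web gateway or forward proxy fed by your AI governance policy.

```python
# Minimal sketch of an allowlist-based egress check for AI endpoints.
# Domain lists are illustrative assumptions; a real deployment would
# source them from your AI governance policy and threat intel feeds.
from urllib.parse import urlparse

SANCTIONED_AI_DOMAINS = {"copilot.contoso-internal.example"}  # hypothetical
KNOWN_PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def classify_egress(url: str) -> str:
    """Return the policy action for an outbound request."""
    host = urlparse(url).hostname or ""
    if host in SANCTIONED_AI_DOMAINS:
        return "allow"
    if host in KNOWN_PUBLIC_AI_DOMAINS:
        return "block-and-alert"  # unsanctioned public AI tool
    return "allow"  # non-AI traffic is out of scope for this sketch

print(classify_egress("https://chat.openai.com/api/conversation"))
# -> block-and-alert
```

The design point isn’t the blocking itself; it’s that the alert gives SecOps visibility into shadow AI usage patterns before they become a breach post-mortem.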
4. Too many security tools.
It’s not uncommon for enterprise organizations to have dozens of tools within their security stack. Is there a benefit in not becoming dependent on a single vendor ecosystem? I think the answer is often yes, but in practice, most organizations that try to avoid that ‘single point of failure’ tend to create a messy patchwork of poorly integrated solutions. Where the alert noise is debilitating. Where important alerts get missed by teams woefully understaffed to deal with standard-fare issues, let alone the emerging risks that AI at scale brings. CISOs of Frontier Firms will need to continue the momentum to trim their security stack and empower their SOC and SecOps teams with the tools that help them be more effective. I am personally glad that Microsoft continues to invest in this real problem, improving current tools and launching new ones that address many of the integration and interoperability issues I’ve seen create more problems than they solve.
What’s the horizon looking like?
I believe the list of challenges I’ve described above is just the beginning. The reality will continue to evolve, and the most practical thing CISOs can do right now is establish, or completely redesign, their AI governance. That likely means partnering closely with risk and compliance teams to lead the implementation of a comprehensive AI governance framework. Thanks for reading, and please reach out if you’d like to discuss the practical next steps you can take… or if you have a question or just want to chat more!
