When ChatGPT first came out, I asked a panel of CISOs what it meant for their cybersecurity programs. They acknowledged that changes were coming, but reflected on past disruptive technologies, like iPods, Wi-Fi access points, and SaaS applications entering the enterprise. The consensus was that AI would be a similar disrupter, so they agreed that 80% (or more) of AI security requirements were already in place. Security fundamentals such as strong asset inventory, data security, identity governance, and vulnerability management would serve as an AI cybersecurity foundation.
Fast-forward to 2025, and my CISO friends were right, more or less. It's true that a solid and comprehensive enterprise security program acts as an AI security anchor, but the other 20% is more challenging than first imagined. AI applications are rapidly expanding the attack surface while also extending it to third-party partners and deep into the software supply chain. This means limited visibility and blind spots. AI is often rooted in open source and API connectivity, so there is likely shadow AI activity everywhere. Finally, AI innovation is moving quickly, making it hard for overburdened security teams to keep up.
Aside from the technical aspects of AI, it's also worth noting that many AI initiatives end in failure. According to research from S&P Global Market Intelligence, 42% of companies abandoned most of their AI initiatives in 2025 (compared to 17% in 2024). Moreover, nearly half (46%) of companies are halting AI proofs of concept (PoCs) before they even reach production.
Why do so many AI initiatives fail? Industry research points to cost, poor data quality, lack of governance, skills gaps, and scaling issues, among other causes.
With initiatives failing and a potpourri of security challenges, organizations have a long and growing to-do list when it comes to ensuring a solid AI strategy for both innovation and security. When I meet my CISO amigos these days, they often stress the following five priorities:
1. Start everything with a strong governance model
To be clear, I'm not talking about technology or security alone. In fact, the AI governance model must begin with alignment between business and technology teams on how and where AI will be used to support the organizational mission.
To accomplish this, CISOs should work with their CIO counterparts to educate business leaders, as well as corporate functions such as legal and finance, to establish an AI framework that supports business needs and technical capabilities. Frameworks should follow a lifecycle from conception to production and include ethical considerations, acceptable use policies, transparency, regulatory compliance, and (most importantly) success metrics.
In this effort, CISOs should review existing frameworks such as the NIST AI Risk Management Framework, ISO/IEC 42001:2023, the UNESCO recommendation on the ethics of artificial intelligence, and the RISE (research, implement, sustain, evaluate) and CARE (create, adopt, run, evolve) frameworks from RockCyber. Enterprises may need to create a "best of" framework that fits their specific needs.
2. Develop a comprehensive and continuous view of AI risks
Getting a handle on organizational AI risks starts with the basics, such as an AI asset inventory, software bills of materials, vulnerability and exposure management best practices, and an AI risk register. Beyond basic hygiene, CISOs and security professionals must understand the finer points of AI-specific threats such as model poisoning, data inference, and prompt injection. Threat analysts will need to keep up with emerging tactics, techniques, and procedures (TTPs) used in AI attacks. MITRE ATLAS is a good resource here.
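To make that risk register concrete, here is a minimal sketch of how the basics above might be captured in code. The field names, threat labels, and 1-to-5 scoring scale are illustrative assumptions, not a standard; a real register would align with the organization's own risk methodology.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One entry in the AI asset inventory."""
    name: str
    model_type: str                # e.g., "LLM", "classifier"
    third_party: bool = False      # extends risk to partners and the supply chain
    data_sources: list = field(default_factory=list)

@dataclass
class RiskEntry:
    """One row in the AI risk register, scored likelihood x impact."""
    asset: AIAsset
    threat: str                    # e.g., "prompt injection", "model poisoning"
    likelihood: int                # 1 (rare) to 5 (frequent)
    impact: int                    # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def top_risks(register: list, limit: int = 3) -> list:
    """Return the highest-scoring entries for remediation planning."""
    return sorted(register, key=lambda r: r.score, reverse=True)[:limit]
```

Even a toy structure like this forces the two conversations the basics demand: what AI assets exist, and which AI-specific threats apply to each.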
As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and frequently changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer response, business experience, and case law evolve. CISOs should anticipate further state, federal, regional, and industry regulations.
3. Pay attention to an evolving definition of data integrity
You'd think this would be obvious, as confidentiality, integrity, and availability make up the cybersecurity CIA triad. But in the infosec world, data integrity has traditionally focused on issues such as unauthorized data modification and data consistency. Those protections are still needed, but CISOs should expand their purview to include the data integrity and veracity of the AI models themselves.
To illustrate this point, here are some infamous examples of model data issues. Amazon created an AI recruiting tool to help it sort through resumes and choose the most qualified candidates. Unfortunately, the model was trained mostly on male-oriented data, so it discriminated against women candidates. Similarly, when the UK created a passport photo checking application, its model was trained on people with white skin, so it discriminated against darker-skinned individuals.
AI model veracity isn't something you'll cover as part of a CISSP certification, but CISOs must stay on top of it as part of their AI governance responsibilities.
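Training-data bias of the kind in those examples can sometimes be caught early with a simple representation check. The sketch below is a deliberately minimal illustration: it assumes each training record carries a demographic group tag and uses an arbitrary 20% threshold. Real bias audits rely on far richer fairness metrics and tooling.

```python
from collections import Counter

def representation_gaps(group_labels: list, threshold: float = 0.2) -> dict:
    """Return groups whose share of the training set falls below `threshold`.

    `group_labels` holds one demographic tag per training record. The
    threshold is an illustrative assumption, not a fairness standard.
    """
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < threshold}
```

A resume-screening dataset with nine male records for every female one, for instance, would flag the under-represented group before training ever begins.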
4. Strive for AI literacy at all levels
Every employee, partner, and customer will be working with AI at some level, so AI literacy is a high priority. CISOs should start in their own department with AI fundamentals training for the entire security team.
Established secure software development lifecycles should be amended to cover areas such as AI threat modeling, data handling, and API security. Developers should also receive training on AI development best practices, including the OWASP Top 10 for LLMs, Google's Secure AI Framework (SAIF), and Cloud Security Alliance (CSA) guidance.
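As one example of what that developer training might translate into, here is a deliberately naive input filter for prompt injection (LLM01 in the OWASP Top 10 for LLMs). The pattern list is a made-up illustration; pattern matching alone is trivial to bypass, and real defenses layer input and output filtering, least-privilege design, and human review.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) system prompt",
]

def flag_prompt(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

The value for training purposes is less the filter itself than the mindset: developers learn to treat every user-supplied prompt as untrusted input, the same way they treat form fields in a web app.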
End-user training should cover acceptable use, data handling, misinformation, and deepfakes. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and to customize training for different individuals and roles.
5. Remain cautiously optimistic about AI technology for cybersecurity
I'd categorize today's AI security technology as more "driver assist," like cruise control, than autonomous driving. Still, things are advancing quickly.
CISOs should ask their staff to identify discrete tasks, such as alert triage, threat hunting, risk scoring, and report creation, where they could use some help, and then begin researching emerging security innovations in those areas.
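Alert triage is a good illustration of that "driver assist" level of automation: a scripted pre-sort that queues the riskiest alerts for a human (or an AI assistant) rather than deciding anything on its own. The severity weights and alert fields below are assumptions for the sketch, not any product's API.

```python
# Hypothetical severity weights; a real deployment would tune these
# to its own alert taxonomy.
SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def triage(alerts: list) -> list:
    """Sort alerts so the riskiest land at the top of the analyst queue."""
    return sorted(
        alerts,
        key=lambda a: (SEVERITY_WEIGHT.get(a["severity"], 0), a.get("asset_value", 0)),
        reverse=True,
    )
```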
Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly how AI will be used to tune and optimize existing technology. There is a lot of innovation going on, so I believe it's worth casting a wide net across existing partners, competitors, and startups.
A word of caution, however: many AI "products" are really product features, and AI applications are resource intensive and expensive to develop and operate. Some startups will be acquired, but many may burn out quickly. Caveat emptor!
Opportunities ahead
I'll end this article with a prediction. About 70% of CISOs report to CIOs today. I believe that as AI proliferates, CISO reporting structures will change rapidly, with more CISOs reporting directly to the CEO. Those who take a leadership role in AI business and technology governance will likely be the first ones promoted.