As organizations fold cybersecurity into governance, risk and compliance (GRC), it is essential to revisit existing GRC programs to ensure the growing use, and risks, of generative and agentic AI are addressed so organizations continue to meet regulatory requirements.
“[AI] It’s a hugely disruptive technology in that it’s not something you can put into a box and say ‘well, that’s AI’,” says Jamie Norton, member of the ISACA board of directors and CISO with the Australian Securities and Investments Commission (ASIC).
It’s hard to quantify AI risk, but data on how the adoption of AI expands and transforms an organization’s risk surface offers a clue. According to Check Point’s 2025 AI security report, 1 in every 80 prompts (1.25%) sent to generative AI services from enterprise devices carried a high risk of sensitive data leakage.
CISOs face the challenge of keeping pace with business demands for innovation while securing AI deployments with these risks in view. “With their pure security hat on, they’re trying to stop shadow AI from becoming a cultural thing where we can just adopt and use it [without guardrails],” Norton tells CSO.
AI is not a typical risk, so how do GRC frameworks help?
Governance, risk and compliance is a concept that originated with the Open Compliance and Ethics Group (OCEG) in the early 2000s as a way to define a set of critical capabilities for addressing uncertainty, acting with integrity, and ensuring compliance in support of organizational objectives. Since then, GRC has evolved from rules and checklists focused on compliance into a broader approach to managing risk. Data protection requirements, the growing regulatory landscape, digital transformation efforts, and board-level focus have driven this shift in GRC.
At the same time, cybersecurity has become a core business risk, and CISOs have helped ensure compliance with regulatory requirements and establish effective governance frameworks. Now, as AI expands, there is a need to incorporate this new class of risk into GRC frameworks.
However, industry surveys suggest there is still a long way to go for the guardrails to catch up with AI. Only 24% of organizations have fully enforced enterprise AI GRC policies, according to the 2025 Lenovo CIO playbook. At the same time, AI governance and compliance is the number one priority, the report found.
The industry research suggests CISOs will need to help strengthen AI risk management as a matter of urgency, driven by leadership’s hunger to realize some payoff without moving the risk dial.
CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. “They’re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don’t let it be something that kills the business if we do it wrong,” says Marcus.
To support risk-aware adoption of AI, Marcus’ advice is for CISOs to avoid going it alone and to foster broad trust and buy-in to risk management across the organization. “The really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we’re all in it together and you’re not here to slow them down.”
This approach should help encourage transparency about how and where AI is being used across the organization. Cybersecurity leaders must try to gain visibility by establishing an operational security process that captures where AI is being used today or where there is an emerging request for new AI, says Norton.
“Every single product you’ve got these days has some AI, and there’s not one governance forum that’s picking it all up across the spectrum of different types [of AI],” he says.
Norton suggests CISOs develop strategic and tactical approaches to define the different types of AI tools, capture the relative risks, and balance the potential payoff in productivity and innovation. Tactical measures such as secure-by-design processes, IT change processes, shadow AI discovery programs, or a risk-based AI inventory and classification are practical ways to deal with the smaller AI tools. “Where you have more day-to-day AI — that little bit of AI sitting in some product or some SaaS platform, which is growing everywhere — this might be managed through a tactical approach that identifies what [elements] need oversight,” Norton says.
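To make the tactical side concrete, here is a minimal sketch of what a risk-based AI inventory and classification could look like in code. The field names, risk tiers, and scoring rule are illustrative assumptions, not details from Norton’s program.

```python
from dataclasses import dataclass

# Hypothetical risk tiers for a lightweight AI inventory; real tiers and
# scoring criteria would come from the organization's own risk framework.
RISK_TIERS = ["low", "moderate", "high"]

@dataclass
class AIToolRecord:
    name: str                      # e.g. an AI feature embedded in a SaaS platform
    vendor: str
    handles_sensitive_data: bool   # touches customer or confidential data?
    makes_autonomous_decisions: bool
    shadow_ai: bool                # discovered outside the formal procurement process

def classify(tool: AIToolRecord) -> str:
    """Assign a coarse risk tier so oversight effort goes to the riskiest tools."""
    score = sum([
        tool.handles_sensitive_data,
        tool.makes_autonomous_decisions,
        tool.shadow_ai,
    ])
    return RISK_TIERS[min(score, len(RISK_TIERS) - 1)]

inventory = [
    AIToolRecord("CRM email assistant", "ExampleVendor", True, False, False),
    AIToolRecord("Unapproved browser chatbot plugin", "Unknown", True, False, True),
]

for tool in inventory:
    print(f"{tool.name}: {classify(tool)} risk")
```

The point of such a register is simply to give the lightweight, tactical oversight Norton describes a place to live, rather than to replace formal risk assessment.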
The strategic approach applies to the big AI changes that are coming with major tools such as Microsoft Copilot and ChatGPT. Securing these ‘big ticket’ AI tools using internal AI oversight boards is considerably easier than securing the plethora of other tools that are adding AI.
CISOs can then focus their resources on the highest-impact risks in a way that doesn’t create processes that are unwieldy or unworkable. “The idea is not to bog this down so that it’s almost impossible to get anything done, because organizations generally want to move quickly. So, it’s more of a relatively lightweight process that applies this consideration [of risk] to either allow AI or be used to prevent it if it’s risky,” Norton says.
Ultimately, the task for security leaders is to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. “A lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,” Norton says. “These days, it’s not about CISOs saying ‘yes’ or ‘no’. It’s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.”
Adapting existing frameworks with AI risk controls
AI risks include data security, misuse of AI tools, privacy considerations, shadow AI, bias and ethical concerns, hallucinations and validating outputs, legal and reputational issues, and model governance, to name a few.
AI-related risks should be established as a distinct category within the organization’s risk portfolio by integrating them into GRC pillars, says Dan Karpati, VP of AI technologies at Check Point. Karpati suggests four pillars:
- Enterprise risk management defines AI risk appetite and establishes an AI governance committee.
- Model risk management monitors model drift and bias and runs adversarial testing.
- Operational risk management includes contingency plans for AI failures and human oversight training.
- IT risk management includes regular audits, compliance checks for AI systems, governance frameworks, and alignment with business objectives.
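Below is a minimal sketch of how these four pillars could anchor a lightweight AI risk register. The pillar names follow Karpati’s list, but the example risks, owners, and controls are hypothetical.

```python
from dataclasses import dataclass

# Pillar names follow the list above; the entries are illustrative only.
PILLARS = (
    "Enterprise risk management",
    "Model risk management",
    "Operational risk management",
    "IT risk management",
)

@dataclass
class AIRisk:
    description: str
    pillar: str    # which of the four pillars owns this risk
    owner: str     # accountable role, e.g. "AI governance committee"
    control: str   # primary mitigating control

register = [
    AIRisk("Model drift degrades fraud-detection accuracy",
           "Model risk management", "ML engineering lead",
           "Quarterly drift, bias, and adversarial testing"),
    AIRisk("Generative AI assistant outage halts support workflows",
           "Operational risk management", "Service owner",
           "Documented manual fallback process"),
]

for risk in register:
    assert risk.pillar in PILLARS, f"Unknown pillar: {risk.pillar}"
    print(f"[{risk.pillar}] {risk.description} -> {risk.control}")
```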
To help map these risks, CISOs can look to the NIST AI Risk Management Framework and other frameworks, such as COSO and COBIT, and apply their core principles — governance, control, and risk alignment — to cover AI traits such as probabilistic output, data dependency, opacity in decision-making, autonomy, and rapid evolution. An emerging benchmark, ISO/IEC 42001, provides a structured framework for AI oversight and assurance that is intended to embed governance and risk practices across the AI lifecycle.
Adapting these frameworks provides a way to elevate the AI risk discussion, align AI risk appetite with the organization’s overarching risk tolerance, and embed strong AI governance across all business units. “Instead of reinventing the wheel, security leaders can map AI risks to tangible business impacts,” says Karpati.
AI risks can also be mapped to the potential for financial losses from fraud or flawed decision-making, reputational damage from data breaches, biased outcomes or customer dissatisfaction, operational disruption from poor integration with legacy systems and system failures, and legal and regulatory penalties. CISOs can use frameworks like FAIR (factor analysis of information risk) to assess the likelihood of an AI-related event, estimate the loss in monetary terms, and arrive at a risk exposure metric. “By analyzing risks from both qualitative and quantitative perspectives, business leaders can better understand and weigh security risks against financial benchmarks,” says Karpati.
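To illustrate the quantitative side, here is a minimal FAIR-style sketch. The loss event frequencies and magnitudes are invented placeholders; a real FAIR analysis would rely on calibrated estimates and a fuller loss taxonomy.

```python
import random

# Hypothetical inputs for one AI-related loss scenario (e.g., sensitive data
# leaked via a generative AI prompt). All figures are illustrative only.
LOSS_EVENTS_PER_YEAR = (0.5, 2.0)        # min/max loss event frequency
LOSS_PER_EVENT_USD = (50_000, 400_000)   # min/max single-event loss magnitude
SIMULATIONS = 100_000

def annual_loss_exposure() -> float:
    """One Monte Carlo draw of annualized loss exposure for the scenario."""
    frequency = random.uniform(*LOSS_EVENTS_PER_YEAR)
    magnitude = random.uniform(*LOSS_PER_EVENT_USD)
    return frequency * magnitude

draws = sorted(annual_loss_exposure() for _ in range(SIMULATIONS))
mean_ale = sum(draws) / SIMULATIONS
p90 = draws[int(0.9 * SIMULATIONS)]

print(f"Mean annualized loss exposure: ${mean_ale:,.0f}")
print(f"90th percentile exposure:      ${p90:,.0f}")
```

Reporting both a mean and a high-percentile figure is one way to give leaders the monetary framing Karpati describes, alongside the qualitative view.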
In addition, with growing regulatory requirements, CISOs will need to monitor draft legislation, track request-for-comment periods, get early warning of new standards, and then prepare for implementation before ratification, says Marcus.
Tapping into industry networks and peers can help CISOs stay across threats and risks as they happen, while reporting capabilities in GRC platforms track any regulatory changes. “It’s helpful to understand what risks are manifesting in the field, what would have protected other organizations, and collectively build key controls and procedures that will make us as an industry more resilient to these types of threats over time,” Marcus says.
Governance is a critical part of the broader GRC framework, and CISOs have an important role in setting the organizational rules and principles for how AI is used responsibly.
Developing governance policies
In addition to defining risks and managing compliance, CISOs are having to develop new governance policies. “Effective governance needs to include acceptable use policies for AI,” says Marcus. “One of the early outputs of an assessment process should define the rules of the road for your organization.”
Marcus suggests a stoplight system — red, yellow, green — that classifies AI tools for use, or not, across the business. It gives clear guidance to employees and allows technically curious staff a safe space to explore, while enabling security teams to build detection and enforcement programs. Importantly, it also lets security teams offer a collaborative approach to innovation.
‘Green’ tools have been reviewed and approved, ‘yellow’ tools require further assessment and specific use cases, and those labelled ‘red’ lack the required protections and are prohibited from employee use.
At AuditBoard, Marcus and the team have developed a standard for AI tool selection that includes protecting proprietary data and retaining ownership of all inputs and outputs, among other things. “As a business, you can start to develop the standards you care about and use those as a yardstick to measure any new tools or use cases that get brought to you.”
He recommends CISOs and their teams define the guiding principles up front, educate the company about what’s important, and help teams self-enforce by filtering out things that don’t meet that standard. “Then by the time [an AI tool] gets to the CISO, people have an understanding of what the expectations are,” Marcus says.
When it comes to specific AI tools and use cases, Marcus and the team have developed ‘model cards’, one-page documents that outline the AI system architecture, including inputs, outputs, data flows, intended use case, third parties, and how the system is trained on its data. “It allows our risk analysts to evaluate whether that use case violates any privacy laws or requirements, any security best practices, and any of the emerging regulatory frameworks that might apply to the business,” he tells CSO.
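For illustration, a one-page model card could be captured in machine-readable form roughly as follows. The field names and example values are assumptions based on the elements Marcus describes, not AuditBoard’s actual template.

```python
# Hypothetical model card for one AI use case; keys mirror the elements
# described above, and all values are illustrative only.
model_card = {
    "system_name": "Contract summarization assistant",
    "intended_use_case": "Summarize vendor contracts for the legal team",
    "inputs": ["uploaded contract PDFs"],
    "outputs": ["plain-language summaries"],
    "data_flows": "document -> internal gateway -> third-party LLM API -> summary",
    "third_parties": ["ExampleLLMVendor"],
    "training_data": "Vendor foundation model; no customer data used for training",
}

# Every element of the card must be present and non-empty before a risk
# analyst reviews it against privacy, security, and regulatory requirements.
REQUIRED_FIELDS = {
    "system_name", "intended_use_case", "inputs", "outputs",
    "data_flows", "third_parties", "training_data",
}

def review_ready(card: dict) -> bool:
    """True only when no required field is missing or empty."""
    return all(card.get(field) for field in REQUIRED_FIELDS)

print(review_ready(model_card))  # True
```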
The process is intended to identify potential risks and communicate them to stakeholders across the organization, including the board. “If you’ve evaluated dozens of these use cases, you can pick out the common risks and common themes, aggregate those and then come up with ways to mitigate some of those risks,” he says.
The team can then look at what compensating controls could be applied, how far they can be applied across different AI tools, and provide this guidance to the executive. “It shifts the conversation from a more tactical conversation about this one use case or this one risk to more of a strategic plan for dealing with the ‘AI risks’ in your organization,” Marcus says.
Jamie Norton warns that now that the shiny interface of AI is readily accessible to everyone, security teams need to train their focus on what’s happening under the surface of these tools. Applying strategic risk assessment, using risk management frameworks, monitoring compliance, and developing governance policies can help CISOs guide the organization on its AI journey.
“As CISOs, we don’t want to get in the way of innovation, but we have to put guardrails around it so that we’re not charging off into the wilderness and our data is leaking out,” says Norton.