
Gartner Analyst Mark Horvath Reveals Key AI Risk Management Priorities for CISOs

By 2026, organizations that focus on AI transparency, trust, and security are expected to see a 50 percent improvement in AI model adoption, achievement of business goals, and user acceptance, according to a Gartner report.
During the Gartner Security & Risk Management Summit in London, Mark Horvath, VP Analyst at Gartner, emphasized the necessity for AI TRiSM, stating, “CISOs can’t let AI control their organization. AI requires new forms of trust, risk, and security management (TRiSM) that conventional controls don’t provide.”

“Chief information security officers (CISOs) need to champion AI TRiSM to improve AI results, such as increasing AI model-to-production speed, enabling better governance, or rationalizing the AI model portfolio, potentially eliminating up to 80 percent of faulty and illegitimate information,” Horvath said.

Key AI risk management priorities for CISOs include:

• Inventorying AI use within the organization to understand exposure and ensure appropriate explainability.

• Conducting a formal AI risk education campaign to enhance staff awareness throughout the organization.

• Integrating risk management into model operations to bolster model reliability, trustworthiness, and security.

• Implementing data protection and privacy programs to mitigate internal and shared AI data exposures.

• Adopting specific AI security measures to counter adversarial attacks and ensure resistance and resilience.

AI introduces notable data risks because sensitive datasets are used to train AI models, and fluctuations in model outputs and data quality over time can lead to adverse consequences.

Implementing AI TRiSM helps organizations understand what their AI models are doing, whether they remain aligned with their original intent, and whether they deliver the expected performance and business value.

AI TRiSM is a collective effort that requires education and collaboration across teams, according to Jeremy D’Hoinne, VP Analyst at Gartner. He added, “CISOs must have a clear understanding of their AI responsibilities within the broader dedicated AI teams, which can include staff from legal, compliance, IT, and data analytics teams.”

Without a robust AI TRiSM program, AI models can introduce unexpected risks, resulting in adverse model outcomes, privacy breaches, significant reputational harm, and other detrimental effects.
