Candid evening commute with smartphones

4 pillars of a responsible AI strategy

Why and how the private sector should proactively establish AI governance.


In brief
  • An organizational structure that accounts for responsible AI roles and responsibilities will help ensure adherence to safe, ethical principles and methods.
  • End-to-end portfolio management will enable oversight of risks as AI use cases grow in numbers and complexity.
  • Responsible AI controls embedded in the software development lifecycle provide touch points to review and reassess risks identified in risk evaluations.

Special thanks to Molly Donovan, Technology Consulting Manager, and William Smith, Technology Consulting Senior Manager, for contributions to this article.

Corporate adoption of artificial intelligence (AI) initiatives has increased drastically due to the recent generative AI wave. At the same time, global AI regulations are continuing to pass and will become enforced for various regional jurisdictions, industries and private sector’s overtime. Academia and public sector guidance are falling behind private sector adoption and expansion of AI usage; therefore, businesses (particularly those in high-risk sectors such as health care and finance) should proactively establish responsible AI policies and AI Governance procedures to help ensure corporate adoption and social responsibility when using and solutioning AI systems with customer and corporate data.

Responsible AI is sector-agnostic, and we are seeing an increase in demand for AI governance across clients. There are unique risks associated with AI, even for the seemingly lowest risk sectors. Even if your corporation is in a field that does not commonly collect or utilize sensitive human data (i.e., PII, PHI), there are still various types of AI risks that can negatively impact your business in the long run if a responsible AI strategy is not adopted.

EY professionals have developed AI governance frameworks, policies, processes and procedures to empower companies to get ahead of the curve by adopting responsible AI strategies for safeguarding against AI bias, potential harms, reputational impact and compliance with evolving regulations.

 

“The why” — importance of responsible AI

Over recent years as rapid AI adoption scaled into generative AI (GenAI) solutions, there has been increased scrutiny by the public on how corporations are designing and training AI with the potential to become biased against humans. Risks are increased particularly when using pre-trained generative AI models where foundational training data and AI model decision making are not easily transparent and could lead to unintended harm to corporations or humans.

 

Academia and legislative bodies around the world are introducing guidelines for ethical and responsible AI; however, the pace at which these guidelines are rolling out and being adopted are not catching up to the pace at which US private sector businesses are innovating and releasing AI products to global consumers. By 2026, Gartner predicts 50% of governments worldwide will enforce use of responsible AI through regulations, policies and the need for data privacy.¹

 

To protect against potential damages to an organization, it is no longer enough for a company to publicly adopt and value responsible AI principles. It is important to help ensure companies take action, guided by the principles, within the organization by setting up AI governance bodies and incorporating new policies and procedures within data science organizations to begin adopting responsible ways of designing, producing and maintaining AI systems. It is equally important to educate the people who are funding, designing, building and delivering AI solutions about the tangible actions that should be adopted in order to transform the organization into a responsible yet innovative corporation.

“The how” — main pillars to responsible AI

AI is not new, and it’s very likely that organizations already have some level of responsible AI practices integrated into their existing AI processes, often with a focus on AI ethics within data science teams. However, an organization’s AI landscape includes a wider range of business functional and technical components that should have responsible AI activities and policies incorporated.

Every company is unique in how their IT organizations are structured and level of AI operations maturity. Based on Forrester Predictions 2025, 40% of highly regulated enterprises will combine data and AI governance. The complexity of AI governance, already intense due to rapid technological innovation and the absence of universal templates, standards, or certifications, is set to increase further.² Leading practice for understanding how an organization can expand Responsible AI begins with evaluating the existing functions around AI portfolio management, IT Governance, AI ethics, AI risks, legal, privacy, and security. At a high level, these functions should already exist for traditional software development and data analytics. As companies continue to adopt advanced analytics, machine learning, and GenAI, updates to corporate governance policies and procedures should be enhanced with the EY pillars of responsible AI strategy.

Pillar 1 - Scalable GenAI governance structure

Adapting existing IT or AI governance structures to encompass newer AI risks on top of existing data privacy and traditional IT risk policies will establish a good foundation for responsible AI. Sixty-seven percent of business leaders say more work is needed to address the social, ethical and criminal risks inherent in the new AI-fueled future.³ Defining clear roles and responsibilities for evaluating the company’s risk profile based on industry sector will begin informing the focused AI-related risks to incorporate into responsible AI risk assessments and policies.

A robust structure for AI governance is derived of three lines of defense:

  • First line — technical teams who actively manage risks through designing, building, deploying and monitoring AI solutions.
  • Second line — functional teams with oversight into AI portfolio and AI leadership decisions identify emerging risks through review and assessments of AI solutions.
  • Third line — IT risk and audit teams independent from AI operations hold the first and second lines of defense accountable for complying with government regulations, IT and corporate Responsible AI policies.

Due to the ever-evolving landscape of risks associated with AI, companies should now consider creating new responsible AI roles within their first and second lines of defense teams with a core focus on driving responsible AI activities. These proposed responsible AI roles would require a specific skill set across technology, business, risk and compliance.

Pillar 2 - AI portfolio intake and risk evaluation

Establishing a comprehensive AI portfolio management intake process will help mitigate the chance of negative impacts to the business caused by high-risk AI solutions. AI portfolio management and governance teams should employ initial use case selection frameworks and AI Risk Tiering assessments at the onset of a new AI idea. When a business function within the company proposes a beneficial case for utilizing AI as a process improvement or business intelligence aid, each idea should funnel through a formal AI portfolio intake evaluation.

The evaluation will encompass information that enable AI stakeholders to determine the validity of a use case across technical complexity (i.e., Is existing technology and data ready to support the solution?), business value (i.e., How will this solution impact end users and revenue growth, and how can we measure benefits?), and Risks (i.e., What are the potential ethics, compliance, regulatory, privacy and security risks associated with this use case?)

If a use case moves forward with approval and the project is funded, the next step is to classify the use case into an AI risk tier. EY risk tiering consists of several dimensions of risk domain assessments to determine how risky a given use case will be once live in production.

Ethical, legal and privacy risks: Accounts for data privacy, fairness, bias and regulatory risks by assessing the use case’s utilization of AI model type, corporate intellectual property, and sensitive personal data.

Data, algorithmic and development risks: Accounts for technical complexity by assessing risks based on input data, tech stack, model utilized, model output and development methodologies.

Business risks: Accounts for potential harms to the organization by assessing impact to revenue, customer experience, regulatory compliance and corporate reputation.

The outcome of a use case’s risk tier classification will determine two things:

  1. Specific RAI controls and model monitors to be implemented with the AI system in production. Monitors can include tracking of performance accuracy, biases, end-user misuse, adversarial anomalies and more. RAI controls can include governance activities such as tracking use case compliance against data privacy, data retention, security, IT, and/or applicable legal policies.
  2. The ongoing frequency of AI system reviews and evaluation from various groups within the AI governance structure’s three lines of defense throughout the AI system’s lifespan in production.

Pillar 3 - Solution development lifecycle embedded Responsible AI

Every phase in the AI system development lifecycle has potential risks that should have proper oversight. Developing with responsible AI considerations across all projects will enable companies to produce inherently risk-adverse AI systems. There are various categories of risk that exist across the AI lifecycle:

  • Use case initiation phase: design risks
  • Data acquisition and preparation: data risks
  • Model training, experimentation and validation: algorithmic risks
  • Deployment and monitoring: performance risks

The EY Responsible AI framework guides development teams to check and analyze for the specific risks that could occur during each phase. Risk-mitigation activities should be incorporated as part of the standard solution development lifecycle to address the nuanced risks that arise for a specific business problem.

For example, if the data set required for training AI models is not readily available due to lack of digitization, disorganized data domains, or scattered ownership of data across various business units, etc., then analysis of how imbalanced a data set is before moving forward with model training is a critical part of mitigating data and algorithmic risks. It is also important to help ensure transparency into what kinds of data set was utilized for model training, as it informs the potential scenarios of bias and inaccuracy that can occur from the AI system’s outputs. Depending on the business problem the AI system is intended to solve, biased outputs could be statistical, resulting in performance risks such as inaccurate insights given to the business. Social biases can also occur with AI systems that utilize human demographic data, such as those commonly employed in HR functions.

Pillar 4 - Proactive monitoring and controls

As an AI program scales, risk mitigation monitors will need to be identified and streamlined. Specific monitors should be configured based on use case risks identified. Adhering to safe AI practices requires monitoring across several areas to account and adjust for potential risks, many of which have been heightened by the advent of GenAI. A selection of key risk categories are noted below:

Hallucination: Generation of outputs or conclusions by an AI system that are not grounded in its training data or input provided, leading to potentially incorrect, nonsensical or harmful responses.

  • Deter the model from producing unfounded or imaginary content.
  • Set parameters to screen content outside of prescribed use case.
  • Create metrics to identify anomalies across various dimensions that might signal hallucination

Data Leakage: The unintentional exposure or sharing of sensitive or confidential data, either through the AI model’s training data, predictive outputs or metadata, which may lead to privacy violations or security threats.

  • Safeguard user confidentiality by vigilantly monitoring and controlling data output.
  • Prevent inadvertent revelation of sensitive information, fortifying user privacy and security.

Prompt Injection: The intentional manipulation of the instruction or query given to an AI model with the aim to trick it into producing harmful, misleading or inappropriate responses, bypassing built-in safeguards.

  • Guard against attempts to manipulate the model into bypassing its own safety protocols.
  • Provide a basis for refining the robustness of the model’s safeguards, helping ensure it remains impervious to exploitation.
  • Helping ensure model functions around the prescribed use case parameters

Toxicity: Harmful or offensive content generated by an AI system, whether in response to a specific input or on its own, that could cause harm, distress or discomfort to individuals or groups.

  • Proactively identify and mitigate “toxicity,” defined as the generation of harmful, offensive or inappropriate content.
  • Systematic reinforcement of content moderation protocols.
  • Preemptively neutralize content that could undermine user wellbeing or violate platform guidelines.

Automating the controls for the above categories of AI model risks can be done both proactively and retroactively. Proactive controls can include measures such as scanning and gating against toxic language, data leakage, prompt injection or setting thresholds for context variance from use case purpose. These risks can be monitored at both the input and the output level. Retroactive monitors can reveal past data and model performance to identify areas of improvement, reoccurring issues, and emerging risks. Automating responsible AI controls enhances the ability to provide continuous, efficient monitoring and can be done at the level of the AI platform architecture and in post launch data reviews.

The success of a responsible AI program lies in finding the intersection between technical controls and functional processes. To develop responsible, scalable and successful AI systems, data science and technology teams must follow guidelines and regulations set up by AI governance and business functional bodies. Meanwhile, governance bodies must collaborate with technical teams to continuously refine these rules, based on emerging risks and changing societal context. Through this collaborative model, AI can be utilized in an ethical and beneficial manner, contributing positively to business strategies, customer trust and brand reputation.


Summary 

Integrating ERM and resiliency is not just about surviving in the face of adversity but thriving despite it. It requires a forward-looking approach, proactive planning and the ability to strategically adapt quickly to changing circumstances. External benchmarking and the five tools provided by ERM are critical components in this integration, offering a pathway to a more agile and resilient enterprise. As organizations navigate this journey, they will discover that they are adept at managing risks and seizing opportunities. This proactive stance redefines resilience as a dynamic capability enabling organizations to actively shape their future in an unpredictable business landscape

About this article

Authors

Related articles

‘Braking’ the risk speed limit: move fast, confidently

Discover how EY's AI-enabled platforms provide the 'brakes' for risk management, enabling organizations to innovate rapidly with confidence and control.

How to reimagine your TPRM program with GenAI and scalable operations

Transform third-party risk management with GenAI for enhanced coverage, streamlined processes and predictive analytics in a tech-led era. Learn more.

How to embrace AI in risk management

Discover how the rise of GenAI promises both unprecedented opportunities and new challenges for risk management.