Article
Generative AI in Financial Services: Eight Risks and How to Overcome Them
Leading companies can control the chaos of generative AI requests through smart risk segmentation.
Within most banks, insurers, and other financial services companies, risk and control groups have been overwhelmed by a large and growing volume of requests to deploy generative AI across different use cases. Leading companies have found that the most effective response to this wave is to categorize the risks and work with other departments to develop mitigation strategies for each category. Vendor risk, for instance, can be tackled through controls developed by a cross-functional team of IT, procurement, and risk professionals.
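To make the segmentation concrete, some risk teams keep a simple intake register that tags each incoming use-case request with the risk categories it triggers and routes it to an owning control function. The sketch below is a minimal illustration of that idea in Python; the category names, request attributes, and owner assignments are assumptions for this example, not a prescribed taxonomy.

```python
from dataclasses import dataclass

# Illustrative risk categories and owning control functions; the names are
# assumptions for this sketch, not a prescribed taxonomy.
RISK_OWNERS = {
    "data": "Data governance office",
    "model": "Model risk management",
    "vendor": "IT / procurement / risk (cross-functional)",
    "technology": "Enterprise architecture",
    "data_security": "Information security",
    "legal_regulatory": "Compliance and legal",
    "reputational": "Corporate communications",
    "strategic": "Executive leadership / board",
}

@dataclass
class UseCaseRequest:
    name: str
    uses_customer_data: bool
    relies_on_third_party_model: bool
    customer_facing: bool

def categorize(request: UseCaseRequest) -> list[str]:
    """Tag a generative AI request with the risk categories it triggers."""
    categories = ["model", "technology", "strategic"]  # every use case touches these
    if request.uses_customer_data:
        categories += ["data", "data_security", "legal_regulatory"]
    if request.relies_on_third_party_model:
        categories.append("vendor")
    if request.customer_facing:
        categories.append("reputational")
    return categories

request = UseCaseRequest("Chat assistant for claims intake", True, True, True)
for category in categorize(request):
    print(f"{category}: route to {RISK_OWNERS[category]}")
```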
Here's a quick look at each of the eight risks and how leading companies are addressing it.
1. Data risk. Inadequate or inappropriate practices, strategies, or frameworks for data ownership, security, and privacy may compromise data integrity.
Mitigation tactics: Implement well-governed data management practices that monitor how data is used and protect data privacy.
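As a rough illustration of monitoring data use, a governed data layer can log every read of a dataset and mask obvious personal identifiers before records reach a model prompt. The snippet below is a minimal sketch of that idea; the dataset name, field names, and redaction patterns are hypothetical, and a real deployment would rely on a vetted PII-detection service and formal data classification.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_access_audit")

# Hypothetical patterns for personal identifiers; real deployments would use
# a vetted PII-detection service and a formal data classification scheme.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def read_for_prompt(dataset: str, record: dict, user: str, purpose: str) -> dict:
    """Log who read which dataset and why, then mask PII before use in a prompt."""
    audit_log.info("user=%s dataset=%s purpose=%s", user, dataset, purpose)
    masked = {}
    for field, value in record.items():
        text = str(value)
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"<{label} redacted>", text)
        masked[field] = text
    return masked

record = {"note": "Customer jane.doe@example.com called about SSN 123-45-6789."}
print(read_for_prompt("claims_notes", record, user="analyst_17", purpose="summarization"))
```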
2. Model risk. Decisions are based on inaccurate or misused models, or on AI models that lack transparency.
Mitigation tactics: Apply regulatory expectations for model risk to AI use cases according to their criticality and materiality.
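One way to apply those expectations proportionately is to tier each AI use case by criticality and materiality and attach heavier validation requirements to higher tiers. The sketch below illustrates such a tiering rule; the thresholds and tier descriptions are placeholders, not regulatory guidance.

```python
def model_risk_tier(criticality: str, materiality_usd: float) -> str:
    """Assign a model risk tier from criticality ('low' | 'medium' | 'high')
    and the financial exposure the model influences. Thresholds are
    illustrative placeholders, not regulatory guidance."""
    if criticality == "high" or materiality_usd >= 10_000_000:
        return "Tier 1: full independent validation, ongoing monitoring, board reporting"
    if criticality == "medium" or materiality_usd >= 1_000_000:
        return "Tier 2: targeted validation and periodic performance review"
    return "Tier 3: lightweight review and inventory registration"

print(model_risk_tier("medium", 250_000))
print(model_risk_tier("low", 50_000_000))
```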
3. Vendor risk. Vendors fail to adhere to contractual stipulations, which can disrupt operations.
Mitigation tactics: Conduct due diligence on technology and software partners, onboard and monitor them, and ensure service-level agreements are in place.
4. Technology risk. AI models do not fully integrate with existing technology.
Mitigation tactics: Embed controls into IT systems and architecture, strengthen AI governance, and improve process control through enhanced testing.
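Enhanced testing can be made concrete with an automated pre-deployment gate that runs sampled model outputs through a set of checks before the model is wired into production systems. The sketch below shows the shape of such a gate; the individual checks are stand-ins, and a real gate would draw on the firm's own test suites and integration standards.

```python
from typing import Callable

# Hypothetical output checks; a real gate would use the firm's own test suites.
def no_empty_output(response: str) -> bool:
    return bool(response.strip())

def within_length_limit(response: str) -> bool:
    return len(response) <= 2000

def no_account_numbers(response: str) -> bool:
    return not any(token.isdigit() and len(token) >= 10 for token in response.split())

CHECKS: list[Callable[[str], bool]] = [no_empty_output, within_length_limit, no_account_numbers]

def deployment_gate(sample_responses: list[str]) -> bool:
    """Block integration if any sampled model response fails any check."""
    for response in sample_responses:
        for check in CHECKS:
            if not check(response):
                print(f"FAIL: {check.__name__} on response: {response[:60]!r}")
                return False
    print("All checks passed; candidate can proceed to integration review.")
    return True

deployment_gate(["Your claim summary is ready.", "Routing number 123456789012 on file."])
```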
5. Data security risk. Shared access to AI models may compromise data security when access controls are limited and filters are lacking.
Mitigation tactics: Strengthen identity and access management and use virtual private clouds to protect data and models.
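Conceptually, identity and access management for a shared model comes down to verifying a caller's role and data entitlement before any prompt reaches the model, with the model endpoint itself kept inside the firm's virtual private cloud. The sketch below illustrates the entitlement check; the roles, datasets, and mapping are hypothetical.

```python
# Hypothetical role-to-entitlement mapping; a production system would pull
# this from the firm's central identity provider rather than hard-coding it.
ENTITLEMENTS = {
    "claims_analyst": {"claims_notes"},
    "underwriter": {"claims_notes", "policy_data"},
    "marketing": set(),  # no access to customer records through the shared model
}

def authorize(user_role: str, requested_dataset: str) -> bool:
    """Return True only if the caller's role is entitled to the dataset."""
    return requested_dataset in ENTITLEMENTS.get(user_role, set())

def query_model(user_role: str, dataset: str, prompt: str) -> str:
    if not authorize(user_role, dataset):
        return f"Denied: role '{user_role}' is not entitled to '{dataset}'."
    # In a real deployment the call below would go to a model endpoint that is
    # reachable only inside the firm's virtual private cloud.
    return f"[model response to: {prompt!r} using {dataset}]"

print(query_model("marketing", "claims_notes", "Summarize recent claims."))
print(query_model("underwriter", "policy_data", "List expiring policies."))
```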
6. Legal and regulatory risk. Businesses fail to comply with laws and regulations, or bias embedded in AI model training compromises output.
Mitigation tactics: Cross-functional teams should carefully select use cases and identify and test data elements for potential bias.
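Testing data elements for potential bias often starts with simple disparity checks, such as comparing positive-outcome rates across groups in the training data and flagging large gaps for review. The sketch below computes a basic disparate impact ratio on made-up records; the 0.8 threshold is a common rule of thumb, not a legal standard.

```python
def disparate_impact_ratio(outcomes: list[tuple[str, int]], protected: str, reference: str) -> float:
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def rate(group: str) -> float:
        group_outcomes = [y for g, y in outcomes if g == group]
        return sum(group_outcomes) / len(group_outcomes)
    return rate(protected) / rate(reference)

# Made-up training labels: (group, positive outcome 1/0).
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

ratio = disparate_impact_ratio(sample, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print("Flag data element for bias review before using it to train the model.")
```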
7. Reputational risk. Negative stakeholder perception may result in a loss of trust or value.
Mitigation tactics: Create a stakeholder management plan supported by program management and change management, define escalation protocols, and prepare communications scripts.
8. Strategic risk. Failing to mobilize around AI can reduce shareholder value, making non-adoption itself a strategic threat.
Mitigation tactics: Create board awareness, a clear AI strategy, and a plan to capture value.
This set of mitigation strategies will reduce most of the risks, leaving only a small set of residual risks for the company to deal with. Spending time up front on mitigation strategies is far more effective than responding to each new AI request with ad hoc control measures.