A. SREENIVASA REDDY (ABU DHABI)
Experts in the financial sector have welcomed the Central Bank of the UAE’s (CBUAE) comprehensive guidelines on the use of artificial intelligence (AI) and machine learning (ML) in the operations of licensed financial institutions, saying the clarity should help balance innovation with risk management and consumer protection.
The CBUAE’s guidance note sets out broad principles to promote responsible innovation, safeguard consumer rights, and strengthen governance and transparency in the financial sector. The framework applies to all licensed banks, insurers and other financial firms operating under the Central Bank’s supervision.
“The CBUAE’s AI guidance is a welcome step, and it gives banks more certainty on what ‘responsible adoption’ looks like, which typically accelerates implementation compared with a more ambiguous regulatory backdrop,” said Mustafa Domanic, Partner at Oliver Wyman’s Financial Services Practice.
Domanic noted that while banks will need to translate principles into proportionate governance, risk management and compliance controls, several implementation challenges are likely to surface quickly.
Many generative AI use cases depend on third-party large language models, where traditional validation approaches are hard to apply, and the guidance’s emphasis on third-party due diligence will become a practical factor in vendor selection, he said.
Domanic added that data representativeness will also be difficult for some applications that rely on non-local datasets in a market with diverse and dynamic customer demographics.
Anamika Jain, Principal for Financial Services at Mercer Middle East, said that regulatory compliance requires banks to pursue innovation within a clearly documented governance framework and that AI accountability should be integrated at the Board and senior management levels.
“Risk management frameworks should integrate AI-related risks, assigning clear roles to risk committees, audit and IT functions. It is essential to embed fairness, transparency, and consumer protection throughout AI development and deployment,” she said, adding that transparency and consumer choice must always be priorities, and that clear disclosures and opt-out options are critical to adopting AI fairly and efficiently.
Benjamin Ward, Financial Institutions Leader for Marsh Middle East and North Africa, said licensed institutions will need to demonstrate they have adopted responsible AI practices with evidence-based protocols, controls and protections in place. He cautioned that AI adoption increases exposures to algorithmic error, privacy breaches and potential outcomes that disadvantage customers.
“These are gaps that standard Cyber and Professional Indemnity policies may not cover without specific technology/E&O endorsements, regulatory defence cover and vendor indemnities, which need to be considered,” Ward said. He also warned that poorly explained AI usage can itself lead to errors in outcomes and explanations, undermining consumer trust.
The CBUAE guidance note emphasises that senior management and the Board of Directors are responsible and accountable for AI/ML systems, including model selection, deployment and ongoing monitoring. Licensed financial institutions (LFIs) must ensure AI and ML systems do not produce discriminatory or manipulative outcomes, and systems with such outcomes must not be deployed or must be discontinued. Data used to train models must be accurate, relevant and representative of the customer populations they serve.
Transparency requirements include clear disclosure to customers when they are interacting with AI applications, especially for high-impact decisions, and institutions must be able to explain how AI systems operate and make decisions.
LFIs must maintain documentation on model design, training data and assumptions to facilitate internal review and external audits and provide meaningful information to customers regarding the logic behind AI decisions, with mechanisms for clarification or redress.
The guidance also addresses data management and privacy, mandating compliance with applicable laws, and requiring personal data to be collected, stored and used only for legitimate, proportionate purposes. LFIs are expected to continuously monitor and review AI/ML models, updating or ceasing their use as conditions change, and to maintain clear and accessible channels for complaints and redress.
The note further stipulates that where institutions rely on third-party vendors or cloud service providers for AI/ML solutions, due diligence must be conducted on a provider’s governance, security and data protection practices.
CBUAE Governor Khaled Mohamed Balama said the guidance aims to establish a clear framework for the responsible use of AI and ML in the financial sector in a way that “enhances consumer protection, reinforces governance and transparency principles, and emphasises the importance of human oversight and data protection requirements.”