Global Trustee and Fiduciary Services Bite-Sized Issue 9 2025
The consultation period closed on 12 September 2025. After considering responses, the final Q&As will be published on the FCA's and BoE's respective websites.

Link to FCA Consultation Page here
Link to BoE Consultation Page here

FINTECH

Project Noor: Explaining AI Models for Financial Supervision

On 18 August 2025, the Bank for International Settlements (BIS) announced the launch of Project Noor. In its overview, BIS explains that Project Noor is an initiative of the BIS Innovation Hub that seeks to equip financial supervisors with independent, practical tools to evaluate and interpret the inner workings of artificial intelligence (AI) models used by banks and other financial institutions. By combining explainable AI methods with risk analytics, BIS says that the project aims to deliver a prototype through which supervisors can verify model transparency, assess fairness, and test robustness.

Why Noor?

BIS says that AI models now help approve mortgages, set card limits, and flag potential fraud in real time. While these services appear seamless, understanding why a model said yes, no, or "flag for review" can still feel opaque. Clear, human-readable explanations can strengthen confidence and help keep digital finance fair for everyone.

BIS states that new regulations demand that high-risk financial AI be explainable and auditable, but there is no common, practical playbook for supervisors.

What is Noor?

Led by the BIS Innovation Hub Hong Kong Centre in collaboration with the Hong Kong Monetary Authority and the UK Financial Conduct Authority, BIS says that Project Noor will prototype the latest Explainable AI (XAI) techniques in a controlled setting.
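To make the idea of an explainability technique concrete, the toy sketch below shows the kind of plain-language, per-factor explanation such methods can produce for a credit decision. Everything in it — the model, the weights, the applicant data and the factor names — is invented for illustration only and has no connection to Project Noor or any real scoring system; it simply decomposes a hand-set linear score into the contribution each factor made.

```python
# Illustrative only: a hypothetical linear credit-scoring model with
# hand-set weights. Not from Project Noor; all names and numbers invented.

def explain_decision(applicant, weights, bias, threshold=0.0):
    """Score an applicant and return a plain-language explanation of
    which factors pushed the decision up or down, and by how much."""
    # Each factor's contribution to the score: weight * value.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    # Rank factors by the size of their influence on this decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = [
        f"{name} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}"
        for name, c in ranked
    ]
    return decision, score, reasons

# Hypothetical weights and applicant (values already normalised).
weights = {"income": 0.8, "debt_ratio": -1.5, "missed_payments": -0.9}
applicant = {"income": 1.2, "debt_ratio": 0.6, "missed_payments": 1.0}

decision, score, reasons = explain_decision(applicant, weights, bias=0.5)
print(decision, round(score, 2))
for r in reasons:
    print("-", r)
```

Real XAI tooling (for example SHAP-style attribution) handles non-linear models, but the output shape is similar: a decision plus a ranked, human-readable list of the factors behind it, which is the kind of artefact a supervisor could inspect.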
XAI converts complex model logic into plain language and intuitive visuals, making it easier to see which factors influenced a decision and how sensitive that decision is to change, all while preserving privacy.

What could this prototype mean in everyday terms?

• Greater transparency - Customers receive clearer reasons for credit decisions or fraud alerts.
• Consistent protection - Supervisors gain modern tools to check that similar customers are treated consistently.
• Responsible innovation - Banks can adopt new technologies with practical, privacy-preserving explainability checks.

Lastly, BIS states that it is important to note that financial institutions retain responsibility for model explainability, and that Noor does not aim to prescribe definitive standards or replace existing practices. Instead, Noor strives to equip supervisors with methods and benchmarks to form their own informed opinions.

Link to BIS Announcement here

AI Live Testing: The Use of AI in UK Financial Markets - From Promise to Practice

On 1 August 2025, the Financial Conduct Authority's (FCA) Head of Department, Advanced Analytics and Data Science Unit, together with the Manager of Artificial Intelligence (AI) Strategy, spoke about the intent for the UK to be a place where beneficial technological innovation can thrive to support growth. They then asked the question: how can the FCA build confidence in AI so consumers and markets benefit?

They explain that as AI reshapes industries, including financial services, it has the potential to transform decision-making and customer experiences in UK financial services for the better. But it also raises concerns about how it can be used safely and responsibly. Left unanswered, these concerns can slow the pace of innovation as well as introduce new risks.