The scope of artificial intelligence (AI) usage in the banking and FinTech sector is increasing, expanding from customer-facing services (such as chatbots, predictive apps and personalised marketing) to internal processes and risk management areas (for example, operational automation, credit scoring, contract intelligence, and cyber and fraud risk management). Government agencies and regulators like the Hong Kong Monetary Authority (HKMA) have generally encouraged the adoption of new technology, such as AI, to deliver greater efficiencies and better customer experiences. From what we are seeing in the market, the new virtual banking licences awarded in Hong Kong during 2019 will accelerate the adoption of such technologies in the FinTech and banking sector.
On 1 November 2019, the HKMA, Hong Kong’s central banking institution and banking regulator responsible for licensing and supervising banks in Hong Kong, issued a new circular titled ‘High-level Principles on Artificial Intelligence’ aimed at giving guidance to the banking and FinTech industry on the development and use of AI in the Hong Kong financial services sector.
The HKMA acknowledges that the growing use of AI presents not only opportunities but also new risk management challenges to financial institutions and the FinTech sector, and the guidance is based on industry practices and similar principles formulated by leading overseas authorities. The HKMA expects financial institutions and the FinTech sector generally to take these principles into account when designing and adopting their AI and big data analytics applications.
The circular proposes 12 high-level principles for AI, including the following:
- Governance: Board and senior management should be accountable for the outcome of AI applications and AI-driven decisions and should put in place proper governance frameworks and risk management measures. Noting that some AI applications have self-learning capabilities from experience (for example, via reinforcement learning and deep learning) and may be able to make automated decisions on behalf of their banks, governance of roles, responsibilities and oversight of the development and monitoring of AI applications is needed to ensure appropriate ‘checks and balances’ are in place.
- Ensuring an appropriate level of explainability of AI applications: Adequate measures during the design phase of AI applications should be implemented to ensure an appropriate level of explainability for AI applications (in other words, parties should not look to rely on a ‘black-box’ excuse).
- Using data of good quality: Adoption of an effective data governance framework is required to ensure that the data used are of good quality and relevance. Data quality issues identified should be escalated to the responsible parties for rectification in a timely manner.
- Ensuring auditability of AI applications: Sufficient audit logs, which produce relevant documentation to support investigations when incidents or unfavourable outcomes in AI arise, should be implemented and retained for an appropriate period of time.
- Being ethical, fair and transparent: Measures should be taken to: ensure that AI-driven decisions do not discriminate against, or unintentionally show bias towards, any group of consumers; comply with the banks’ corporate values and ethical standards; and uphold consumer protection principles. As a measure of transparency, it should be made clear to the consumer, before services are provided, that the relevant service is powered by AI technology, and the risks involved should be explained. In this regard, the HKMA wrote to authorized institutions on 3 May 2019 to encourage them to adopt and implement the Ethical Accountability Framework for the collection and use of personal data (the Framework) issued by the Office of the Privacy Commissioner for Personal Data, Hong Kong (PCPD).
- Conducting periodic reviews and on-going monitoring, including effective management of third party vendors: Since AI applications can learn from live data and their model behaviour may therefore change after deployment, periodic reviews and on-going monitoring should be conducted (including re-validation of the AI model where appropriate). Where there is reliance on third party vendors to develop AI applications, proper due diligence should be undertaken on these vendors having regard to these AI principles, and effective vendor management controls and periodic reviews should be implemented to manage associated risks.
- Complying with data protection requirements: Considering the data-intensive nature of AI applications, effective data protection measures should be implemented and if personal data is collected and processed by AI applications, this will need to comply with Hong Kong’s Personal Data (Privacy) Ordinance and any other applicable local and overseas regulatory requirements. Where appropriate, sanitised data instead of personally identifiable information should be used. The PCPD has frequently spoken on the intersection of privacy and FinTech, big data and AI issues, including stressing the need for ‘data ethics’ and ‘privacy by design’ frameworks for use of FinTech in the financial services sector.
The HKMA notes that these AI principles will be reviewed periodically to address developments in international regulatory standards and industry developments regarding the use of AI as they rapidly evolve.
We can see that adoption of new technologies, such as AI, in the broader Financial Services sector will not only generate material benefits but also pose a number of risk management challenges. In order to address these challenges, those new technologies will increasingly come under greater regulatory scrutiny to ensure that principles of accountability, integrity, fairness and security are maintained, and that overlapping privacy and data protection regulations are complied with.
What is clear with the introduction of such regulatory guidelines and principles is the need for strong governance and on-going reviews and monitoring to ensure effective compliance. This is particularly needed as the HKMA has indicated that it plans to issue separate guidance on the principles relating to consumer protection aspects involved in the use of AI applications.
There is no overarching international standard or universal set of international principles governing AI and its application and use in the FinTech and Financial Services sector. This means that organisations in each jurisdiction are developing and implementing their AI products and services having regard to the guidelines and directions of that jurisdiction's individual regulators. We can see that some of these ‘principles’ are very prescriptive in nature while others are very general. This remains a challenge for multi-national FinTech and financial services organisations working across multiple jurisdictions to implement AI in their businesses in compliance with local law.