How can financial institutions navigate AI risks and regulations?: Opinion
Getting the legal structuring right from the get-go can make all the difference for financial institutions looking to take full advantage of AI.
Artificial intelligence (AI) is already having a profound impact across industries in Singapore, powering applications such as the monitoring and prediction of the spread of Covid-19, and the development of autonomous vehicles as part of the government's smart mobility solutions.
Financial services too have felt the effect of AI, and deployment of this new technology has accelerated rapidly within the industry, supporting a wide range of activities from risk management, customer onboarding and engagement, to trading, portfolio management and robo-advisory.
THE REALITY OF AI IN FINANCE
AI is no longer a 'nice to have' that powers more opportunities for growth and diversification. It has become a necessity for financial institutions to remain competitive and relevant in an increasingly digitalised and technocratic world.
However, certain features of machine-learning deployments may raise legal and regulatory concerns, including when it comes to achieving regulatory compliance and identifying where liabilities fall.
For instance, data privacy and the use of personal information are top-of-mind considerations when it comes to AI implementation, and financial institutions should attach real priority to the governance of data - what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.
AI is a dynamic and disruptive technology that poses unique legal and ethical challenges. Consequently, the law and regulation of AI have struggled to keep pace with the speed of innovation.
EVOLVING AI REGULATORY LANDSCAPE
Currently, there is still limited legislation around AI in Singapore, but the deployment of this technology in the financial sector will require careful consideration of existing regulations, given the rigorous regulatory standards already applicable in this sector and the novel features of AI.
For example, there are various existing regulatory frameworks in relation to matters such as governance, control and risk-management, outsourcing, data protection and cybersecurity, against which the compliance of any potential AI deployment will need to be assessed. In addition, more regulation is likely to be on the way as other developed jurisdictions follow the European Union's (EU) lead.
With the financial industry looking to scale its application of AI and machine-learning technologies rapidly, many regulators are focusing on board-level engagement and strong governance principles that will enable regulated firms to deal with challenges posed by these new technologies.
SINGAPORE'S PRINCIPLES-BASED APPROACH
The Monetary Authority of Singapore's (MAS) guidelines on Individual Accountability and Conduct (IAC), which apply to regulated financial institutions, are intended to promote senior managers' individual accountability, strengthen oversight of material risk personnel, and reinforce standards of proper conduct among all employees. In particular, each senior manager's areas of responsibility must be clearly specified to ensure that senior managers are held to account for matters under their purview. A significant hurdle for senior managers is likely to be transparency in AI systems.
The IAC regime is likely to be used as a tool for ensuring that firms take responsibility for assessing AI-related risks and allocate that responsibility appropriately within the organisation. Firms implementing AI systems need to consider who is ultimately responsible for those systems, both operationally and in terms of their output.
To tackle the specific risks that AI poses, policymakers around the world, including Singapore, are developing initiatives to promote strong governance and risk management for the deployment of AI in financial services, largely via guidance rather than 'hard law'. The MAS released a set of principles in 2018 to promote fairness, ethics, accountability and transparency (known as the FEAT principles) in the use of AI in data analytics, specifically with respect to the finance sector. These principles aim to provide guidance to firms offering financial products and services (including banks and insurers) on the responsible use of AI and data analytics (AIDA), so as to strengthen internal governance around data management and use and promote public confidence in the use of AIDA.
NEW DATA PROTECTION REGIMES
Data underpins many advances in AI, and we foresee that the data protection focus on AI will extend to financial services in due course, particularly where there is potential for consumer harm, such as the use of AI to assess eligibility for financial products or to set insurance premiums.
Recent amendments to Singapore's Personal Data Protection Act 2012 (PDPA) were implemented in phases, with the first batch of amendments coming into force on Feb 1, 2021. While the PDPA does not have a specific provision relating to the use of AI, it sets out a data protection framework on the collection, use and disclosure of personal data by private-sector organisations in Singapore to protect the personal data of individuals, support public trust in the digital economy, and enable innovation in the data space.
In order to collect, use or disclose an individual's personal data, the PDPA requires an organisation to obtain the individual's prior consent, unless an exception or other processing ground under the PDPA applies. Alternative processing grounds to consent have also been introduced, such as the business improvement exception which may be helpful for the use of AI for internal business purposes.
MINIMISING RISK, MAXIMISING OPPORTUNITIES
What, then, does all this mean for financial institutions? While understanding the value that AI brings to their business and their customers, organisations need to take a progressive and ethical approach to anticipating the future impact of these technologies, and be on the front foot for compliance with existing and future AI regulations. This is an ongoing responsibility to be considered at every stage of the technology adoption process. At the design stage, before a firm even applies an AI tool, building in compliance by design should be a key objective. And once the tool is adopted, ongoing monitoring procedures and clear communication with customers are essential.
Financial regulators take a tech-neutral approach to enforcing their rulebooks, which means that firms will need to map their new technology against existing law and regulation. They need to fully consider how new products and services fit within the regulatory framework across all relevant jurisdictions, and what this would entail when deployed. Getting the legal and regulatory structuring right from the get-go can make all the difference for financial institutions looking to take full advantage of AI.
- The writer is partner and head of Asia TMT at Linklaters.
Source: Business Times © Singapore Press Holdings Ltd. Permission required for reproduction.