
How Singapore is charting its path amid AI regulatory fragmentation: Opinion

Source: Business Times
Article Date: 28 Oct 2025
Author: Yeong Zee Kin

The country's targeted approach aims to protect society, foster competitiveness and encourage experimentation.

When artificial intelligence (AI) first took centre stage a decade ago, headlines were dominated by concerns over bias and discrimination, as well as the possibility that the technology might outpace our ability to govern it. In response, a wave of responsible AI (RAI) frameworks emerged to guide safe and trustworthy deployment.

While AI has driven business efficiency and transformed how we live and work, its risks cannot be ignored. As President Tharman Shanmugaratnam noted at Asia Tech x Singapore 2025 in May, we must “view the good with the bad” – maximising AI’s benefits while managing its perils.

Today, the OECD.AI Policy Navigator lists more than 1,300 international and national AI policy initiatives, nearly all incorporating RAI elements. Singapore is no exception, advancing its own National AI Strategy and developing RAI frameworks, including the Monetary Authority of Singapore’s FEAT (Fairness, Ethics, Accountability and Transparency) Principles and the Infocomm Media Development Authority’s Model AI Governance Framework – now extended to cover generative AI.

As efforts to regulate AI accelerate, what began as voluntary codes of governance is now shifting towards mandatory obligations – even as AI regulatory approaches diverge globally. The key challenge is whether these divergent governance models – shaped by different legal traditions and regulatory philosophies – can cross-pollinate to achieve the twin objectives of innovation and societal safeguards.

From principles to prescriptions

The European Union’s AI Act is the boldest step to date, setting rules that vary by the level of risk posed by AI systems. Those deemed unacceptable risks are severely restricted, while high-risk AI systems must pass conformity assessments. The EU had hoped to set the AI Act as the next global benchmark, though its widespread adoption looks unlikely for now.

China, by contrast, has moved quickly on a narrower front, targeting issues such as pricing discrimination on e-commerce platforms, algorithmic self-preferencing and deepfakes. It has introduced rules for providers of Internet information services deploying AI, including recommendation systems and generative models – and has rapidly rolled out technical standards. This speed is possible precisely because China’s AI regulations are narrowly targeted rather than comprehensive.

While both approaches are similarly prescriptive, they reflect divergent cultural nuances: the EU model prioritises individual rights, while the Chinese framework emphasises societal outcomes and pragmatic enforcement. These differences shape the pace of regulation and the public’s acceptance of AI in daily life.

For global firms, this divergence creates two increasingly distinct and costly operational landscapes where they must navigate competing compliance demands.

Balancing innovation and trust

Policymakers increasingly view trust and innovation as equally essential to supporting AI’s role in long-term economic growth.

Japan and South Korea reflect this duality most clearly. Japan’s Act on Promotion of Research and Development (R&D) and Utilisation of AI-related Technology fosters adoption through R&D support, shared computing infrastructure and workforce development. It also addresses trust and safety concerns through guidelines for businesses and technical standards for AI developers.

South Korea’s Basic Act on AI similarly enhances competitiveness through talent development and industry development programmes for data centres, startups and small and medium-sized enterprises, while ensuring safety of high-impact AI services through mandatory transparency obligations and voluntary certification.

These more recent regulatory frameworks suggest a change in tone: governance models that enable innovation while embedding clear, enforceable guardrails – unsurprising, given that they emerged from two of Asia’s leading industrial powerhouses. Fostering domestic AI champions within a stable, trusted ecosystem could give these economies an edge.

Targeted and incremental approaches to AI regulation

A third model sidesteps comprehensive regulatory frameworks, relying instead on technical standards to promote safe and trustworthy AI, while clarifying how existing laws apply. The US, UK and Singapore follow this approach.

The US has focused on managing AI risks through technical standards such as those set by the National Institute of Standards and Technology (NIST). Meanwhile, states such as California have introduced targeted laws requiring, for example, the labelling of AI-generated content and chatbots, the disclosure of training data used in AI systems, and transparency around risk assessments for frontier AI models.

Singapore has turned its voluntary frameworks into practical tools, including AI Verify and the Veritas toolkits for financial services. To reduce compliance costs, Singapore and the US have aligned their frameworks through a published crosswalk between AI Verify and NIST’s risk management framework.

Likewise, the UK has emphasised AI safety research through its recently renamed AI Security Institute. Singapore and the UK have also signed a memorandum of cooperation on AI safety research and risk management.

Complementing these, Singapore and the UK have moved to clarify how existing laws apply to AI systems, making amendments where necessary.

In Singapore, for example, the Personal Data Protection Commission has issued advisory guidelines detailing how the Personal Data Protection Act will be applied when personal data is used in AI systems. The Health Sciences Authority has also issued guidelines on how regulations of software medical devices will apply to AI systems. The Copyright Act was amended to clarify the conditions under which use of copyrighted materials for AI model development is permitted, such as conducting machine learning on lawfully obtained materials.

In the UK, the Information Commissioner’s Office has issued guidance on AI and data protection. The recently enacted UK Data Use and Access Act makes it easier for businesses to process personal data for automated decision-making, including those using AI. Notably, the UK Supreme Court recently drew a clear line – AI systems, it held, cannot be named as inventors in patent applications.

The next chapter of AI regulation

These different models of AI regulation reflect distinct regulatory objectives, yet all underscore the importance of technical standards. Until RAI technical standards mature and benchmarking tools are in place, ensuring the interoperability of those standards should remain a priority.

As policymakers increasingly recognise AI’s economic potential, they must not lose sight of its dark side – underscoring the need for clearer guidance on how existing laws apply and, where necessary, for targeted legislation to steer commercial and societal conduct.

Singapore takes a targeted, incremental approach to AI governance – protecting society, fostering competitiveness and encouraging experimentation. Guided by its “AI for the Public Good, for Singapore and the World” vision, it aims to build strong AI capabilities while promoting inclusive, responsible, and confident adoption across society.

There is no single winning formula in the global industrial AI race, and no single jurisdiction has discovered the “killer app” for regulating AI. But this may be where the practical middle ground lies: in common, interoperable standards that reduce compliance costs for any firm operating across borders. Indeed, technical standards are imperative, and Singapore’s incremental, standards-driven model is a pragmatic way forward for our little red dot.

The writer is chief executive of Singapore Academy of Law

Source: The Business Times © SPH Media Limited. Permission required for reproduction.
