Chatbot glitches: When disclaimers won’t save you in court – Opinion

Source: Business Times
Article Date: 04 Dec 2025

As Singapore courts signal that unverified AI output is negligent, businesses face a new era of liability.

The conversation around artificial intelligence (AI) liability has moved from theoretical debate to real-world business risk. What began as minor glitches, such as a chatbot inventing a refund policy, has escalated into a crisis of product liability and professional negligence that is hitting closer to home.

A significant shift occurred in May 2025, when a US federal court allowed a wrongful death lawsuit against an AI business to proceed. The case involves a tragedy where a chatbot allegedly contributed to a teenager’s suicide. The legal argument is a wake-up call for tech deployers: the plaintiff argues the AI is not merely a service but a commercial product that was defectively designed.

Closer to home, the Singapore High Court delivered a sharp warning in September 2025. A lawyer was personally sanctioned for submitting fake case citations generated by an AI tool. The court didn’t just call it a mistake; it labelled the failure to verify the AI’s work as “improper, unreasonable and negligent”.

For business leaders, the message is clear: AI is a commercial agent that can bind your business to contracts and expose you to lawsuits. As AI takes centre stage in customer service, the old rule of “caveat emptor” – let the buyer beware – is ending. We are shifting to “caveat venditor” – let the seller beware – where the business bears the burden of ensuring the tool is safe.

The Singapore reality: no safe harbour

This shift is particularly risky in Singapore because of our specific laws. Unlike the US, where tech platforms often rely on broad immunity laws to shield themselves from liability for third-party content, Singapore offers no such blanket immunity for generative AI (gen AI).

The distinction matters for businesses in Singapore because gen AI doesn't just deliver messages; it creates them. If your corporate chatbot "hallucinates" and misrepresents or misleads a consumer, or even defames a competitor, your business could be held directly responsible as the publisher of that information. You cannot simply claim you were a passive platform.

The business risks: beyond the fine print

While businesses may try to limit or avoid liability by relying on disclaimers, these must still be viewed through the lens of the Unfair Contract Terms Act, which balances the interests of consumers and businesses in determining when liability may legitimately be excluded.

Risk 1: Reframing negligence

The law restricts businesses from excluding liability for damage caused by their own negligence unless the contract term is “reasonable”.

Crucially, Singapore’s courts have now signalled that relying on AI without checking its work is negligent. If it is negligent for a lawyer to trust an AI without verifying, it is arguably negligent for a bank or retailer to deploy a chatbot that makes unverified statements that mislead a consumer. A standard disclaimer in your terms saying, “We are not responsible for AI errors” is unlikely to be accepted as “reasonable” when the courts have explicitly flagged the duty to verify.

Risk 2: The expectation gap

The law also prevents a business from using fine print to deliver a service substantially different from what was reasonably expected.

If your AI chatbot acts as your customer service agent and promises a refund, the customer reasonably expects that promise to be honoured. You can’t say, “Our AI speaks for us, but we don’t have to honour what it says” in your terms. That creates a service gap that the law is unlikely to protect.

How to manage AI risk

In this new environment, claiming ignorance is not a strategy. Businesses need a proactive framework to protect themselves.

Be upfront with AI disclosures

Standard terms of service are no longer enough. Businesses must develop specific "AI disclosures" that are prominent and clear. First, businesses need to let customers know that they are talking to an AI system, not a person. Second, the terms need to state plainly the technology's known limits, such as the possibility of hallucinations or mistakes. This establishes a baseline of informed consent. A question the courts will then have to grapple with is whether customers can trust a corporate AI bot, or whether they are still obliged to verify its output.
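
As a rough sketch of how such a disclosure might surface in a deployed chatbot (the names and wording below are hypothetical, not drawn from any specific framework), the notice can be injected as the first, prominently rendered message of every session rather than buried in the terms:

    # Hypothetical sketch: a plain-language AI disclosure shown at the
    # start of every chat session. Wording and names are illustrative only.

    AI_DISCLOSURE = (
        "You are chatting with an automated AI assistant, not a person. "
        "It can make mistakes, including stating things that are not true. "
        "Please verify important details against our official policies."
    )

    def start_session(user_id: str) -> list[dict]:
        """Open a new chat transcript with the disclosure as its first entry."""
        return [{
            "role": "notice",  # rendered prominently in the UI, not hidden in T&Cs
            "user_id": user_id,
            "text": AI_DISCLOSURE,
        }]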

Prove “reasonableness” through action

A disclaimer’s reasonableness is based on the business’ actions, not just its words.

Help users check the facts: Show good faith by giving users the tools to check the AI’s claims. For example, program chatbots to provide direct links to the actual policies or data sources. This allows the user to verify information, thus reducing the risk of being misled.
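
One way to implement this, sketched below with hypothetical names and URLs, is to have the bot attach the policy documents it relied on to every answer, so the user can check each claim at its source:

    # Hypothetical sketch: every chatbot answer carries links to the
    # source policies it drew on, so users can verify claims themselves.

    POLICY_SOURCES = {
        "refunds": "https://example.com/policies/refunds",
        "shipping": "https://example.com/policies/shipping",
    }

    def answer_with_sources(answer_text: str, topics: list[str]) -> dict:
        """Bundle a generated answer with verifiable source links."""
        sources = [POLICY_SOURCES[t] for t in topics if t in POLICY_SOURCES]
        return {
            "answer": answer_text,
            "sources": sources,            # shown as "Check this against: ..."
            "needs_review": not sources,   # flag answers with no citable source
        }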

Keep humans in the loop: Given the court’s focus on oversight, businesses should use “smart triage”. Systems should detect high-risk queries, like legal disputes, fraud or safety concerns, and automatically escalate them to a human operator. This serves as a risk control mechanism and demonstrates that you have a responsible system design.
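
A minimal version of such triage, assuming a simple keyword screen (a production system would more likely use a trained classifier), might look like this:

    # Hypothetical sketch of "smart triage": detect high-risk queries and
    # route them to a human operator instead of an unsupervised AI reply.

    HIGH_RISK_TERMS = {
        "legal": ("lawsuit", "sue", "lawyer", "dispute"),
        "fraud": ("fraud", "scam", "unauthorised charge"),
        "safety": ("injury", "unsafe", "harm"),
    }

    def triage(query: str) -> str:
        """Return 'human' for high-risk queries, 'ai' otherwise."""
        lowered = query.lower()
        for category, terms in HIGH_RISK_TERMS.items():
            if any(term in lowered for term in terms):
                # Record the trigger for the audit trail, then escalate.
                print(f"Escalating to human agent: {category} risk")
                return "human"
        return "ai"

For example, triage("I want to dispute this unauthorised charge") would return "human", while a routine delivery query would stay with the bot.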

Secure your supply chain

Most businesses deploy third-party AI solutions rather than developing their own, which creates a critical chain of liability. While your business is liable to the consumer downstream, you must protect yourself against your AI vendor upstream.

As a result, procurement contracts with AI providers are now a critical line of defence. They must be updated to move beyond standard software agreements and include specific protections:

  • Warranties: Explicit guarantees from the vendor that the AI meets specific accuracy and safety standards, and is compliant with relevant laws.
  • Indemnities: Financial protection clauses that cover AI-specific risks like “hallucinations” or design defects.

If the system fails, the vendor that built it should bear the cost, not just the business that deployed it.

While AI holds immense potential, the rules of the game have changed significantly. New rulings in the US and Singapore have placed the responsibility squarely on business leaders. Good governance isn't just about following the law; it is about building a resilient business that your customers can trust.

Both writers are from BR Law Corporation. P Sivakumar is director and Dillion Chua is associate director.

Source: The Business Times © SPH Media Limited. Permission required for reproduction.
