
Cybersecurity alert in S’pore as Claude-maker Anthropic tests new AI model

Source: Straits Times
Article Date: 17 Apr 2026
Author: Sarah Koh

The Cyber Security Agency of Singapore's advisory comes days after reports that Anthropic has been testing its latest AI model with about 50 companies, and that Claude Mythos reportedly found software flaws in major browsers and operating systems.

Organisations in Singapore are being urged to strengthen their cybersecurity measures, days after artificial intelligence company Anthropic began testing a frontier model that is reportedly able to compromise existing software.

Immediate mitigation measures include applying software patches for all critical and high-severity vulnerabilities, implementing multi-factor authentication across all interfaces and gateways, and reviewing user permissions to remove unnecessary access rights, the Cyber Security Agency of Singapore (CSA) said in an advisory on April 15.

“Frontier AI models can reportedly reduce the time taken to identify vulnerabilities and engineer exploits – cutting short the duration from months to hours,” said CSA.

The agency added that such models are capable of analysing billions of lines of code to identify weaknesses and conduct security analysis at speeds that outpace the time taken to carry out a manual review.

“However, the same capability could also be misused by cyberthreat actors to accelerate vulnerability exploitation and the development of malicious capabilities,” it added.

While there are no indications that such capabilities are currently being misused, CSA added that the advisory is meant to help organisations plan ahead to guard against such risks.

Still, companies should immediately patch critical vulnerabilities in internet-facing systems, which, if compromised, could cause widespread impact on company systems.

“These assets face the greatest exposure to automated attacks and present the highest risk of widespread impact if compromised,” said CSA.

Access to all internet-facing development and test environments should also be strictly controlled. Otherwise, these systems should be disconnected from the internet, said the agency.

User permissions should also be reviewed to grant access rights only to those who need them for their job functions, and dormant and unused work accounts should be deleted.

The CSA advisory comes days after news broke in April that Anthropic had begun testing its latest AI model with a group of about 50 companies, instead of launching it for public use.

The Claude Mythos Preview is reportedly able to autonomously surface vulnerabilities in software systems and generate code to exploit flaws. Anthropic said the model has found vulnerabilities in every major browser and operating system.

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” Anthropic said in a statement on its website. “The fallout – for economies, public safety and national security – could be severe.”

In the longer run, CSA has also urged organisations to continuously monitor critical attack pathways such as network traffic and user behaviour, and to focus surveillance on high-risk activities on privileged accounts and access to sensitive systems.

To shorten the time needed to deploy security updates, companies are also advised to streamline approval processes and pre-test security patches in isolated environments.

“AI-powered attacks can weaponise newly disclosed vulnerabilities within hours of publication, making rapid patch deployment critical to preventing mass exploitation,” said CSA.

To detect vulnerabilities quickly, the authorities also called on companies to use AI tools to continuously scan for misconfigurations and weak credentials across their information technology infrastructure.

“Frontier AI models represent a major advancement in enhancing cybersecurity capabilities, but there are also risks involved,” said CSA.

“Organisations should take proactive steps to raise cyberhygiene standards and strengthen overall cyberdefence posture to protect themselves against risk of attacks from frontier AI models.”

A CSA spokesperson told The Straits Times that the agency is working with industry partners and technical experts to evaluate and address the capabilities of these AI tools.

“Such AI tools currently do not create fundamentally new classes of attacks – they primarily automate and accelerate existing methodologies,” said the spokesperson, adding that proper cyberhygiene and greater vigilance can help mitigate the threats.

“However, it is recognised that such AI tools reduce the time and resources required to conduct cyberattacks.”

The agency has also alerted sector leads and critical information infrastructure owners to tighten cyberhygiene measures, and will be meeting them in the coming weeks to discuss the implications for Singapore’s cybersecurity.

In a statement to ST, the Monetary Authority of Singapore (MAS) said: “Financial institutions need to redouble efforts to strengthen their security defences, proactively identify and close vulnerabilities, and raise vigilance on cyberhygiene, including timely security patching. MAS is coordinating closely with the CSA to further strengthen support to critical infrastructure operators.”

Source: The Straits Times © SPH Media Limited. Permission required for reproduction.
