S’pore studying restrictions on direct messaging, auto-play on social media to protect kids
Source: Straits Times
Article Date: 01 Apr 2026
Author: Sarah Koh
The primary concern with direct messaging is that children and teens are exposed to sexual grooming, bullying and inappropriate content.
Social media features such as direct messaging and video auto-play may soon be restricted as the Singapore authorities study ways to protect children from online harms and addiction.
This comes on the heels of social media bans for those below 16 years old in some countries, and an intensifying global clampdown on addictive social media features such as auto-play and infinite scroll, particularly for young users.
Noting that the Government has so far focused its efforts on curbing exposure to inappropriate content, Minister for Digital Development and Information Josephine Teo said the safety of system-level features also needs to be enhanced.
Critics argue that the endless, automatic video playback used by sites such as Instagram and YouTube encourages addictive behaviour and excessive screen time. Direct messaging features embedded in some social media platforms, such as Instagram, also expose children and teens to sexual grooming, bullying or inappropriate content from unauthorised accounts.
The Government is in talks with major social media platforms on the necessary safety enhancements, said Mrs Teo, adding that the restrictions will differ across platforms as they are designed differently.
She was speaking to the media at the newly launched Lorong AI site in one-north on March 27.
“Young users may be on the receiving end of unwanted interaction (with direct messages) – parents tell us that they are very fearful, and I completely understand,” said Mrs Teo. “In the physical world, we warn our children not to speak to strangers. But if strangers can reach children online, what is the parent to do?”
She added that the Government will consult parents and youth on their views regarding such measures before they are implemented.
Singapore’s intensified efforts to protect young users have come on the heels of landmark legal precedents.
Earlier in March, a US jury found Meta and YouTube liable for building their services in an addictive manner that caused harm to a 20-year-old plaintiff, who was awarded US$6 million (S$7.7 million) in damages. She began using Instagram at nine and YouTube at six. Her lawyer claimed that features such as auto-play and infinite scroll are designed to keep people on the apps and make services addictive.
Though an increasing number of countries, such as Australia, Indonesia and France, have moved to ban social media for young users below a certain age, Singapore has been keeping tabs on alternative ways of approaching the issue.
Mrs Teo cited Estonia, which is opposed to a ban for children despite a push by the European Union. The country is of the view that youth need to learn to use social media if they are expected to navigate the digital domain, she said.
In 2025, Estonian Minister of Justice and Digital Affairs Liisa Pakosta said it is more important to enforce the strict requirements the EU already has in place for platforms to protect youth. These include the EU’s General Data Protection Regulation, which bans social media firms from processing the personal data of children below 13.
Mrs Teo also cited new laws in New York that prevent apps such as TikTok and Instagram from showing algorithm-driven recommendations to young users. Instead, people below the age of 18 will see posts only from accounts they follow. She said that Singapore is watching how firms will make changes.
Singapore, for its part, has rolled out age assurance requirements that, from April 1, require app stores to gatekeep what users below 18 can download. Age assurance measures refer to methods of ascertaining a user’s age, whether by checking government-issued identity documents or by analysing facial age or online usage data.
The Government has plans to extend these age assurance requirements to social media services.
“The first place to start with is really age assurance – you need to be able to tell quite accurately the age of the user,” said Mrs Teo.
She added that platforms offering teen accounts, such as Instagram, will not be able to enforce these protections unless they can verify the age of their users.
These added layers of safeguards will shore up existing laws that keep major social media services accountable in minimising Singapore users’ exposure to harmful content such as violence or cyberbullying, with additional protection for children.
Under the Code of Practice for Online Safety that was rolled out in 2023, Facebook, Instagram, YouTube, X, TikTok and HardwareZone are required to submit reports to the Infocomm Media Development Authority (IMDA) annually, and proactively detect and take down egregious content such as child pornography and terrorism.
The results of the second annual audit published by IMDA on March 31 found that X did not proactively detect and remove child sexual exploitation and abuse material, and TikTok did not do so for terrorism content.
Both platforms have been placed under enhanced supervision by the authorities, which requires them to provide regular status updates on the implementation of rectification measures until the issues have been resolved.
“Every time we put out something, we are very conscious of the fact that the day we put it out, it is already a little outdated,” said Mrs Teo, adding that the code will continue to be refreshed to keep up with developments in the digital domain.
Source: The Straits Times © SPH Media Limited. Permission required for reproduction.