Bill to combat digitally manipulated content, deepfakes during elections tabled in Parliament
Source: Straits Times
Article Date: 10 Sep 2024
Author: Chin Soo Fang
If it is passed, candidates can ask for a review of content that has misrepresented them.
A new Bill will put in place measures to counter digitally manipulated content during elections, including misinformation generated using artificial intelligence (AI), commonly known as deepfakes.
The proposed safeguards under the Elections (Integrity of Online Advertising) (Amendment) Bill will apply to all online content that realistically depicts a candidate saying or doing something that he or she did not.
This includes content made using non-AI techniques like Photoshop, dubbing and splicing.
If the Bill is passed, candidates will be able to ask the Returning Officer (RO) to review content that has misrepresented them. A false declaration of such misrepresentation is illegal and could result in a fine or loss of a seat.
Others can also ask for such content to be reviewed. Publishing it will be illegal from the time the Writ of Election is issued to the close of polling.
The move comes ahead of a general election that must be held by November 2025.
The RO can issue corrective directions to those who publish prohibited online election advertising content under the proposed new law. Social media services that fail to comply may be fined up to $1 million upon conviction, while all others may be fined up to $1,000, jailed for up to a year, or both.
Corrective directions may require taking down the offending content, or disabling access by Singapore users to such content during the election period.
Minister of State for Digital Development and Information Rahayu Mahzam tabled the Bill in Parliament on Sept 9. It will be debated at the next available sitting and if passed, will amend the Parliamentary Elections Act and the Presidential Elections Act to introduce the new safeguards.
To be protected under it, prospective candidates will first have to pay their election deposits and consent to their names being published on a list that will be put up on the Elections Department’s website some time before Nomination Day.
If they choose to do so, it will be the first time that the identities of prospective candidates are made public before Nomination Day.
The measures will also cover successfully nominated candidates from the end of Nomination Day to Polling Day.
The Ministry of Digital Development and Information (MDDI) said in a press release that while the Government can already deal with individual online falsehoods against the public interest through the Protection from Online Falsehoods and Manipulation Act (Pofma), targeted levers are needed to act on deepfakes that misrepresent candidates during elections.
“Misinformation created by AI-generated content and deepfakes is a salient threat to our electoral integrity,” said an MDDI spokesperson.
“We see this new Bill not as a replacement for Pofma, but rather as a means to augment and sharpen our regulations under the online election advertising regime, to shore up the integrity of our electoral process.”
The spokesperson added that under Pofma, the Government responds when it knows what the facts are – for example, when someone spreads a falsehood about the reserves or housing prices.
“However, in the case of deepfakes featuring political candidates, it is much more difficult for the Government to establish what an individual said or did not say, did or did not do. Therefore, we do need the individual to come forward and say that this is a misrepresentation.
“While we can use a set of technological tools to assess whether the content is AI-generated or manipulated, these tools give us a certain confidence level, but it is not 100 per cent. So there is quite a lot of weight given to what an individual claims is the truth, and this is where it differs from Pofma.”
Fraudsters have disrupted elections in many countries, including Slovakia and India. More recently, fake videos of presidential nominees Kamala Harris and Donald Trump have proliferated on social media in the lead-up to what is widely billed as America’s first AI election in November.
In response, there has been a growing momentum worldwide to deal with deepfakes during elections.
For example, South Korea implemented a 90-day ban on political AI-generated content before its election in April.
Its National Election Commission said it detected 129 deepfakes deemed to violate its laws on the election of public officials between Jan 29 and Feb 16.
In February, Brazil also banned synthetic content that harms or favours a candidacy during elections.
Closer to home, then Prime Minister Lee Hsien Loong warned the public of deepfake videos circulating online in December 2023 which showed him and then Deputy Prime Minister Lawrence Wong promoting investment platforms. The videos used AI to mimic their voices and facial expressions.
Minister for Digital Development and Information Josephine Teo told Parliament in January that Singapore needs to grow new capabilities to keep pace with scammers and online risks.
She announced a new arsenal of detection tools Singapore is developing to tackle the rising scourge of deepfakes and misinformation. The tools will be designed under a new $50 million initiative to build online trust and safety.
Beyond elections, a new code of practice will be introduced to tackle deepfakes and other forms of manipulated content.
The Infocomm Media Development Authority (IMDA) will introduce the code requiring social media services to put in place measures to address digitally manipulated content.
This will ensure that they do more to gatekeep, safeguard and moderate content on their platforms. IMDA will engage social media services in the coming months to work out the details of the code.
Bill to combat deepfakes during election timely despite challenges: Analysts
Proposed measures to combat deepfakes during elections are timely given the proliferation of such content worldwide, said analysts. But the effectiveness of such laws will depend on factors such as enforcement and public awareness, they added.
The Elections (Integrity of Online Advertising) (Amendment) Bill, tabled in Parliament on Sept 9, will prohibit the publication of digitally manipulated content during elections. This refers to content that realistically depicts an election candidate saying or doing something that he or she did not, and includes misinformation generated using artificial intelligence (AI) – commonly known as deepfakes.
These measures will be in force from the issuance of the Writ of Election to the close of polling on Polling Day, with the Returning Officer empowered to issue corrective directions to those who publish such content.
Professor Mohan Kankanhalli, director of NUS’ AI Institute, said the problem of misinformation and disinformation requires a combination of technical solutions, regulation and legislation, and public education.
“These laws not only serve as deterrents, they also provide legal recourse post-publication. Such legislation is therefore necessary,” he said.
He added that while such laws signal a proactive stance, enforcement in other countries has been challenging.
“Detecting and proving malicious intent behind deepfakes can be difficult,” he said. “However, these capabilities are constantly improving.”
Prof Kankanhalli cited the example of the 2020 US presidential election, where deepfakes were a concern, though their direct use was limited.
One notable case involved a manipulated video of then House Speaker Nancy Pelosi, which was slowed down to make her appear intoxicated or cognitively impaired. It showed how video manipulation could mislead the public, and demonstrated the potential for deepfakes to be used as a political weapon, he said.
He also cited the example of the 2019 Indian general election, when deepfakes were used by the Bharatiya Janata Party (BJP) to create manipulated videos for campaign purposes. On one occasion, the party produced videos of Delhi BJP president Manoj Tiwari, in which he appeared to speak in different languages, including Hindi and Haryanvi. The videos were designed to reach specific regional audiences more effectively without requiring him to physically record the same speech multiple times.
“Though this use of deepfake technology wasn’t meant to deceive in a malicious sense, it raised ethical concerns about the potential for such technology to mislead voters if misused,” Prof Kankanhalli said, adding that this incident also marked one of the first high-profile cases where deepfake technology was used in a political campaign.
Assistant Professor Roy Lee, from SUTD’s Information Systems Technology and Design pillar, noted that concerns have also been raised about deepfakes for the upcoming 2024 US presidential election. Manipulated videos targeting Indonesian politicians had also emerged during Indonesia’s recent election, he said.
In response to this growing problem, laws aimed at curbing deepfakes have been introduced in several countries.
For example, the US state of California passed a law in 2019 to criminalise the distribution of manipulated media such as deepfakes intended to mislead voters. Specifically, it prohibits individuals or entities from distributing such media with malice within 60 days of an election.
In 2022, the European Union enacted the Digital Services Act, which imposes stricter regulations on digital platforms, including measures to prevent the spread of manipulated content.
Prof Lee said: “These laws have been part of broader efforts to prevent election interference, although their effectiveness largely depends on timely detection and public awareness.”
Mr Benjamin Ang, head of NTU’s Centre of Excellence for National Security, noted that the US has also banned the use of AI-generated voices in robocalls, including those used in election campaigns to spread misinformation and mislead voters.
The decision came after AI-generated robocalls impersonating President Joe Biden sought to discourage voting in the New Hampshire primary election in January. Some experts noted that enforcing this law against foreign actors seeking to interfere in US elections may still be challenging, though it sends a clear message that exploiting AI to mislead voters will not be tolerated.
“The law is only one part of the battle to combat deepfakes and protect electoral fairness and integrity because this also requires vigilance and cooperation from tech platforms where the deepfakes are circulating, public education about the dangers of spreading deepfakes, and our own personal choice to stop and think very seriously before we share any videos or other content,” said Mr Ang.
He added: “The impact of this Bill, like all other laws, should be to set standards of behaviour by which our society can maintain order, resolve disputes, and protect rights.”
Dr Carol Soon, principal research fellow at the Institute of Policy Studies and adjunct principal scientist at the Centre for Advanced Technologies in Online Safety (Catos), which studies deepfakes, said deepfakes also make it easier for political candidates to falsely claim that genuine content has been manipulated or generated by AI. This allows them to benefit from the “liar’s dividend” in a polluted information ecosystem.
For example, during the recent Turkish election, a video that showed compromising images of an electoral candidate was said to be a deepfake when it was in fact real.
“This proposed Bill is surgical as it is focused both in terms of the defined offence and timeframe. The Bill thus seeks to strike the fine balance between upholding election integrity and allowing for non-harmful use of generative AI such as entertainment, education and creative usage,” she said.
On the early disclosure of candidates’ names, Prof Lee said this primarily enhances transparency in the electoral process.
“This transparency can help mitigate the risk of misinformation and deepfake-related content as voters will have more time to scrutinise information about candidates and ensure its accuracy,” he said. “It also provides more time for online platforms and regulatory bodies to monitor and take corrective actions against manipulated content targeting these candidates.”
Source: Straits Times © SPH Media Limited. Permission required for reproduction.