
S’pore firms must step up for deepfake threats, warn experts

Source: Straits Times
Article Date: 13 Sep 2024
Author: Krist Boo

Businesses here are mostly underprepared amid the technology's rapid advancement, experts warn.

In January, a high school in the American city of Baltimore was thrown into chaos when an anonymous e-mail containing an audio recording surfaced, allegedly featuring the school principal spewing racist comments.

The clip went viral, garnering over 27,000 shares on social media and leading to the principal’s suspension amid public outrage and threats to his safety.

Police investigations revealed that the recording was a deepfake, made by the school’s athletics director in retaliation for an investigation into the use of school funds.

Weeks later, however, Pindrop, an audio-deepfake detection company, published its own analysis: the clip had been doctored and lightly edited, but the core of the recording was real. The company said it was 97 per cent sure.

This story, which made global news, shows that deepfakes are getting so good that even experts clash over what is real.

Deepfakes are images, videos or audio clips, edited or generated using artificial intelligence (AI) tools, that can depict real or non-existent people.

After generative AI went mainstream in 2022, deepfakes became better, more accessible and scalable.

Threat actors now offer deepfake campaign services on the dark web for little more than a hundred dollars.

While an estimated 98 per cent of deepfake content is pornographic in nature and the remainder is concentrated on politics, the risk of this technology triggering a corporate apocalypse is real.

And Singapore organisations, say experts, are not ready to deal with them.

Focus still on traditional threats

Deepfake risks are often seen as less imminent and immediate compared with ransomware or malware, which are better understood and often addressed by regulations and cyber-security frameworks, said Mr Righard Zwienenberg, a senior research fellow at cyber-security firm Eset.

“Deepfakes are less likely to disrupt day-to-day operations and thus corporations might not feel the same level of regulatory pressure to address them proactively,” he added.

“The perceived financial risk might seem lower.”

Because they are relatively new and unfamiliar, firms are less likely to be vigilant and less prepared, said Mr Wong Wai Meng, who chairs the Smart Technologies Action Committee at the Singapore Business Federation (SBF).

He said: “Businesses have had little exposure to the potential shape and form in which such attacks may present themselves.”

Still, an SBF survey of 529 small and medium-sized enterprises conducted around May showed that confidence in dealing with cyber attacks declined from 78 per cent in 2023 to 75 per cent in 2024.

It reflects growing concerns about new attack forms, including deepfake technology, Mr Wong said.

Forms of threats and how they hurt

Since 2019, scammers have used deepfake media to augment phishing or business e-mail compromise scams, said Mr Wayne Lim, a director at the Cyber Security Agency of Singapore (CSA).

Earlier this year, an employee at British engineering group Arup was duped into transferring over US$25 million (S$32.5 million) after attending a real-time, AI-generated video conference with scammers posing as his chief financial officer.

“AI technology has now made face-swop technology increasingly accessible, hence making spoofed identities easier and highly scalable, and able to bypass remote identity verification,” Mr Lim said.

Mr Vishak Raman, regional vice-president of sales at cyber-security firm Fortinet, said such scams often blend in real-world elements, such as fake organisations with legitimate-looking websites and physical offices, making them hard to detect.

Corporate attacks could jump easily from the boardroom into executives’ living rooms.

Mr Raman said: “It’s about, ‘okay, your son is going to this college. I know his college and I’m going to share these pictures. If you don’t want me to share them, this is the amount that you need to pay.’

“The most difficult part is emotional. How are we going to train our people against public humiliation? Not many organisations have thought about it.”

Mr Lee Joon Sern, vice-president of machine learning and cloud research at Ensign InfoSecurity, said: “A well-timed deepfake could severely damage reputations by falsely implicating executives in scandals.

“In other scenarios, deepfakes may be used to manipulate stock prices by spreading false news about a company’s financial health, all of which could have lasting consequences.”

The result is an erosion of trust, loss of business, reputational damage, legal liabilities and costly crisis control measures, he said.

Both technology and humans needed in the fight

The CSA has a triple-A mnemonic drill for organisations: Assess the message, Analyse audiovisual elements, Authenticate content.

For high-risk transactions, the agency recommends additional controls such as approvals from multiple individuals and alternate channels of verification, like callbacks to official numbers to confirm requests.

“If you have doubts about whether the person you are speaking to or e-mailing is a deepfake, ask the person a question that only a few people, including yourself, would know,” said CSA’s Mr Lim.

He added that as cyber attacks increasingly target supply chains, firms should check their vendors’ readiness.
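To make these controls concrete, the sketch below combines a dual-approval rule with an out-of-band callback and a shared-secret question, in the spirit of the CSA's advice. It is a minimal Python illustration: every name, number and threshold in it is hypothetical, not drawn from CSA guidance.

```python
# A minimal sketch of controls for high-risk transactions: approvals
# from multiple individuals, plus a callback on an official number.
# All names, numbers and thresholds are hypothetical illustrations.
from dataclasses import dataclass, field

# Official numbers come from internal records, never from the
# incoming request itself.
OFFICIAL_DIRECTORY = {"cfo": "+65 6000 0001"}

HIGH_RISK_SGD = 50_000     # hypothetical threshold for extra controls
REQUIRED_APPROVALS = 2     # approvals from multiple individuals

@dataclass
class TransferRequest:
    requester: str                      # e.g. "cfo"
    amount_sgd: float
    approvals: set = field(default_factory=set)
    callback_verified: bool = False

def verify_by_callback(req: TransferRequest, answer_given: str,
                       expected_answer: str) -> None:
    """Call back on the official number and ask a question only a
    few people, including yourself, would know the answer to."""
    if OFFICIAL_DIRECTORY.get(req.requester) is None:
        return  # no trusted number on file; verification fails
    # (Placing the actual call is stubbed out in this sketch.)
    req.callback_verified = (answer_given == expected_answer)

def may_execute(req: TransferRequest) -> bool:
    """High-risk requests need both controls before execution."""
    if req.amount_sgd < HIGH_RISK_SGD:
        return True
    return (len(req.approvals) >= REQUIRED_APPROVALS
            and req.callback_verified)

req = TransferRequest(requester="cfo", amount_sgd=25_000_000)
req.approvals.update({"finance director", "coo"})
verify_by_callback(req, answer_given="orchid", expected_answer="orchid")
print(may_execute(req))  # True only with both approvals and a callback
```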

Some companies are restricting on-premises recordings to avoid deepfake manipulation, Eset’s Mr Zwienenberg said.

For instance, several airlines now prohibit recording staff and passengers without consent.

To be more secure, he said, companies could release content through authorised channels, set up standards for recordings and invest in communication protocols, such as alerts, to quickly debunk deepfakes.

“In cases of potential serious reputational damage, these counter statements should also be sent to relevant publications to prevent the spread of false information,” he added.

Mr Andrew Bud, chief executive of biometric solutions firm iProov, said the old way of defending against attacks was to supply clients with a piece of software that gets updated every three or six months.

iProov, which made the facial recognition software for Singpass, updates its software as many as 100 times a month.

“In the last 12 months, we have seen the quality of these face swops go from rather easy to spot to absolutely impossible for a person to spot. So any system designed to defend against them has to move a little bit faster,” he said.

Government lead needed

Mr Bud said: “The question being hotly debated in Europe is whether only AI can defend against AI, or whether only people can. It needs to be both.

“AI has to be a first line of defence. But when it comes to studying novel and innovative patterns of behaviour, you need skilled people to do that.”

Lawmakers likely have to lead with regulation, said Mr Mandeep Singh, global head of technology research at Bloomberg Intelligence.

He pointed to a recent AI Bill passed in California that will add guard rails and require the watermarking of AI-generated synthetic content from large language models.

“Cyber-security companies won’t have much of a role here as these deepfakes are generated from legit prompts and there is no hacking element,” he said.

“The focus will remain on making sure the watermarks cannot be removed from the AI videos and having provenance around how this content is consumed on social media and the broader internet.”
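As a rough sketch of the provenance idea, the Python snippet below tags content with a keyed hash so that any alteration is detectable on verification. It is purely conceptual: real provenance standards such as C2PA, and watermarks embedded in the media itself, are far more elaborate, and the key and function names here are invented.

```python
# Conceptual sketch of content provenance: the creator tags content
# with a keyed hash, and platforms check the tag before trusting it.
# The key and function names are invented for illustration.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-secret-key"  # held by the content creator

def sign_content(content: bytes) -> str:
    """Derive a provenance tag from the content and the key."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Any change to the content invalidates the tag, which is why
    the hard problem is stopping tags from being stripped entirely."""
    return hmac.compare_digest(sign_content(content), tag)

video = b"...synthetic video bytes..."
tag = sign_content(video)
print(verify_content(video, tag))              # True: intact provenance
print(verify_content(video + b"edited", tag))  # False: content altered
```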

Mr Matthew A O’Keefe, Asia-Pacific cyber-security lead at consulting firm KPMG, said the complexity and rapid advancement of the technology warrant more collaboration among the Government, technology firms and academia.

Government involvement will help raise public awareness, fund response technologies, and support incident response and legislation, he added.

In April, it was revealed in Parliament that the police had not been tracking the number of deepfake scams, as complaints were few.

But the Government has mounted collaborative efforts across multiple agencies.

In July 2023, it passed the Online Criminal Harms Act, which allows it to direct online platforms to act against scam-related accounts and content, including deepfakes.

It also set up the Centre for Advanced Technologies in Online Safety, with $50 million in funding over five years, to develop tools to detect deepfakes.

A new code of practice requiring social media services to implement measures to prevent and counter the abuse of digital fakes is being worked on.

Deepfakes here to stay, the world will adapt

“By definition, all AI-generated content is fake,” said Professor Theodoros Evgeniou, who teaches decision sciences and technology management at business school Insead.

He believes that, eventually, the world will adapt.

If online platforms are required to monitor harmful posts, penalties are imposed on perpetrators, and there is public engagement in policing such content, a combination of regulations and technologies will be effective, he said.

Until then, Singapore enterprises must step up.

Mr Meng Liu, senior analyst at Forrester, put it bluntly: “Singapore businesses are mostly underprepared for the upcoming threats about deepfakes.

“We predict that there will be at least one significant deepfake fraud or scam case with a large enterprise in Singapore in 2025.”

Source: Straits Times © SPH Media Limited. Permission required for reproduction.
