Singapore's AI ethics model needs more bite

Source: Straits Times
Article Date: 06 Feb 2020
Author: Irene Tham

Singapore's artificial intelligence (AI) ethics framework is a great start, but it needs more diverse voices, greater transparency, fuller disclosure of what exactly goes into AI algorithms, and a discussion on the pricing of personal data.

Singapore's award-winning model on how artificial intelligence (AI) can be ethically used received a major update recently.

It is groundbreaking in many ways. For instance, it shows exemplary use cases - a global first. But it still requires more work.

The framework was first launched in January last year at the World Economic Forum in Davos, Switzerland. The Model AI Governance Framework, as it is called, won an award at a United Nations-sponsored summit for its ability to foster socio-economic development.

While groundbreaking at that time for summing up the principles that should be adhered to - such as explainability, transparency, fairness and human-centricity - the framework lacked examples to show how these abstract principles could be applied in the real world.

Last month, the framework document was updated with a dozen use cases - including those from Grab, DBS Bank, HSBC and American multinational pharmaceutical firm Merck Sharp & Dohme (MSD) - to do just this.

The updated document, totalling 119 pages, also comes with a self-assessment tool that distils the upheld principles into a questionnaire checklist. It aims to be an authoritative global guide for responsible, transparent and accountable AI use.

REAL-WORLD CASES

AI seeks to simulate human traits - such as problem solving, learning, planning and predicting. The technology can also process a vast amount of information and predict outcomes faster and more accurately than humans. But it can also run amok, requiring human oversight and accountability.

Thus, the level of human involvement in AI-based decision-making was discussed at length in all use cases in the framework document.

In situations where the probability and severity of harm to humans are low, no human involvement is required. Grab's example was used to explain how this approach can be justified.

The ride-hailing firm has completely outsourced ride allocations to an AI algorithm in what is known as a "human-out-of-the-loop" approach. The algorithm takes into account drivers' preferred trip types, and where they start and end their day to reduce trip cancellations initiated by them.

Grab's justification: It is not technically feasible for a human to manage the high volume of trip allocations - over 5,000 a minute. Also, there is little or no harm done when assigned trips are cancelled.
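
A minimal, hypothetical sketch of how such a fully automated matching step might look is set out below. The data fields, scoring weights and the simple best-score assignment are illustrative assumptions for this article; Grab's actual algorithm is not public.

from dataclasses import dataclass

@dataclass
class Driver:
    id: str
    preferred_trip_type: str        # e.g. "short", "long", "airport" (hypothetical categories)
    end_of_day_location: tuple      # (lat, lon) where the driver plans to finish the day

@dataclass
class Trip:
    id: str
    trip_type: str
    dropoff: tuple                  # (lat, lon) of the drop-off point

def _distance(a: tuple, b: tuple) -> float:
    # Crude planar distance, good enough for an illustration.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match_score(driver: Driver, trip: Trip) -> float:
    # Higher score = better match; matching a driver's preferred trip type is meant
    # to reduce the chance that the driver cancels the assignment.
    preference_bonus = 1.0 if trip.trip_type == driver.preferred_trip_type else 0.0
    # Trips ending near where the driver wants to finish the day are favoured.
    proximity_bonus = 1.0 / (1.0 + _distance(trip.dropoff, driver.end_of_day_location))
    return preference_bonus + proximity_bonus

def allocate(trip: Trip, available_drivers: list) -> Driver:
    # "Human-out-of-the-loop": the best-scoring driver is assigned automatically,
    # with no human review, on the reasoning that a cancelled trip causes little harm.
    return max(available_drivers, key=lambda d: match_score(d, trip))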

The approach contrasts with those taken by DBS and MSD, which keep humans in the loop in their AI efforts to counter money laundering and to flag employees at the highest risk of quitting, respectively.

DBS, the largest bank in South-east Asia, has automated its money laundering detection system, but it also involves human supervisors when necessary. The system first flags suspicious transactions. An AI system then rates the likelihood that the flagged activities are criminal by analysing historical trends. The bank's human supervisors need to review only the cases with high risk ratings.
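
A minimal sketch of this kind of human-in-the-loop triage is shown below. The threshold, transaction fields and the scikit-learn-style model interface are assumptions made for illustration, not details of DBS's actual system.

HIGH_RISK_THRESHOLD = 0.8   # assumed cut-off above which a human supervisor must review

def rule_based_flag(txn: dict) -> bool:
    # Step 1: simple rules surface suspicious transactions (rules are purely illustrative).
    return txn["amount"] > 50_000 or txn["counterparty_country"] in {"X", "Y"}

def risk_score(txn: dict, model) -> float:
    # Step 2: a model trained on historical cases rates how likely the flagged activity is criminal.
    # Assumes a scikit-learn-style classifier exposing predict_proba().
    return model.predict_proba([txn["features"]])[0][1]

def triage(transactions, model):
    # Step 3: only high-risk cases are routed to human supervisors;
    # the rest stay in automated monitoring, keeping reviewer workload manageable.
    for txn in transactions:
        if not rule_based_flag(txn):
            continue
        score = risk_score(txn, model)
        if score >= HIGH_RISK_THRESHOLD:
            yield ("human_review", txn, score)
        else:
            yield ("auto_monitor", txn, score)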

Human involvement satisfies both the AI framework and the Monetary Authority of Singapore's (MAS) requirement for accountable decision-making.

Similarly, MSD requires human oversight in human resource-related decisions, even though it uses an AI algorithm to predict which employee is most likely to quit.

The pharmaceutical firm recognises that managing attrition risk is sensitive. Even though its AI system analyses employee data such as work tenure and performance ratings to make predictions, the system is not given a free hand to act on the predictions.

It recognises the risk of discrimination and potential for a huge backlash if biased or inaccurate data used in the prediction leads to the unfair and wrongful treatment of employees, such as the withholding of benefits.
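
A minimal sketch of how such predictions can be kept advisory, with the final decision left to a person, follows. The field names, the 0.7 threshold and the model interface are illustrative assumptions; MSD's system is not public.

def attrition_risk(employee: dict, model) -> float:
    # The model looks at features such as tenure and performance ratings, as described above.
    # Assumes a scikit-learn-style classifier; field names are invented for this sketch.
    features = [employee["tenure_years"], employee["performance_rating"]]
    return model.predict_proba([features])[0][1]

def recommend(employee: dict, model) -> dict:
    # The system only surfaces a recommendation; it never withholds benefits or takes
    # any action on its own. A manager must review the flag and make the final call.
    risk = attrition_risk(employee, model)
    return {
        "employee_id": employee["id"],
        "attrition_risk": risk,
        "action": "flag_for_manager_review" if risk > 0.7 else "no_action",
        "decided_by": "human",   # the final decision always rests with a person
    }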

The harm that AI can inflict on its human creators is no longer confined to the imagination of robot-apocalypse fiction writers, as recent AI episodes show.

Take Microsoft's 2016 chatbot Tay, which learnt profanities in less than 24 hours from the Twitter community and started spewing offensive racist remarks. The machine has since been taken down.

Similarly in 2018, e-commerce giant Amazon shut down its AI recruitment engine after discovering it had discriminated against women.

In Amazon's case, its AI system used all the job applications the firm had received over a 10-year period to learn how to spot the best candidates. As in many technology firms, women make up a small proportion of Amazon's workforce. The algorithm picked that up and decided that male dominance was a success factor.
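
The mechanism is easy to reproduce in a toy example. The numbers below are invented and the scoring rule is deliberately naive; the point is only that a model trained on skewed historical outcomes will reproduce the skew.

from collections import Counter

# Invented 10-year history in which far more male applicants were hired.
history = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 120
    + [("female", "hired")] * 10 + [("female", "rejected")] * 40
)

hire_rate = {}
for gender in ("male", "female"):
    outcomes = Counter(outcome for g, outcome in history if g == gender)
    hire_rate[gender] = outcomes["hired"] / (outcomes["hired"] + outcomes["rejected"])

# A naive model that scores candidates by the historical hire rate of people like them
# would systematically rank male candidates higher, reproducing the bias in the data.
print(hire_rate)   # {'male': 0.4, 'female': 0.2}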

As Ms Kathy Baxter, architect of ethical AI practice at software firm Salesforce, put it: "AI can be both a blessing (and) a curse."

That AI wields a double-edged sword is one of the most difficult conundrums for humans in the current digital age.

These issues have contributed to heightened discourse in AI ethics and governance over the last year.

Minister for Communications and Information S. Iswaran, in announcing the updated framework in Davos two weeks ago, said: "There are concerns about how (AI) will be used, and whether people can have trust in AI when it is used."

LACKING BITE

The framework, for all that it aims to achieve, lacks authority and bite in its current form.

First, it could be stricter on ensuring that the positive examples it cites are able to stand up to scrutiny, even if adhering to the guidelines is entirely voluntary.

Specifically, companies could be more transparent about the data that goes into their algorithms and how AI decisions are derived, especially if they have completely outsourced decision-making to AI.

Currently, the framework allows companies to provide a narrative that suits their agenda.

Take the Grab use case. The framework document does not shed any light on what other data goes into its algorithm. For instance, does Grab consider consumers' feedback on bad routes, lousy share-ride matches and driver problems?

Also, how are dynamic prices and surge fares set? Does Grab practise price discrimination based on data from users' devices or their past records in accepting higher prices? How do consumers know if they are being taken for a ride if they do not know what others are paying?

When asked, Grab declined to provide more clarity to The Straits Times.

In fact, being open about what data goes into an algorithm need not compromise a company's competitive advantage. Experts have suggested it is one area where companies can afford to be more open without spilling their secret sauce, source code or intellectual property.

Another way to be transparent without losing one's trade secrets is to reveal if an algorithm can generate consistent results across a diverse range of people, and disclose the margin of error.
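
What such a disclosure could look like in practice is sketched below: per-group accuracy with a margin of error. The record schema and the 95 per cent normal-approximation interval are illustrative choices, not a prescribed method.

from collections import defaultdict
from math import sqrt

def per_group_report(records):
    # records: an iterable of dicts with "group", "prediction" and "actual" keys (assumed schema).
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["actual"])
    report = {}
    for group, n in totals.items():
        accuracy = hits[group] / n
        # 95% margin of error via the normal approximation to the binomial.
        margin = 1.96 * sqrt(accuracy * (1 - accuracy) / n)
        report[group] = {"accuracy": round(accuracy, 3), "margin_of_error": round(margin, 3), "n": n}
    return report

# Publishing this kind of per-group table reveals nothing about source code or training methods,
# yet shows whether results are consistent across a diverse range of people.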

Said Ms Baxter: "Today, people expect to know what's in their food or the medication they take, and companies expect to know the details of the electrical components they purchase or the ingredients in the chemicals they use.

"When AI becomes more common, businesses, governments and the society will have similar expectations."

There are exceptions, however.

For instance, DBS has declined to share how its AI system spots money laundering. It does not want to give crooks the tools to reverse engineer its process - a valid reason, and an outcome that the MAS requires.

MORE VOICES

To improve consumers' representation in the rapidly developing AI field, the Singapore AI framework could also take in more voices from academia and non-profit organisations.

To be sure, most of the 38 organisations that helped to shape the second edition of the AI framework are commercially driven. They include big tech firms Apple, Google, Facebook, IBM and Microsoft as well as banks, insurance firms, pharmaceutical firms and consultancy firms.

Including more voices from non-profit organisations and academics will offer a variety of perspectives, including those from consumers.

Noting the importance of diversity in opinion, Ms Baxter said: "Having different people coming in with different expertise, different lenses to ask questions... is really important... to identify what are the assumptions that everybody is making in a homogeneous (setting)."

DATA UP FOR GRABS

Including more voices from non-commercially driven entities can lead to more robust discussion on data ownership, which is lacking in the framework.

One idea bandied about in the academic community as a solution to the AI ethical conundrum is to allow consumers to own, and thus price and sell, their personal data.

Personal data is said to be the oil of the 21st century and is essential for companies involved in AI development.

"AI implementers will be motivated to be more accountable and transparent about the mechanics behind their algorithms to obtain the data they need from consumers," said Assistant Professor Bryan Low from the National University of Singapore's department of computer science.

Greater clarity on data ownership will also allow any wealth gained from AI use to be shared. After all, companies benefit commercially from using a massive amount of consumer data - including their health, Internet usage, facial features, voice patterns and location information - to train AI systems.

The European Union's General Data Protection Regulation - which gives EU residents the right to know what data is being kept about them and to request that it be deleted - provides latitude for the pricing of personal data.

If every EU resident asks for all their data to be erased, companies will need to find new ways to collect data, or even pay consumers directly for it instead of paying in kind with their free services.

The downside is that those who can afford to keep their privacy will keep it, while those who cannot will sell it. The result is a skewed data set.

TO LEGISLATE OR NOT?

There has also been discussion on whether new laws should be introduced to curb the irresponsible use of AI, rather than rely on voluntary ethical frameworks.

Governments and legislators are reluctant to pass laws at this point in time to avoid stifling the development of AI.

And while the use of data without consent has come into the spotlight following the 2018 Facebook-Cambridge Analytica scandal - where millions of people's data was used for political advertising purposes - there has not been a comparable scandal in the field of AI to attract regulatory and public ire.

"Legislation tends to be more reactive when there is no former precedence," said Prof Low.

Just as big tech firms had enjoyed unbridled growth for more than a decade before they came under tighter regulations on fake news and personal data collection and use, AI deployments will likely be put on a long leash for a while.

Even so, there is still plenty of room for companies to improve the way they account for their AI use. It starts with engaging those most affected - consumers and employees.

 

ST ILLUSTRATION: MANNY FRANCISCO

Source: Straits Times © Singapore Press Holdings Ltd. Permission required for reproduction.
