Getting ahead of ourselves, and our crimes



Source: Business Times
Article Date: 14 Mar 2020
Author: David Hardoon

How can we mitigate the potential risks to privacy and choice arising from AI and pre-crime tech?

Advancements in artificial intelligence (AI), combined with the increasing availability of behavioural data, have made pre-crime - the ability to predict a future threat or non-compliance event - a possibility. You may recall the movie Minority Report; today, pre-crime is no longer relegated to a remote possibility within science fiction but is an actively pursued ambition.

What would the unlocking of such capabilities truly mean? Would it be an Orwellian social system or a true vanguard of safety? Given the broad range of possibilities, it is important for there to be a clear agenda on how such insight would be used in order to mitigate potential risks arising from pre-crime tech.

There are manifold reasons for a future threat or non-compliance event - for example, unintentional error, deliberate intent, general non-compliance and so forth. Whatever the reason, such events are detected primarily through investigation or audit.

Investigation is the process of observation or study by close examination and systematic inquiry. An audit is an official, systematic and independent inspection of an organisation's or individual's data, statements, records and accounts for a stated purpose. (The word "audit" comes from the Latin auditus, meaning "hearing" - a reference to audits as they originally were: an oral account of events given for review and validation.)

Known versus unknown

Before exploring the application of AI in more depth, it is important to understand current operations. There are two scenarios we should turn our minds to: first, where an event is known (or suspected) to have happened; second, where an event is not known to have happened. Despite the perceived differences between the two, there is an underlying commonality. Whether or not the event is known before an investigation or audit commences, the event (or the intent for the event) must happen before the review or analysis takes place. If no such event happens, there is nothing to be detected.

Here we scratch the surface of two fundamental issues faced today.

First, successful detection depends on the availability and extent of the information reviewed. In practice, it is impossible to review all available information. The challenge lies not only in reviewing information (or data) efficiently but also in selecting the data for review - that is, the data most likely to result in event detection.

Second, the current process is not designed to deal with information about an event that has not occurred and may never happen. Where information about a predicted event exists, how should it be incorporated into the current process, given that existing operations are evidence-driven and, at the time of prediction, no fault yet exists in relation to the predicted event?

Operationalisation of AI

There are a number of ways in which AI can be operationalised. The most common is the application of business rules to large amounts of data; this is analogous to the digitisation of expert knowledge, allowing the processing of larger volumes of data than is currently possible. This type of application may be more automation than AI. Nonetheless, it is valuable for institutionalising expert knowledge and expanding information management.
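As a rough illustration of what digitised business rules look like in practice, the sketch below screens transaction records against a handful of expert-written rules. All field names, thresholds and country codes are invented for the example; they are not drawn from any real system.

```python
# Hypothetical sketch: digitised expert rules applied to transaction records.
# Field names, thresholds and country codes are illustrative assumptions.

def flag_transaction(txn):
    """Return the list of rule labels triggered by one transaction record."""
    flags = []
    if txn["amount"] > 10_000:
        flags.append("large-amount")
    if txn["country"] in {"XX", "YY"}:          # placeholder high-risk codes
        flags.append("high-risk-jurisdiction")
    if txn["hour"] < 6 or txn["hour"] > 22:     # outside business hours
        flags.append("odd-hours")
    return flags

transactions = [
    {"amount": 15_000, "country": "SG", "hour": 14},
    {"amount": 800, "country": "XX", "hour": 23},
]
for t in transactions:
    print(flag_transaction(t))
```

The value lies less in any single rule than in applying the whole codified rule set consistently across far more records than human experts could review.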

The core strength and value of AI lies in its ability to derive previously unknown insight directly from data. These data-driven models are developed to complement domain-driven models - that is, they statistically identify the attributes of the data (for example, age, address, dependents, salary, years employed, training taken, etc) that are correlated with a type of behaviour of interest.
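The simplest version of "statistically identifying correlated attributes" is a correlation coefficient between an attribute and the behaviour of interest. The sketch below computes a Pearson correlation on invented data; the attribute, labels and values are assumptions for illustration only.

```python
# Hypothetical sketch: how strongly does one data attribute correlate
# with an observed behaviour of interest? Data is invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

years_employed = [1, 2, 3, 5, 8, 10]
non_compliance = [1, 1, 1, 0, 0, 0]   # 1 = event observed
print(pearson(years_employed, non_compliance))
```

A strongly negative value here would suggest (in this toy data) that non-compliance is concentrated among newer employees; a real data-driven model would weigh many such attributes jointly rather than one at a time.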

The behaviour can be a well-defined one, for example, a specific type of fraud, or it can be an undefined one (looking for abnormalities).

An example of the latter could be a nurse who is allowed to access patients' data but is abusing the permissions given by checking on patients outside the scope of their responsibility. AI is all about learning behavioural patterns from data and examining whether these patterns have value.
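The nurse example is a case of looking for abnormality rather than a predefined pattern. One minimal way to sketch that idea is to learn a behavioural baseline from peers and flag strong deviations from it; the access counts, names and threshold below are all invented assumptions, and real systems would use far richer features than a single count.

```python
# Hypothetical sketch of learning a behavioural baseline from data:
# flag staff whose record-access volume deviates strongly from their peers.
# Names, counts and the threshold are illustrative assumptions.
from statistics import mean, stdev

def unusual_accessors(access_counts, z_threshold=2.0):
    """Return IDs whose access count lies > z_threshold std devs above the mean."""
    counts = list(access_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return [sid for sid, c in access_counts.items()
            if sigma > 0 and (c - mu) / sigma > z_threshold]

weekly_accesses = {
    "nurse_a": 42, "nurse_b": 39, "nurse_c": 45, "nurse_d": 41, "nurse_e": 38,
    "nurse_f": 44, "nurse_g": 40, "nurse_h": 43, "nurse_i": 37,
    "nurse_j": 160,   # accesses far outside the peer baseline
}
print(unusual_accessors(weekly_accesses))
```

A flag like this is only a prompt for human review - the nurse may have a legitimate reason - which mirrors the article's point that the pattern must first be examined for whether it has value.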

AI can go beyond providing insight into drivers of behaviour. For example, in the area of workforce productivity and effectiveness, AI solutions can aid investigation and audit work by incorporating information such as the number of cases, their complexity and predicted risk, as well as constraints such as the number of available personnel and their individual skill sets, to distribute workload optimally. The power of such systems is their ability to provide a wide range of possible scenarios, as well as the best possible scenario, given the parameters provided.
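The optimisation half of that description can be sketched very simply. The code below balances estimated case effort across officers using a standard greedy heuristic (assign the largest remaining case to the least-loaded officer); the cases, efforts and officer names are invented, and a real system would also model skill sets and risk as the article notes.

```python
# Hypothetical sketch of workload distribution: assign cases to officers so
# total estimated effort is balanced, via a longest-processing-time greedy
# heuristic. Case efforts and officer names are illustrative assumptions.
import heapq

def distribute(cases, officers):
    """cases: {case_id: effort}. Returns {officer: [case_ids]} balancing load."""
    heap = [(0, name, []) for name in officers]   # (current load, officer, cases)
    heapq.heapify(heap)
    # Place the heaviest cases first, each onto the least-loaded officer.
    for case_id, effort in sorted(cases.items(), key=lambda kv: -kv[1]):
        load, name, assigned = heapq.heappop(heap)
        assigned.append(case_id)
        heapq.heappush(heap, (load + effort, name, assigned))
    return {name: assigned for load, name, assigned in heap}

cases = {"c1": 8, "c2": 5, "c3": 4, "c4": 3, "c5": 2}
print(distribute(cases, ["officer_x", "officer_y"]))
```

Running variations of the inputs (more officers, different effort estimates) is what yields the "wide range of possible scenarios" the article describes.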


Finally, in the application of predicting future events, pre-crime should not be perceived as the systematic erosion of one's choice. The envisaged pre-crime process, rather than the AI, would be used as an enhanced deterrent and for encouraging self-regulation - that is, the nudging of an individual not to commit an undesired event that they have been found likely to commit.

How? Imagine receiving an innocuous letter from the tax authority reminding one of the importance of correct tax calculations and submission just before one contemplates tax evasion.

There are many ways in which uncertain future knowledge can, should and is being woven into behavioural nudges. Ironically, whereas most AI applications are about proving the accuracy of the models, here the goal is to leverage AI-derived insight to construct behavioural nudges that actively change the real-world outcome from that which was predicted. We do not want the crime to occur; we want to encourage all to do good, with a safety net that makes things significantly more onerous for those whose intentions remain undeterred by such nudges.

The writer is adviser (Artificial Intelligence) to the Corrupt Practices Investigation Bureau. The views expressed here are his own.

Source: Business Times © Singapore Press Holdings Ltd. Permission required for reproduction.

