Speed trap: Why firms jumping blindly on the AI bandwagon may hit a wall - Opinion
Source: Straits Times
Article Date: 09 Sep 2025
Author: Vikram Khanna
Seduced by the hype, many enterprises have deployed AI prematurely and got it wrong, says the author
There’s a startling disconnect in the world of artificial intelligence (AI). Tech giants such as Amazon, Google, Meta and Microsoft have invested more than US$300 billion (S$385 billion) over the past two years in building the infrastructure that powers AI, such as high-end chips, data centres, cooling systems and fibre networks. For this, they – together with chip designers such as Nvidia and AMD – have been handsomely rewarded by investors.
Amid this frenzy, companies have spent more than US$30 billion on enterprise systems that offer generative AI. But a recent MIT study found that 95 per cent of these end users have earned zero return on their investments. There are plenty of pilot projects, but most never make it to deployment.
Despite being all the rage in boardrooms around the world, AI is not yet delivering the value the hype promises. Even some of the pioneers of AI applications now acknowledge that there is much irrational exuberance. OpenAI co-founder Sam Altman, for instance, has said AI is in a bubble and that some people will lose “a phenomenal amount of money”, although he remains bullish in the long term.
Elusive productivity gains
Productivity gains from AI – supposed to be among its biggest benefits – have also not yet materialised. There is a parallel here with the introduction of personal computing in the early 1980s. The Nobel Prize-winning economist Robert Solow famously quipped in 1987: “You can see the computer age everywhere but in the productivity statistics.” It was not until the mid-1990s that productivity showed any significant improvement.
With hindsight, we know some of the reasons for the long lag. Computer adoption required employees, managers and organisations to learn new systems, workflows and software, and to unlearn legacy processes, which took time. The early technology also had limitations. For instance, the software of the 1980s was rudimentary, confined to specific tasks such as word processing and accounting. The internet and e-mail, which enabled sharing and collaboration, took off only in the 1990s.
The MIT study reveals that AI adoption suffers from similar problems. Many enterprise-grade AI tools are not well integrated into existing workflows. For example, if a company installs AI for invoice processing but does not redesign its workflow for electronic, automated approval of invoices – digital signatures, predefined rules for auto-approval, protocols for routing questionable invoices to humans for checking – the AI cannot take over the process.
Another problem lies in the limitations of the technology. The AI models used by enterprises typically can’t “learn and remember”. Most are trained on historical data up to a certain point, then deployed as fixed models. They don’t update themselves or “remember” new experiences after deployment unless they are retrained, which must be done repeatedly and in controlled ways to ensure they remain trustworthy, dependable and safe. Enterprise-wide “learning and remembering” AI models may yet emerge, but they are not common.
Many employees have responded by ditching the enterprise systems their companies installed and reverting to consumer applications such as ChatGPT, which they were already using on their own. The MIT study gives the example of a corporate lawyer at a mid-sized firm that had spent US$50,000 on a specialised contract analysis tool. The lawyer found that it produced unsatisfactory summaries with limited options for customisation. But by using ChatGPT, she was able to guide the conversation and iterate until she got exactly what she needed.
So, “a US$20-per-month general-purpose tool often outperforms bespoke enterprise systems costing orders of magnitude more”, the study points out. The lawyer did note, though, that while ChatGPT is good for brainstorming and works well for first drafts, its lack of memory and ignorance of context make it unsuitable for sensitive legal contracts. But its advantages over enterprise-wide systems provided by tech giants such as Microsoft, Amazon and Google yield an important insight into how to adopt AI.
Benefits of a bottom-up approach
Some researchers suggest that a bottom-up approach to AI adoption – where employees drive experimentation and discovery using consumer-grade AI tools like ChatGPT, Perplexity or Claude rather than relying on centrally imposed enterprise systems – offers several advantages.
This approach enables employees to integrate AI into their daily workflows and uncovers use cases that top-down systems might miss. It is also more accessible to lower-skilled, less technical employees. As a result, it leads to more buy-in, higher usage of AI, more relevant solutions and less waste on expensive, little-used and poorly targeted enterprise solutions. In short, according to this view, AI adoption might be better driven bottom-up by employees than imposed top-down by senior executives through an enterprise-wide system.
On the other hand, there are some tools that can extend AI’s reach across enterprises. One of them is agentic AI, which can take autonomous actions, from executing transactions to managing workflows. Moreover, unlike most enterprise systems, AI agents have memory and learn from interactions.
But even agentic AI has limitations. It often operates as a “black box”, making decisions that are hard to audit or explain – a problem for companies that need decisions to be traceable. It is also not 100 per cent error-free, and sometimes even small errors can create big financial, legal and reputational risks. And it depends on high-quality, unbiased, up-to-date data, without which it will tend to make unintelligent, biased or erroneous decisions.
AI agents also lack emotional intelligence, empathy and an ethical compass. In sectors that need such qualities, such as healthcare, education and sensitive customer service, agents risk causing dissatisfaction or harm. So companies need to do a lot of preparatory work, set robust guard rails and be careful about which applications they entrust to AI agents.
Getting ‘AI ready’
Like the internet, AI will evolve. But deploying it at scale, as Singapore plans to do, will involve challenges that are not just technological but also organisational. To be “AI ready”, companies have a lot of work to do: choosing use cases, changing workflows, ensuring their data is accurate and complete, getting buy-in not only from senior leadership but also from line managers and staff, and creating “safe zones” or sandboxes where employees can try out AI tools in a contained environment before deploying them. Premature deployment will lead to ineffective, and sometimes dangerous, outcomes.
Governments must also be careful in how they incentivise AI adoption. For instance, rather than providing subsidies for enterprise-wide AI systems at the start, they must first raise AI literacy across the workforce. They must provide grants and subsidies for the piloting of AI in real workflows, through user-friendly tools like ChatGPT, Claude and Perplexity. They must also mandate minimum standards for AI governance covering audits, data privacy, security and explainability.
As with many major business transformation initiatives, so with deploying AI: it is better to do it right than to do it fast.
Vikram Khanna is a former associate editor of The Straits Times who writes on economic affairs.
Source: The Straits Times © SPH Media Limited. Permission required for reproduction.