In the rapidly evolving landscape of AI, Australian organisations stand at a critical juncture.
The potential for significant financial gains is evident, with some reports suggesting that adopting an AI portfolio can deliver more than $100 million in incremental EBITDA. But the path to realising that ROI is fraught with challenges.
As many as 85% of enterprise AI deployments fail to deliver on their promise to the business. This high failure rate, surpassing even the notorious difficulties of past digital transformation efforts, underscores the risks involved.
When AI deployments fail, the impact can be catastrophic. Australia exemplifies these risks, as evidenced by the “Robodebt” scandal, which proved so harmful to Australians that a Royal Commission was convened to investigate it.
While many are excited about the possibilities AI offers, reports show 80% of Australians are deeply concerned about its risks and believe they should be treated as a “global priority.”
Yet despite the risks and social hesitancy, CIOs are throwing money at AI projects — KPMG research showed more than half of Australian companies are putting 10-20% of their budget into AI.
This only increases the pressure on the CIO and IT team to ensure AI projects demonstrate value. For AI to become a long-term investment rather than a lingering risk concern, organisations must be able to prove its worth: Gartner research shows that estimating and demonstrating business value is the single greatest barrier to AI projects.
Nate Suda, Gartner’s senior director analyst in Finance Technology, Value and Risk, told TechRepublic about the challenges many organisations face in articulating the value of AI, from managing costs to harvesting productivity benefits, and the strategic approaches necessary to ensure AI investments translate into tangible business value.
Managing costs is a primary hurdle in AI deployments. Unlike traditional search, where per-query expenses are minimal, generative AI incurs substantial costs due to its interactive nature.
Users often engage in multiple exchanges to refine responses, which multiplies costs. Each interaction, measured in tokens, adds to the expense, and the bill can skyrocket if user behaviour deviates from initial assumptions.
As Suda said, “One of the biggest variables in cost is the human interaction. With generative AI, you don’t just type in your question and get a perfect answer. You might need several iterations, and you’re being charged for every word in your question and response. If your cost model assumes a single interaction and users end up having multiple, your expenses can multiply dramatically.”
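The multiplication effect Suda describes can be sketched with a simple cost model. All figures here — user counts, token volumes, and the per-token price — are illustrative assumptions, not vendor pricing:

```python
# Hypothetical cost model showing how multi-turn interactions
# multiply generative AI spend. Every number below is an assumption
# for illustration, not a real vendor figure.

def monthly_cost(users, queries_per_user, turns_per_query,
                 tokens_per_turn, price_per_1k_tokens):
    """Estimate monthly API spend for a chat-style deployment."""
    total_tokens = (users * queries_per_user
                    * turns_per_query * tokens_per_turn)
    return total_tokens / 1000 * price_per_1k_tokens

# The budget assumed a single exchange per query...
planned = monthly_cost(users=500, queries_per_user=40,
                       turns_per_query=1, tokens_per_turn=800,
                       price_per_1k_tokens=0.03)

# ...but users actually iterate four times to refine each answer.
actual = monthly_cost(users=500, queries_per_user=40,
                      turns_per_query=4, tokens_per_turn=800,
                      price_per_1k_tokens=0.03)

print(f"planned: ${planned:,.2f}  actual: ${actual:,.2f}")
```

The only variable that changed is `turns_per_query`, yet spend quadruples — exactly the gap between a cost model that assumes one interaction and the conversational reality.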
To mitigate this risk, organisations are adopting a “slow scale-up” strategy. Instead of a rapid, large-scale rollout, they deploy the AI to a limited number of users first, then gradually expand.
This iterative approach allows companies to observe how ambitious AI projects perform and adjust based on actual usage patterns, so they can model costs more accurately and avoid financial surprises.
“The best organisations are scaling up very slowly,” Suda noted. “They might start with 10 users in the first month, then 20 in the second month, and so on. This method helps them understand real usage and costs in a live environment.”
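The staged rollout Suda describes lends itself to a simple projection: observe cost per user in the pilot cohort, then extrapolate before admitting the next one. The cohort sizes follow his example; the per-user cost is a hypothetical pilot observation:

```python
# Illustrative "slow scale-up" projection: extrapolate each month's
# spend from the cost per user observed in the live pilot.
# Cohort sizes echo the 10-then-20 pattern; the cost figure is assumed.

def scale_up_projection(monthly_users, observed_cost_per_user):
    """Project monthly spend for each stage of a gradual rollout."""
    return [round(n * observed_cost_per_user, 2) for n in monthly_users]

cohorts = [10, 20, 40, 80]   # users per month, expanding cautiously
cost_per_user = 3.84          # observed in the pilot month (assumed)

projection = scale_up_projection(cohorts, cost_per_user)
print(projection)
```

If the observed cost per user drifts upward between stages, that is the signal to pause the rollout and re-model usage before adding the next cohort, rather than discovering the overrun at full scale.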
While AI promises to enhance productivity, translating these enhancements into measurable financial benefits is complex. Suda said that simply saving time, as demonstrated by tools like Microsoft Copilot, does not inherently equate to revenue generation or cost reduction.
“You need to be really clear what productivity means and how you’re harvesting that benefit into value, whether it’s revenue generation or cost reduction,” Suda said.
He also emphasised the need to distinguish between benefits and value. Benefits such as improved speed, better customer experience, and increased productivity are significant, but they only become valuable when they contribute to the bottom line.
For instance, generative AI might shorten the time a sequence of professional services takes, but unless that efficiency translates into higher revenue or reduced costs, it becomes another example of AI failing to deliver on its promised value.
Another crucial point that Suda noted is the risk of cost overruns due to unanticipated user behaviour. If an AI system proves highly popular and its usage exceeds expectations, the resulting costs can be astronomical. This scenario highlights the importance of meticulous planning and real-time monitoring of AI deployments to manage and predict expenses effectively.
“If users love the AI and use it extensively, your costs can go through the roof,” Suda said. “This is why understanding and modelling user behaviour is so critical.”
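The real-time monitoring Suda calls for can be as simple as a budget guardrail that flags runaway usage before the month closes. This is a minimal sketch, assuming a single monthly budget figure and an alert threshold of the implementer's choosing:

```python
# Minimal spend guardrail (illustrative): flag a live AI deployment
# whose cumulative spend is approaching or exceeding its monthly budget.

def check_budget(spend_so_far, monthly_budget, alert_threshold=0.8):
    """Return "ok", "alert", or "halt" based on cumulative spend."""
    if spend_so_far >= monthly_budget:
        return "halt"   # usage has exceeded the budget outright
    if spend_so_far >= alert_threshold * monthly_budget:
        return "alert"  # popularity is outpacing the cost model
    return "ok"

print(check_budget(850, 1000))
```

An "alert" at 80% of budget mid-month is the early warning that user enthusiasm — the very success Suda warns about — is about to send costs through the roof.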
Gartner has developed a three-tier framework for explaining how AI can return value while balancing the associated risk. Called “Defend, Extend, and Upend,” each “level” of AI deployment offers different potential risks and benefits.
Much like with digital transformation, being too ambitious with AI from the outset is likely to result in cost overruns and slow ROI, leading to board and executive frustration, if not abandonment of the project.
CIOs should instead adopt a cautious, measured approach. As Suda advised, companies should ensure that the solutions being deployed are scalable and deliver an ROI that can be articulated early.