
Four trends that will shape AI in 2024 and beyond


Soon-Ee Cheah

Jan 15, 2024


There is no shortage of hype surrounding artificial intelligence (AI). While the large language models that underpin these new generative AI tools have existed for some time, the technology has largely remained in the hands of engineers and scientists, peripheral to the public’s awareness.

Over the past 12 months, however, we crossed an important threshold: for the first time in human history, anyone with a browser and a keyboard could tap into AI’s tremendous technological power. AI-powered tools like OpenAI’s ChatGPT and image generators like DALL-E have well and truly captured the public’s attention.

We’ve only just woken up to this new world of AI, but the implications for businesses — and the economy and society at large — are hard to overstate. In 2024 and beyond, four ‘C’ dynamics will underpin the development and impact of this technology: cost, consolidation, control and creation.

Trend #1: The cost of AI will need to be managed 

The next big thing for AI will be cost — or rather, how to contain the running cost of AI. AI might often seem like magic, but every command requires significant computational power to answer, and much of that cost has not yet been passed on to the user. Right now, the use of generative AI tools is heavily subsidised by companies like Microsoft and Google. These companies are effectively paying us to play around with the tools and are not making money from AI (yet).

According to research firm SemiAnalysis, OpenAI might be paying as much as US$700,000 a day to run ChatGPT, due to the high computational power, energy, maintenance and ongoing development required to keep the service operational at scale. Analysts at Morgan Stanley estimate that if Google served Bard results for 50 per cent of its queries, it would add US$6 billion in incremental costs.

On top of this, 2023 set a record for investment in generative AI startups, with equity funding exceeding US$14 billion across 86 deals as of the second quarter of the year. Much of this spending was allocated to the infrastructure needed to build, train and run AI models. In 2024, there will be pressure on companies and startups to make AI a self-funding endeavour where the economics and commercials add up. Currently, running a large language model often costs more than the human labour it is meant to save.

Over the next 12 months, many companies will invest in distillation: a process that makes large language models more efficient without significantly compromising their performance. We could see smaller, more efficient versions of generative AI tools deployed at a lower cost to a wider user base. The race is well and truly on to make AI economically viable and sustainable, and potentially profitable.

Trend #2: The consolidation of AI could impact accessibility 

Up until now, AI has been accessible to most companies. Many can use branches of machine learning to build models from the ground up using their own data. For large language models, that’s not the case. Because generative AI is such a capital-intensive endeavour, building a large language model requires the deep pockets of billion-dollar companies and access to data centres.

While the availability of popular, publicly accessible generative AI tools has meant that nearly everyone can use them, only a few companies can actually own them. This dynamic is not foreign to economies — everyone can drive a car but very few people can build one — but it could be the first time we’re seeing it play out in the knowledge domain. In 2024, a wave of consolidation in the AI landscape could impact its development and accessibility for all.

This is likely to take the form of mergers and acquisitions, as AI startups and companies seek more capital to run expensive servers and access the specialised hardware these AI tools require. Already, we’ve seen Google and Amazon invest billions in Anthropic, Microsoft invest in OpenAI and become the exclusive provider of its computing power, and Oracle buy a stake in Cohere.

In 2024, market consolidation will play into the pricing of AI tools as these big companies start to explore how they can recover costs and monetise the technology. While consolidation could lead to more advanced and efficient AI solutions, fewer, larger players in the industry could raise barriers to entry for small businesses and startups, effectively pricing them out of the market.

Trend #3: Control will lead to questions on accountability

Over the past year, we’ve seen governments around the world grapple with how to control what US President Joe Biden has called the “most consequential technology of our time”. The US is now leading the way in regulating AI with its wide-ranging and ambitious executive order on AI, building on the work of the European Parliament, which passed the AI Act in June 2023, due to come into effect in 2025.

As concerns over privacy protections and the misuse of large-scale AI systems continue to rise, conversations on how best to control AI will gather momentum over the coming year. However, it is unlikely we’ll see legislation in 2024. Why? Fundamentally, the challenge of regulating AI is the same as trying to regulate a hammer: they are both just tools, one is maths and the other is metal. 

While this will play out in parliaments and courtrooms, businesses and consumers are also going to have to stare down the notion of personal accountability for the usage of these tools. Accountability will sit either in the usage of the tools or in their development. For example, in the medical field, accountability sits with the designer: if a medical device fails, the device manufacturer is on the hook. In the case of the hammer, a builder or DIYer is liable for its misuse.

In the accounting sense, accountability sits with the accountant providing the service, because as a society we’ve decided that individuals shouldn’t be required to understand the nuance and therefore can’t be expected to make reasoned judgements. The control debate will rage on in 2024, as many companies campaign to have AI regulated at the level of the consumer. Whether they succeed remains to be seen.

Trend #4: Copyright, creativity and commercially safe AI

There are many copyright implications surrounding large language models, and many of these questions were raised by academics, journalists and content creators in 2023. When you use ChatGPT to produce output, do you own the copyright in that output? Do large language models infringe other authors’ copyright through the data used to train them?

Lawmakers are still working through these issues on outputs and inputs, and in 2024 we can expect more AI providers to differentiate themselves on data lineage. Already, companies like Adobe and Getty Images have launched ‘commercially safe’ AI tools that generate images trained solely on their vast creative libraries, with full indemnification for commercial use.

Issues of authorship and the eligibility of AI-generated content for patent protection could lead to piecemeal adoption across industries, depending on their intellectual property sensitivities. As we unpack how AI works, it will also raise questions about who builds AI and how representative they are of the values we hold as a society, particularly as generative AI gets a stranglehold on public knowledge.

Unless there is a scientific breakthrough, these large language models will continue to require human supervision and intervention over the next 12 months. Businesses that understand and adapt to these industry dynamics will be best placed to stay ahead of the curve, remain competitive, innovate and grow.
