The advent of large language models (LLMs) like ChatGPT has redefined the boundaries of what’s possible with AI.
In just over a year since their mainstream debut, these models have catapulted from novelty to linchpin, promising to reshape industries through automation, predictive analytics, and personalised experiences.
LLMs signify the dawn of the AI era in computing, following prior eras of mainframes, PCs, the internet, mobile, and cloud/big data. While AI has been in development for decades, recent advances in compute, data, and algorithms have unleashed its practical potential.
Looking ahead, LLM pioneers like OpenAI and Anthropic are poised to lead the pack. But questions remain around consolidation versus diversification, barriers to entry, and commoditisation.
The battle for dominance among the giants of AI—each wielding strategic advantages in computing power, data access, and innovation—paints a complex picture of what lies ahead.
This article explores the transformative potential of LLMs, the strategic advantages held by the ‘GLACOMA’ frontrunners, and the uncertainties that still cloud their future in an increasingly integrated world.
AI spells the new era of computing
There have been several computing eras over the years, each having a material impact on business and society. The latest is the AI era, which could be the most impactful yet.
A quick reminder of each computing era.
- Mainframe era (1950s – 1960s): The dominance of large, room-sized computers that were extremely expensive.
- Minicomputer era (1960s – 1970s): A significant reduction in size and cost compared to mainframes.
- Personal computer era (1980s – late 1990s): The introduction of the desktop computer for individual use democratised computing.
- Internet era (late 1990s – 2000s): The widespread adoption of the internet created a globally interconnected world.
- Mobile era (2010s): Smartphones became the primary computing device for a vast portion of the global population.
- Cloud and big data era (2010s – present): The shift towards cloud-based services, offering scalable and on-demand computing resources over the internet.
- Artificial intelligence era (2020s – present): The integration of AI is transforming industries through automation, analytics and personalisation.
While AI and ML technologies have been in development for decades, their practical applications have surged in recent years due to improvements in computational power, data availability, and algorithmic advances.
The transformative potential of AI
It would be futile to attempt to list all the ways AI could potentially transform business and society in this article. Regardless of the industry you work in – from marketing to manufacturing – people are working on AI-related tools and services within it.
Depending on which large investment or consultancy firm you ask, AI will add anywhere from $5 trillion to $15 trillion annually to the global economy. At the same time, it could automate numerous white-collar professions which will impact the number of human jobs required.
It’s a transformative road ahead and one that will be driven by either one or multiple foundation models.
The big foundation models and the game of emperors
It appears we’re heading towards a future with either a select group of foundation models or one LLM to rule them all. Which outcome prevails is currently undecided.
The advantage these big foundation models have over smaller models is their size and the fact they’re backed by tech giants with huge pools of computing power, expertise and several other strategic advantages that are crucial for success in this space. Before we get into that, let’s briefly cover the models.
Here are the GLACOMA.
- G – Grok: Developed by Elon Musk’s xAI with access to X/Twitter data.
- L – Llama: An open-source LLM by Meta AI, trained on Facebook and Instagram data among other content.
- A – Anthropic/Claude: A series of LLMs developed by Anthropic, a company founded by ex-employees of OpenAI.
- C – Cohere: An enterprise LLM created for corporates with privacy and security in mind.
- O – OpenAI/Microsoft: The maker of ChatGPT, the most well-known and, as of writing, best-performing LLM on the market, backed by Microsoft.
- M – Mistral: An open-source model developer based in France, founded by ex-DeepMind and Meta AI employees.
- A – Alphabet/Gemini: The LLM developed by Google DeepMind.
Competitive advantages of leading LLMs
The success of large language models depends on several interlocking advantages held by the current frontrunners, making it almost impossible for new entrants to join the market.
- Network effects – More users means more data to improve the models through reinforcement learning and direct preference optimisation.
- Advanced training techniques – Including Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimisation (DPO), which refine models to align with human values and language nuance.
- Investments in computational resources – Leading companies are acquiring GPUs and expanding data centres to support demanding LLM training.
- Leveraging AI scaling laws – The predictable power-law relationship between compute, data and performance benefits firms with extensive resources.
- Strategic allocation of funds – Major players can invest in top AI talent, proprietary datasets, and infrastructure for data collection and privacy compliance.
- Technological superiority – State-of-the-art natural language capabilities, scalability, and training efficiency.
- Ethical AI practices – Bias mitigation, transparency, privacy preservation, and responsible open-source contribution.
- Partnerships and platform integration – Adapting to market demands by integrating into existing services.
- Navigating regulatory landscapes – Ensuring compliance and fostering positive social impact.
Together these interconnected elements create high barriers for new entrants, cementing the frontrunners’ leadership as we enter the AI era.
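To make the scaling-laws point concrete, here is a minimal sketch of how loss is often modelled as a power law of training compute. The coefficients below are hypothetical, chosen purely for illustration rather than taken from any published fit:

```python
def scaling_law_loss(compute_flops: float, a: float = 1e3, alpha: float = 0.05) -> float:
    """Model loss as a power law of training compute: L(C) = a * C**(-alpha).

    The coefficients `a` and `alpha` are illustrative placeholders, not
    values fitted to any real model.
    """
    return a * compute_flops ** -alpha

# Each order of magnitude of extra compute buys a predictable but
# diminishing reduction in loss, which is why well-funded labs keep
# buying GPUs and expanding data centres.
loss_small = scaling_law_loss(1e21)   # smaller training budget
loss_large = scaling_law_loss(1e23)   # 100x more compute
print(f"loss at 1e21 FLOPs: {loss_small:.2f}")
print(f"loss at 1e23 FLOPs: {loss_large:.2f}")
```

The key property is that the improvement is smooth and predictable, so firms with the deepest pockets can plan multi-year compute investments with some confidence about the performance they will buy.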
Will LLMs become commoditised?
Despite the rising barriers to entry, will we reach a point where LLMs become commoditised, with the dominant models widely accessible and less differentiated?
A major factor driving commoditisation is open-source models and the widespread availability of data and training techniques. As these elements become more accessible, the uniqueness and exclusivity of AI technologies may be reduced.
With AI capabilities becoming a common feature, LLMs might need to find innovative ways to differentiate themselves, including leveraging unique datasets, applying AI innovatively, or integrating AI with other technologies to solve complex problems.
Winner take all or shared rewards?
Investor Brad Gerstner made an excellent point about the early days of the internet. You could have been right that the internet was going to change the world, and that search would be the gateway to the internet, but if you invested too early you would have missed out on 98% of the gains.
In the late 1990s and early 2000s, search engines like AltaVista, Lycos, and Yahoo were pioneers, proving the concept that search would become the gateway to the internet.
However, it wasn’t until Google arrived with a superior search algorithm and a novel business model centred around search-based advertising that the full potential of search as a commercial and technological powerhouse was realised.
Investors who jumped too early into the search engine market without the foresight of Google’s emergence might have missed out on the seismic shift Google represented.
The same principle can be applied to LLMs. It’s still too early to tell which LLM – if any – will be the dominant player in the future. There are too many variables we don’t yet fully understand.
Will it be the GLACOMA LLMs? Very likely. Barring the unlikely event that a new LLM appears out of nowhere with some revolutionary patent-pending algorithm, the GLACOMA companies will be the dominant players in the future.
Will we live in a multi-LLM world or will the Matthew Effect ensure that only one LLM survives much like Google in search? Again, this is the unknown.
It will be better for society as a whole if people have multiple LLMs to choose from and we are not dependent on one company, one algorithm and one central point of failure for everything.
The landscape of language models is at a fascinating juncture. The remarkable advancements and widespread adoption of LLMs, as epitomised by ChatGPT and its contemporaries, have not only showcased the potential of AI to revolutionise industries but also presented unique investment opportunities.
As the GLACOMA companies at the forefront of this technological wave continue to push the boundaries, the key question remains: which of these entities will harness the most value from the AI revolution?
In this context, a ‘wait and see’ approach signifies strategic patience, complemented by diligent research and continuous evaluation of the market.
As LLMs continue advancing, key developments to track in the year ahead include:
- Emergence of new models – Will any undisclosed models backed by large firms debut to shake up the landscape?
- Open-source contributions – Level of commitment to open-source could signal commoditisation or continued concentration among leaders.
- Regulatory interventions – Governance frameworks around issues like bias, privacy, and automation’s impact on jobs.
- LLM specialisation – Potential splintering into models optimised for specific domains like medicine, law, engineering etc.
- Integration traction – Adoption by businesses and consumers will demonstrate practical value.
- Backlash – Concerns around societal risks may impact public perception and regulatory stances.
By closely monitoring these signals in 2024, we’ll gain greater clarity on whether an oligopoly consolidation around a few LLMs occurs, or a more diversified ecosystem emerges to share the rewards of the AI revolution.