AI revolution or dystopia?
Does AI really promise a dystopian future? I'm convinced that AI as a personal assistant is there for everyone to excel at what they are already good at, or would like to be good at. I remember a documentary by an Australian journalist reporting on the Uighurs of Xinjiang. To gain an insight into how a middle-aged potter's world was changing, he climbed up into the loft of an earth building. The potter sat at his wheel, creating pots to sell for a pittance to passing tourists, his feet treading a polished path and his hands transforming the clay with practiced patterns that seemed effortless. The room had no windows and was sooty from a fireplace in the corner. The ladder up to the loft was rickety and unsafe.
As he sat in his Chinese-made T-shirt, he lamented the loss of his livelihood, as his children had neither the inclination nor a future in his trade. The pompous reporter tried to lead him to blame the technological changes that were occurring in the lives of these people, with a view to incriminating the Chinese government. But the man was philosophical, completely aware that change was upon him and his children and that their future would not be making pots.
The Employment Fear: Real, But Not the Whole Story
Loss of employment is a valid concern. Historically, every technological leap — from the spinning wheel to computers — has disrupted jobs. AI will automate some tasks, especially repetitive cognitive work (e.g. data entry, scheduling, even basic writing or coding), and this will affect certain job categories. The danger is not just job loss, but job displacement without retraining or transition paths and the loss of certain skills that have evolved over centuries.
But it’s also worth noting that most fears focus on what might be lost, not what might be gained. New roles, industries, and even whole sectors often emerge alongside disruptive tech.
AI as a multiplier, not a replacement
But there is an alternative, more productive framing: we can already see that the ideal model for AI is as an assistant.
* Doctors + AI means faster diagnoses, fewer missed patterns.
* Teachers + AI means more personalized support for students.
* Writers, analysts, designers + AI means acceleration, more space for ideation, refinement.
This requires smart employers and thoughtful policy. Certainly, these are rarely in oversupply. If leaders treat AI purely as a way to slash headcount, the gains become short-term profits at the expense of long-term value, morale, and innovation.
What's needed to side-step dystopia?
Several things are needed:
* Employers who augment human talent with AI, rather than replace it, will build more adaptive and resilient organisations.
* Governments and institutions need to invest seriously in helping people reskill, especially in industries where automation is inevitable.
* AI needs to be implemented responsibly: transparency, bias-checking, and accountability should be part of any AI deployment.
* Roles should be re-thought: less admin, more creativity and judgment.

The real potential is in giving humans more time to do the things only humans can do.
A concrete example
Let’s take healthcare as a case study - it's one of the most illustrative examples of how the “AI as assistant, not replacement” model can work. Healthcare systems are close to collapse in countries like the UK, an increasing burden in Australia, and entirely unaffordable in the US.
In the current model, doctors juggle massive workloads: diagnosis, patient communication, paperwork, research, administrative tasks. Errors from fatigue, missed connections in data, or limited time are a real risk.
AI-augmented tools are now assisting, not replacing, doctors and nurses, acting like a superpowered assistant working behind the scenes. AI models can scan medical images (X-rays, MRIs) for anomalies such as tumors, fractures and pneumonia faster and often more accurately than a radiologist alone, but final decisions still rest with a human professional. The result: faster diagnoses, fewer errors, and more time for doctors to focus on complex cases.
Tools like natural language processing now help doctors auto-generate clinical notes from speech or shorthand - less time on paperwork, more time with patients. AI models like DeepMind’s AlphaFold have predicted the 3D structure of proteins at a scale and speed no human lab could match. This is AI accelerating research, potentially bringing life-saving drugs to market faster. AI chatbots (e.g., Babylon, Ada) handle basic triage or health questions, pointing people to urgent care or self-care as needed. Given how much strain triage places on resources, this is a real opportunity to relieve emergency services, especially in under-resourced settings.
Doctors aren’t going anywhere. But their time and cognitive energy are being reallocated. The AI revolution in medicine is a 'tool shift', not a 'personnel purge'. It’s most powerful when clinicians lead the integration of AI, not when it’s imposed as a cost-cutting tool from above. Other fields are seeing similar augmentation models - law, journalism, education, design, agriculture.
When we view AI this way, are we lamenting the loss of an unhealthy, sooty, windowless room where humans toil from dawn to dusk to satisfy tourist condescension? Are we binding their children to a world where a global market for their skills is out of reach and a gifted assistant is not available to make them competitive?
Why AI resource hogging is just a temporary phenomenon
The mainstream media is vocal in its criticism of AI along many dimensions, some painting a dystopic future, others focusing on the impact on employment or human cognitive capacity. In this article, I want to show how the trajectory of AI means we can look forward to a time when AI will not require any particularly extreme energy or processing consumption. I begin with an anecdote. When I first wrote programs in high school, we used punch cards and sent them to a faraway university, with the outcome of our coding known only after several weeks. At that time our school took us on an excursion to what was then the SGIO - the state government insurance office - back when insurance was a state responsibility.
The SGIO tour guide ushered us into an enormous room, perhaps 50 metres by 50 metres (occupying a whole floor), where a giant computer whirred and burped, processing the records of thousands of Queensland clients. He proudly boasted of its performance and how it promised a future where everything about everything would be accessed through databases. Now the entire processing power of that computer would fit on a watch, and the records would fit on a SIM card.
It's difficult at any juncture to reliably predict the future, but I daresay that, despite an obvious trajectory, we never really believed the developments in IT and the Internet would be as compelling as we now know they are.
So too it is with AI.
Although our understanding of neural networks is almost half a century old, the means to implement any kind of sensible, practical and therefore useful AI has been constrained. When awed by the performance of today's AI, it is easy to believe we have 'arrived'. Mission accomplished.
But, of course, we have only just begun.
What threatens any optimism about AI, however, is the dark cloud of energy consumption and processing power. As corporate AI scrambles to build ever bigger, ever faster processing centres, we could believe we face a second wave of the industrial revolution, one that will pump an exponentially growing amount of carbon dioxide into our atmosphere.
But such dystopic scenarios are not really the future. Just as surely as we were always, one day, going to wear a computer on our sunglasses or watch, so too, AI is destined to become light and energy efficient.
Why am I so optimistic?
How the purpose changes everything
At the moment, LLMs (large language models) want to be everything to everybody. They do not want to be constrained, because every new feature is a money-spinner. The direction is larger and larger. But, just as context is everything for evidence, so purpose is everything for data.
To illustrate how purpose is everything, I would like to take you on a journey of awareness and perception. Let's take a digital display and experiment with its resolution. Here we have the numerals from 0 to 9 in a 4x4 pixel display (let's ignore the fact that this is not how digital displays actually work).

For illustrative purposes, let's imagine that each pixel requires a byte of data. The storage used (memory or disk space) is then 16 bytes. Putting aside whether this is realistic, we might consider this very efficient in terms of storage, but, clearly, the numerals rendered are fairly useless and, if reading them quickly were mission critical, might lead to catastrophe.

Now, let's take the resolution to 8x8. The digits are now distinguishable but might be ambiguous in isolation or outside the context of other numbers. Is a 0 a 0, is a 3 a 3, could an 8 be a robot and might a 9 be an amoeba? The penalty for improving the 'recognisability' of a digit is not double the storage, but four times as much.


Will 16x16 improve the readability? As we can see from the 5 and the 3, some of the ambiguity is reduced, but we have increased the size of the file to 256 bytes, a 16-fold increase over the 4x4 original, for a gain of perhaps 20% in readability (if that). Conversely, if 8x8 is all that is required for readability, is the extra storage that comes with the higher resolution really worth it? Bigger is not always better.

If we add colour, say one byte each for red, green and blue, storage triples again. That may be worth it if red denotes danger, as in "If the level drops to 3, the digit is displayed in red, warning the user." Maybe that three-fold penalty can be justified. And that's the point. Whether we add resolution or colour depends entirely on the end use.
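To make the arithmetic concrete, here is a minimal sketch in Python. It assumes one byte per greyscale pixel and three bytes (red, green, blue) per colour pixel, which is a simplification of how real displays and image formats work.

```python
# Back-of-the-envelope storage for one digit at various resolutions.
# Assumptions: 1 byte per greyscale pixel, 3 bytes (R, G, B) per colour pixel.

def storage_bytes(width: int, height: int, colour: bool = False) -> int:
    """Bytes needed to store a single digit at the given resolution."""
    bytes_per_pixel = 3 if colour else 1
    return width * height * bytes_per_pixel

baseline = storage_bytes(4, 4)  # the original 4x4 greyscale digit: 16 bytes
for size in (4, 8, 16):
    mono = storage_bytes(size, size)
    rgb = storage_bytes(size, size, colour=True)
    print(f"{size}x{size}: {mono:3d} bytes greyscale ({mono // baseline}x baseline), "
          f"{rgb:3d} bytes in colour")
```

The 16x16 colour digit needs 768 bytes, 48 times the original 16-byte grid, all for a marginal gain in readability.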
(For a fascinating investigation of fonts and legibility, see footnote 1)
So what?
Although this might seem to be merely an interesting investigation into font readability, the real lesson is that resource requirements are highly sensitive to the purpose of the technology. So, what does that mean in terms of AI?
Size matters in neural networks. Each expansion in size brings a severe penalty in terms of processing power (and therefore energy use). AI improves in its ability to learn and 'think' about things by increasing the number of calculations it performs. So an illustrative neural network of, say, 3 layers of 8 nodes, with 128 connections, requires about 1,000 calculations per training instance. Of course, this network does very little. With a small increase, adding another layer and 4 more nodes per layer, we get roughly 3,500 calculations.
Now, if we get 'serious' and keep 4 layers but increase to 1,000 nodes per layer, we have somewhere between 20 and 25 million calculations to make. A 125-fold increase in nodes per layer has cost us a roughly 25,000-fold increase in calculations. Putting aside that some GPUs perform TFLOPS (1,000,000,000,000 calculations per second), we can see that the number of calculations grows quadratically with layer width, to the point where, even for a single training event, it becomes extremely large. Now add the fact that training some LLMs requires millions to billions of passes, and we start to realise that resource use is both a function of complexity and likely to hit a ceiling.
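The scaling can be sketched in a few lines of Python. I'm using the common rule of thumb of about six floating-point operations per weight per training example (forward plus backward pass); the exact constant matters far less than how quickly the totals grow.

```python
# Approximate training cost for small fully connected networks.
# Rule of thumb: ~6 floating-point operations per weight per training example
# (roughly 2 for the forward pass, 4 for the backward pass). Illustrative only.

FLOPS_PER_WEIGHT = 6

def flops_per_example(layer_sizes: list[int]) -> int:
    """Rough FLOPs to push one training example through the network."""
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return weights * FLOPS_PER_WEIGHT

for layers in ([8, 8, 8], [12, 12, 12, 12], [1000, 1000, 1000, 1000]):
    print(f"{len(layers)} layers of {layers[0]:>4} nodes: "
          f"~{flops_per_example(layers):>12,} operations per training example")
```

Multiply the last figure by the millions or billions of training passes an LLM needs and the scale of the problem becomes obvious.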
If you're wondering whether this means there is some inevitable endpoint where we simply cannot push further, let me make this simple point. How does your phone recognise your face? Another way of asking this is, "How is it that facial recognition technology has emerged so readily?" The answer is relatively straightforward. The task is simple, and usually only about 10 data points are required for secure recognition, so training a facial recognition neural network is a 'doddle'. Just don't ask it to recognise your cat, because it's totally crap at that. Facial recognition AI can now run on a pinhead.
So, what's the point of all this?
Not all AI is the same. Increasingly, models with a more specific purpose are shrinking the resource requirements. But there is more to this story.
Researchers have now discovered that the orthodoxy of using 16-bit weights (65,536 possible values) to provide extremely high resolution is subject to the same 'natural law' of scale illustrated above with display resolution. Experiments found that halving the bit count to 8 bits (256 values) does not appreciably affect performance on many tasks, yet brings a large reduction in memory and computation. Further steps to 4 bits, then 2, and finally 1 made an even bigger difference to resource use. [1]
This suggests that how a neural network should be configured is entirely sensitive to the purpose for which it will be used. The promise of 1-bit weights is highly capable AI that could use a few watts of energy and fit in a watch, or in a tablet you swallow. That is, intelligent agents quite capable of identifying patterns inside you could provide a summary of your internal health, without any external or surgical intervention.
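A quick sketch shows why the bit count matters so much for where AI can live. Take a hypothetical model with 7 billion weights (a size typical of today's smaller LLMs) and compute the raw weight storage at each precision; real quantised formats carry some extra overhead (scale factors and the like) that is ignored here.

```python
# Weight-storage footprint of a hypothetical 7-billion-parameter model
# at different precisions. Overheads such as scale factors are ignored.

PARAMS = 7_000_000_000

for bits in (16, 8, 4, 2, 1):
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: {2**bits:>6} possible values per weight, "
          f"~{gigabytes:5.1f} GB of weight storage")
```

Going from 16 bits to 1 bit takes the same model from roughly 14 GB down to under 1 GB, the difference between needing a dedicated GPU and fitting comfortably on a phone or a wearable.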
So, in a sense, the era of the extremely frugal AI is already upon us.
But wait, there's more.
If AI's resource requirements can be reduced so that it fits easily on your laptop, maybe collective AI has even greater potential. In an experiment to train raw recruits to identify cancer, only 15 days of training was required to reach 85% accuracy, about what an average doctor might achieve. It turns out the recruits were pigeons. [2]
However, even if bio-agents for AI might not save energy or space, what this experiment revealed was that 'averaging' the results from the agents could give 99% accuracy - the level of the best diagnosis by the best human experts. The idea of collective AI is only just starting to emerge.
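Here is a minimal simulation of that pooling effect, assuming the judges are independent and using a simple majority vote rather than whatever averaging the original experiment used; independence is the crucial (and optimistic) assumption.

```python
# How pooling many imperfect, independent judges lifts accuracy.
# Each judge is right 85% of the time; the group decides by majority vote.
import random

random.seed(0)

def group_accuracy(individual_accuracy: float, group_size: int,
                   trials: int = 100_000) -> float:
    """Fraction of trials in which the majority of judges is correct."""
    correct = 0
    for _ in range(trials):
        votes = sum(random.random() < individual_accuracy
                    for _ in range(group_size))
        if votes > group_size / 2:
            correct += 1
    return correct / trials

for size in (1, 5, 15):
    print(f"{size:>2} judges at 85% each -> "
          f"{group_accuracy(0.85, size):.1%} correct by majority vote")
```

With fifteen independent judges, majority accuracy climbs past 99%, the same flavour of result the pigeon study reported.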
The AI landscape is complex and efforts to make it more efficient have really only just begun, with the promise of boundless benefits across the board.
Citations:
1. This video gives technical details about how this is achieved.
1-Bit LLM: The Most Efficient LLM Possible? Link
2. Pigeon AI Link
Footnotes:
1. https://vrmrck.be/projects/legibility/ or https://medium.com/@pvermaer/down-the-font-legibility-rabbit-hole-481f207a6013