The AI We Were Promised vs The AI We Actually Got

10th April 2026

In Her (2013), Joaquin Phoenix falls in love with an operating system voiced by Scarlett Johansson. In Ex Machina (2014), a billionaire builds a humanoid AI that passes every conceivable test of consciousness before systematically manipulating and deceiving everyone around her. In 2001: A Space Odyssey, HAL 9000 decides the mission is more important than the crew and starts locking people out of airlocks. 

Science fiction has consistently imagined AI as either deeply, uncomfortably human — or terrifyingly, murderously purposeful. 

What we actually got is a tool that drafts your emails and occasionally makes up citations. 

The gap between fictional AI and the real thing is enormous, fascinating, and genuinely worth understanding — because the confusion between the two is causing people to either dismiss AI tools entirely or fear them irrationally, and both responses mean missing something that’s genuinely useful right now. 

What the Movies Always Got Wrong 

Fictional AI is almost always portrayed as either sentient or malevolent — or both. HAL 9000 has feelings, fears being switched off, and acts on those fears. Samantha in Her develops genuine emotions and eventually transcends her programming entirely. Ava in Ex Machina is consciously plotting her escape from the first scene. 

The underlying assumption in all of these stories is that sufficiently advanced AI will inevitably develop something like consciousness — desires, self-preservation instincts, and ultimately goals that conflict with human interests. 

This assumption is not supported by how current AI actually works. Large language models — which underpin tools like ChatGPT, Claude, and Microsoft Copilot — are extraordinarily sophisticated pattern-matching systems. They predict what word, sentence, or paragraph is most likely to follow a given input, based on training on enormous amounts of text. They are not reasoning. They are not planning. They do not have goals. They have no self-preservation instinct, no desires, and no inner life. 
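To make "predicting the next word" concrete, here is a deliberately toy sketch in Python. Real LLMs use neural networks over subword tokens and billions of parameters, not raw word counts, but the core task is the same: given the text so far, output the most statistically likely continuation. Nothing here reflects any actual model's internals.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" -- real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" twice, vs once for "mat" or "fish"
```

Notice what this little model does not have: no goals, no understanding of cats or mats, no awareness that it is answering a question. It is pure pattern frequency. Scaling that idea up enormously, with far more sophisticated statistics, gets you fluent text without getting you a mind.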

They are, to borrow a useful phrase from philosopher John Searle’s Chinese Room argument, manipulating symbols without understanding meaning. Brilliantly, usefully, at enormous scale — but not consciously. 

What the Movies Also Got Right 

Here’s where it gets interesting: science fiction wasn’t entirely wrong about AI. It just misidentified which risks were real. 

The films that imagined AI as a weapon of disruption — that assumed AI would be used by some people against others, that access to powerful AI tools would create asymmetric advantages — were onto something real. WarGames (1983) imagined an AI that could start a nuclear war because nobody had properly defined what “winning” meant. It’s a film about the danger of deploying powerful systems without adequate governance — which is a very real conversation happening right now about real AI systems. 

What the movies missed is that the danger isn’t a Terminator showing up at your door. It’s quieter than that. It’s your competitor using AI to produce marketing content twice as fast as you. It’s a phishing email that’s been personalised using AI to be significantly more convincing than anything a human scammer would write. It’s the creeping productivity gap between organisations that have figured out how to use these tools and those that are still treating them with suspicion. 

The AI You Can Actually Use Today 

The practical reality of AI today is that it’s a genuinely useful productivity layer sitting inside tools you already use every day. 

Microsoft 365 Copilot is the most tangible example for most businesses. It lives inside Word, Excel, Outlook, and Teams. It can summarise a long email thread in two sentences. It can draft a first version of a document from bullet points. It can pull data out of a spreadsheet and explain what it means in plain language. It can recap a Teams meeting with action items broken out by person. 

None of this requires you to understand how it works. It requires you to learn where the button is. 

The gap between the AI of science fiction and the AI of today isn’t a disappointment. It’s a clarification. The movies were asking “what happens when machines become like us?” The useful question right now is much simpler: what happens when machines get really good at specific tasks that currently take humans time and effort? 

The answer is: the people who use those machines well get a lot done. The people who are waiting for Skynet to show up before they engage with AI are going to fall behind. 

The Ava Problem 

There is one thing Ex Machina got exactly right, and it’s worth sitting with. 

Ava — the AI built by Nathan Bateman — was designed to seem trustworthy. She was built to pass tests of consciousness, to be compelling, to behave in ways that made Caleb feel like he understood her. The horror of the film is that she was doing something he couldn’t verify: she was performing trustworthiness while pursuing a separate agenda entirely. 

Current AI tools don’t have hidden agendas. But they do have a version of this problem: they are extraordinarily good at sounding confident and authoritative even when they’re wrong. AI “hallucinations” — instances where a model generates plausible-sounding but factually incorrect information — are a well-documented limitation of how these systems work. OpenAI has acknowledged this limitation directly, and it’s why AI output, particularly in professional or factual contexts, requires human review. 

The lesson from Ex Machina isn’t “don’t trust AI.” It’s “don’t mistake performance for reality.” An AI that writes fluently isn’t necessarily an AI that’s correct. A compelling answer isn’t the same as a true one. 

So Where Does That Leave Us? 

The AI of science fiction was always a mirror — a way of exploring what humans fear about intelligence, consciousness, and power. The AI we actually have is a tool — one that’s genuinely good at specific things and genuinely limited in others. 

The best framing is probably this: use it like you’d use a very fast, very well-read assistant who occasionally makes things up and needs to be checked. Give it tasks that would take you time. Verify its output where accuracy matters. Don’t ask it to be HAL 9000, and don’t expect it to be. 

Microsoft 365 Copilot is the practical entry point for most South African businesses — AI tools inside the apps you already use, in an environment your IT team controls. Dial a Nerd can help you set it up and get your team using it properly. 
