What we talk about when we talk about AI

AI is regularly lauded by some as the answer to every modern problem. But can AI really do all of the things people want it to do — and what exactly are people talking about when they talk about AI?

What is AI? 

At Careful Industries, we define AI in two ways: 

Technically, AI is a field of computer science that uses advanced methods of computing.

Socially, AI is a set of extractive tools used to concentrate power and wealth. 

Many AI literacy efforts focus on understanding how AI systems work and how to use them, but it is equally – if not more – important to have a firm grasp of the social impacts of these technologies.

These social impacts play out at macro and micro levels, with some hiding in plain sight. For instance, over the last decade, one of AI’s most significant achievements has been to turn a handful of tech billionaires into global economic and political arbiters; another is that many traditional political leaders have also become unofficial marketers for the companies run by these billionaires. Naming and recognising these impacts is an important part of understanding the nature of AI, which is not just a set of advanced computing technologies, but also a legal construct, a cultural imaginary, and — for some — a system of belief.

“AI literacy” is not just a matter of getting to grips with data and algorithms and learning how Microsoft tools actually work, it also requires understanding power, myths, and money. This blog post explores some ways those two letters have come to stand for so many different things. 

[Image: a slide showing eight men: Elon Musk, Tim Cook, Sundar Pichai, Jeff Bezos, Jensen Huang, Mark Zuckerberg, Satya Nadella, Sam Altman]

There are many reasons AI is an ambiguous and shifting set of concepts; some are due to the technical complexity of the field and the rapidly unfolding geopolitical AI arms race, while others are down to straightforward media manipulation and the fact that awe and wonder can be catching. However, a fundamental reason AI is a confusing term is that it’s not actually the right terminology for the thing it describes.

“I had to think of a name to call it and I called it artificial intelligence”

One of the first things most AI researchers will tell you is that there is no single definition of AI. 

There are lots of reasons for this lack of clarity, including the fact that the term “artificial intelligence” was coined almost accidentally. When computer scientist John McCarthy organised the 1956 Dartmouth AI Workshop, he needed to call this burgeoning field of activity something, so he gave it a name. Interviewed in 2011, McCarthy said: 

August 31 [1955] was when I wrote the proposal for the Dartmouth summer study … I introduced the term of artificial intelligence on that date when I wrote the proposal. I had to think of a name to call it and I called it artificial intelligence. Now, later Donald Michie used the term machine intelligence and my opinion is that that was a better choice.

As such, “artificial intelligence” is not a highly specific technical label, but a name given in haste by someone writing a funding proposal. The fact that the term AI has persisted for so long and expanded to include the broader field of related computer science clearly indicates that many people find it useful, but you don’t need to get hung up on that particular pairing of terms or look for a deeper meaning to understand what it is. “Artificial intelligence” is almost like a nickname or a brand name; something understood by many to stand for something, rather than a precise description of any particular qualities. 

As wide as it is long

Fast-forwarding to the present day, the official definitions of AI tend to be complex and expansive because they are trying to capture a range of previous, current, and potential future developments. The EU AI Act for instance says: 

‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

As definitions go, this is very broad and quite complicated, but it is certainly shorter than the OECD’s definition, which comes with an 11-page explanatory memorandum. The US National Institute of Standards and Technology (NIST) is a little bolder, describing AI as: 

A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

Meanwhile, IBM, Google, Amazon, and many other tech companies offer their own definitions. Some of these describe the “human-like” capabilities of AI – a term that administrative and government bodies shy away from – while others prefer “adaptation” and “learning”; some make frequent use of the word “system”, while others call AI a “technology”. The range of ambition set out here gestures towards a number of different commercial game plans being executed with differing degrees of confidence.

The important thing about these definitions is that they are all different: each one is an attempt to assert some power and control over what does and does not count as AI. 

Statistics or Hollywood? 

The kinds of precise definitions given above are rarely referred to when AI is invoked as a transformational force that will “revolutionise” or “supercharge” an organisation or a field of activity. In policy announcements and at product launches, AI often takes on a more magical, mercurial form that draws on shared folk memories of science fiction rather than specific technical details.

When UK PM Sir Keir Starmer said in January this year that, “Artificial intelligence will be unleashed across the UK to deliver a decade of national renewal,” he was using AI to add some futuristic sparkle to an announcement about a centralised management approach:

Today’s plan mainlines AI into the veins of this enterprising nation – revolutionising our public services and putting more money in people’s back pockets. Because for too long we have allowed blockers to control the public discourse and get in the way of growth in this sector.

“AI” here is a metaphor for both a magic wand that can wave itself over the economy and a multi-tool that can self-deploy to solve all of our hardest economic and bureaucratic problems. In this semi-fictional AI future-present, it is easy to infer that no one will need to deal with the boring details of getting things done because AI will do it all for us. This is AI as management sci-fi: a convenient fiction in which a new technology seamlessly replaces old digital systems, enables behavioural change, and implements infrastructural updates. It is familiar from the situation rooms we see in movies and TV thrillers, where the right data about the right person always pops up at the right time, and from feelgood animated movies in which a humanoid robot touchingly saves the day.

The glamour of The Matrix, the sexiness of Samantha in Her, and the calm wisdom of HAL 9000 in 2001: A Space Odyssey are all invoked by these breathless descriptions, which draw on the unbridled potential of our imaginations rather than on the practical, everyday details of how databases operate and rule-based decisions work.

In Artificial Unintelligence: How Computers Misunderstand the World, Meredith Broussard draws a distinction between the AI we tend to see in movies and the AI that makes its way into real life.

General AI is the Hollywood kind of AI. General AI is anything to do with sentient robots (who may or may not want to take over the world), consciousness inside computers, eternal life, or computers that “think” like humans. Narrow AI is different: it’s a mathematical method for prediction. (p. 32) 

Broussard was writing in 2019, three years before OpenAI released ChatGPT, but the distinction is still relevant. While artificial general intelligence, or “Hollywood AI”, continues to be something a few high-profile researchers compete to achieve, there is no guarantee it will ever materialise. Instead, the AI systems in circulation today are examples of what science-fiction writer Ted Chiang calls “applied statistics”: probability engines rather than magical friends.
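To make “applied statistics” concrete, here is a deliberately tiny sketch of a probability engine: a bigram model that generates text by sampling each next word in proportion to how often it followed the previous one in a training corpus. The corpus and code are invented for illustration, and real systems such as ChatGPT use neural networks at vastly greater scale, but the basic move is the same: outputs are sampled from probability distributions learned from data, with no understanding involved.

```python
import random
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]

# Generate ten words: statistics all the way down.
word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output is fluent-looking but meaningless, which is the point: fluency here is a by-product of counting, not of comprehension.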

Got to have fAIth

Which brings me to the role of speculation and belief. 

Lots of the coverage of AI focuses on bold statements made by high-status male researchers. In past centuries, these people might have been religious seers or faith leaders, but these days they run AI companies. Just last week, DeepMind co-founder Demis Hassabis pulled the miracle card and suggested that using AI might mean “one day maybe we can cure all disease”, while earlier this year Anthropic co-founder Dario Amodei suggested that:

Possibly by 2026 or 2027 … the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”

These are both very big claims — the end of disease, the birth of superintelligence — but big claims are not unusual in AI. Over the last decade, many management consultancies have claimed AI will bring about a post-work world of everyday luxury, but on closer examination it is apparent that one of AI’s most significant impacts has been to create a whole new class of colonial and exploitative labour. Neither Amodei’s nor Hassabis’s claims are backed by data or a plan for realisation, even though both would amount to a significant social and economic revolution if they came to pass. These AI futures have been written in the stars rather than in detailed research or implementation plans.

AI Obscura

The narrative slippage and metaphorical vagueness with which many important people talk about AI make it very difficult to know what they actually mean – which in turn makes it harder to hold them accountable or to ask precise, difficult questions.

When heroic words are used to describe technologies that operate on the horizon of hope and ambition, it can feel awkward to ask practical questions such as “what are you actually proposing?” and “how will it work?”, but real knowledge requires detail and specificity rather than waves of shock and awe. AI technologies are not myths and should not be discussed as such; they are real technologies that use data, hardware, and human skills to bring about social, economic, environmental, political, and technological change.

Careful Industries AI training is focussed on helping people make clear-eyed, honest decisions about AI. We work with many non-technical leaders who are curious about the technologies’ overall potential, who may have tried generative AI, know their colleagues are already using it, or been on the receiving end of complicated sales pitches, but who don’t always have the space to ask straightforward questions or get direct, honest answers.

Puncturing the glossy, mythical hype bubbles is an important part of understanding AI and making its consequences clear. Through our training, detailed AI glossary, and mapping and decision-making tools, we work to make AI more real and tangible so that more people are confident to make good choices – something we think is democratically vital. Authoritarian power relies on mystery, but democracy and people power thrive on knowledge, transparency, and specificity.

If that sounds interesting, I’ll leave you with a longer version of our AI definition. 

Artificial intelligence is a broad, multi-disciplinary field of computer science that makes possible a number of advanced computing functions. These functions include analysing and processing data, using rules to organise and output information, and organising and completing tasks. Artificial intelligence systems also “learn” as they undertake these tasks so they can operate without, or with minimal, human intervention. 

Advances in modern computing power mean that AI-enabled products and services can process tasks very quickly, sometimes giving them the appearance of being “magic” or “superintelligent”. However, these outputs are not always accurate, and AI may produce biased or faulty outcomes. 

AI relies on physical infrastructure – including data centres and computing power – to function. This physical infrastructure is powered by energy, cooled with water, and built using rare minerals that are often mined in conflict zones; it can be extremely resource intensive.

AI is both mundane and revolutionary. The same broad family of technologies can be used to do extremely mundane and trivial things and to power extraordinary breakthroughs; for instance, image recognition technologies play a part in creating animal avatars on video calls, and are also being used to revolutionise scientific understanding by supporting research activities such as “scanning optical light from billions of galaxies up to 10 billion light years away”. 

AI is not infallible. To quote the columnist Jason Aten, AI is “just computers doing computer stuff”, made up of data and rule sets and powered by computers. AI can help bring about efficiencies but also exacerbate inequalities; it can turn tedious tasks into delightful ones, but it risks baking in errors as it does so; AI can return results with extraordinary speed, but at a climate-intensive cost. AI is many things, but none of them are likely to take over the world — it is people who will do that.
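As a minimal illustration of the “rules” and “learning” in the definition above, the sketch below contrasts a hand-written rule with a rule derived from example data. Everything here (the spam scenario, the labels, the numbers) is invented for illustration; real machine-learning systems fit millions or billions of parameters rather than one threshold, but the distinction holds: in one case a human writes the decision logic, in the other the system derives it from data.

```python
# Rule-based: a human writes the decision logic explicitly.
def rule_based_spam_check(message: str) -> bool:
    return "win a prize" in message.lower()

# "Learned": the decision logic is derived from labelled examples.
# Each example pairs a count of exclamation marks with an is_spam label.
examples = [(1, False), (2, False), (3, False), (9, True), (11, True)]

def learn_threshold(data: list[tuple[int, bool]]) -> int:
    """Find the exclamation-mark threshold that best separates the examples."""
    best_t, best_correct = 0, -1
    for t in range(max(count for count, _ in data) + 1):
        correct = sum((count > t) == label for count, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = learn_threshold(examples)
print(f"Learned rule: more than {threshold} exclamation marks looks like spam")
print(rule_based_spam_check("WIN A PRIZE today!!!"))  # True
```

The learned threshold will shift if the examples change; that data dependence is what the scare quotes around “learn” in the definition gesture at.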

Careful AI: AI 101 Courses | Training and consultancy

Related reading: Emily Tucker, Artifice and Intelligence (2022); Ali Alkhatib, Defining AI (2024); Arvind Narayanan and Sayash Kapoor, AI as Normal Technology (2025)
