The Only AI Explainer You'll Ever Need

Most confusion about “AI” comes from a single problem:

New people enter the conversation without any understanding of the history

Every AI hype cycle creates a fresh wave of people who assume AI started when they became interested in it. And because they do not know the origin, they also do not know the scope. So I'm writing this once, so I can just send it to newcomers instead of explaining it over and over forever.

So here is the full grounding in one place. If you understand this, you understand AI. You do not need anything else.

Artificial Intelligence started with a universal conjecture

The term Artificial Intelligence was coined in 1955 for the Dartmouth Summer Research Project on Artificial Intelligence. The Gods of AI all got together: John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. They defined the entire concept of Artificial Intelligence in a single sentence:

The conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

The claim is universal:
If intelligence is real, it has mechanisms.
If it has mechanisms, we can describe them.
If we can describe them, machines can run them.

Boom. Literally the second sentence, impossible to miss. Everything since 1955 is just exploring that territory. Case rested; everyone go home. Read on if you are actually interested though...

Artificial Intelligence is not a discipline, it's a moving frontier

Newcomers often think “AI” refers to a specific technology: LLMs are hot now, but it was CNNs a decade ago, LSTMs 20 years ago, and before that Bayesian algorithms, Monte Carlo search, Markov decision process solvers, and so on.

But from the beginning, AI included (directly from the paper):

1. Automatic Computers
2. How Can a Computer be Programmed to Use a Language
3. Neuron Nets
4. Theory of the Size of a Calculation
5. Self-Improvement
6. Abstractions
7. Randomness and Creativity

Together these span the entire spectrum of intelligence, not niche tasks.

The reason people misunderstand AI is simple: Every time we figure out a part of intelligence, we rename it and remove it from AI.

Search becomes gradient descent. Perception becomes computer vision. Language becomes NLP. "Neuron nets" become CNNs, DNNs, LLMs, and, broadly, machine learning and deep learning. Robotics becomes control theory and embedded systems.

What remains under the name “AI” is whatever intelligence we have not yet mastered. This is the "AI Effect": AI is whatever has not become boring engineering yet.
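To see what "boring engineering" looks like, here is a minimal sketch of gradient descent, the demystified descendant of what the field once called search. Everything in it (the toy function, the starting point, the learning rate) is an arbitrary illustrative choice, not anything from the Dartmouth paper:

```python
# A minimal sketch of gradient descent. Once this was frontier "AI";
# now it is a ten-line loop taught in intro courses.

def f(x):
    return (x - 3.0) ** 2 + 1.0   # toy objective, minimum at x = 3

def grad_f(x):
    return 2.0 * (x - 3.0)        # analytic derivative of f

x = 0.0                           # arbitrary starting point
learning_rate = 0.1               # arbitrary step size

for _ in range(100):
    x -= learning_rate * grad_f(x)  # step downhill along the gradient

print(f"minimum found near x = {x:.4f}, f(x) = {f(x):.4f}")
# prints: minimum found near x = 3.0000, f(x) = 1.0000
```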

Cybernetics discussed current risks long before “AI” existed

If you want to understand the ethics debates, you do not need new books. You need Norbert Wiener’s The Human Use of Human Beings (1950).

Wiener mapped out everything we are worried about today before the term AI existed: automation displacing human labor, machines that pursue the instructions we gave rather than the outcome we meant, and people ceding judgment to systems too fast for them to supervise.

Oh, and Johnny Cash did a great song about machine ethics and its intersection with society in 1963, “The Legend of John Henry’s Hammer,” derived from a 1927 ballad about John Henry, a Black freedman.

Does an engine get rewarded for its steam? - Johnny Cash

Oh, and don't forget Vonnegut's first novel, Player Piano (1952).

Joseph Weizenbaum warned about delegating judgment to machines in Computer Power and Human Reason (1976). James Moor formalized “computer ethics” in 1985. Nick Bostrom reframed everything as existential risk in Superintelligence (2014).

"AGI" is not new either.

My master's thesis advisor Ben Goertzel formalized and popularized the term when Springer published his textbook Artificial General Intelligence in 2007, and he has been running the AGI Conference for 19 years now.

Here's the history of the term laid out directly from Ben:

The brief history of the term “Artificial General Intelligence” is as follows. In 2002, Cassio Pennachin and I were editing a book on approaches to powerful AI, with broad capabilities at the human level and beyond, and we were struggling for a title. I emailed a number of colleagues asking for suggestions. My former colleague Shane Legg came up with “Artificial General Intelligence,” which Cassio and I liked, and adopted for the title of our edited book (Goertzel and Pennachin, 2005). The term began to spread further when it was used in the context of the AGI conference series. A few years later, someone brought to my attention that a researcher named Mark Gubrud had used the term in a 1997 article on the future of technology and associated risks (Gubrud, 1997).

Core AGI hypothesis: The creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability is, at bottom, qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.

The Dartmouth proposal encompasses this AGI hypothesis. It literally says “every aspect of intelligence.” We only needed the “AGI” label because, over decades, the field kept peeling off completed pieces of intelligence and turning them into independent subdisciplines. The field even had a period when this was called "Strong AI," in contrast to "Weak" or "Narrow" AI, as defined by Kurzweil, the public-facing soothsayer of AI who has been almost perfectly right about AI timing, which is honestly mind-boggling.

Again, stealing from Ben's footnotes:

Kurzweil originally contrasted narrow AI with “strong AI”, but the latter term already has a different established meaning in the AI and cognitive science literature (Searle, 1980), making this an awkward usage.

Artificial Intelligence isn't a discipline, it's a way of thinking

Newcomers expect AI to have a stable curriculum, but AI is not a discipline. It is a direction of research that spans mathematics, computer science, neuroscience, psychology, linguistics, philosophy, and robotics.

This is why people fight about definitions. They are trying to anchor a term that does not have a defined end point. It is like arguing about what “science” really means. Unless you are Karl Popper or a philosopher of science (and if any philosophers of science are reading this, know that I want to be friends, so please call me), you are missing the point.

And now you do not need another AI explainer ever again.

Copyright (c) 2025 Andrew Kemendo