What we're talking about when we talk about AI
Why using the right words matters.

Large Language Models, Generative AI, and Machine Learning - once the domain of arXiv papers and NIPS conferences - are now front and center in our collective imagination thanks to ChatGPT’s breakout in late 2022. But what do people really mean when they say something is “AI-powered”? And does it matter whether it’s one flavor of AI or another?
Yes, it does. But don’t take my word for it. Read on.
📖 Setting the stage
This is the first in a multi-part series exploring the differences between Generative AI (“GenAI”) and Machine Learning (“Everything else AI”). Throughout this short series we’ll cover things like:
What is Generative AI, and how is it different from Machine Learning?
Is Agentic AI a type of GenAI or a separate thing altogether?
Why should I care?
How can I make money with AI? (sorry)
We’ll skip the deep technical weeds - no backpropagation or tensor algebra - and keep things light. My goal is to build a shared understanding (some call it alignment1) of what we actually mean when we say AI.
GenAI and ML operate at different levels of the stack.
This is where much of the confusion comes from.
But first, a bit about where I’m coming from, since it colors my perspective.
🏫 Some context
I studied Statistics in University2. We learned about regressions and time series, Bayesian and frequentist theories, Markov chains and martingales, and other stuff I struggled to understand at the time. About 5 years into my career as an Actuary3, I learned about this thing called Data Science.
In my mind, it was basically the same stuff I learned in school, except it required a lot more data, was written in code (first R, then Python) instead of Excel, and was much easier to grok4. That was cool! I also learned it was called Machine Learning (“ML”), a name I’d never come across until that point, and one I believe was popularized by Coursera’s first course5.
🧠 What about AI?
At the time, AI was the realm of researchers, roboticists, philosophers, and futurists. Sure, it had to do with data, but mostly it was concerned with rules, logic, modeling the human brain and the downfall of humanity due to maximizing paperclip production. It had nothing to do with predicting customer churn, or optimizing marketing spend.
Or so I thought…
A popular representation describing the layers(?), Venns(?), categories(?) of AI
Fast forward 10 years, and with the popular introduction of ChatGPT, everything is AI.
Chatbots? AI ✅
Automated marketing outreach? AI ✅
Netflix’s recommendation engine? “Old school” AI ✅
A computer vision system? Obviously AI ✅
Autonomous Agents? You guessed it, that’s also AI ✅
So what about all the stuff I (and many of my peers) spent years studying?
The present and future belong to AI. Which of course includes text generation, video editing, machine learning algorithms, autonomous agents, workflow automation, time series forecasting, A/B testing, supply chain optimization, and the future of baking.
🤔 Why should I care?
Most people, in their consumer context, probably don’t need to care about these differences beyond recognizing the snake-oil salesmen running about.
That said, in a business context we need to be clear about what we’re talking about. This isn’t simply an academic exercise where we debate terms for the sake of it; it’s about alignment. And in the workplace, alignment is the whole game.
Too many cycles are wasted because people aren’t talking about the same thing. Egos are bruised, eyes are rolled, and credibility is lost (by technical and non-technical people alike). Meetings go nowhere because we’re talking about ML when we should be talking about GenAI, and vice versa. Investment lands in the wrong places (e.g. Agentic solutions instead of data quality) in pursuit of “quick wins”. Mostly well intentioned, but misinformed.
Much of this is education, and that starts with terminology. In our next post, we’ll unpack Generative AI and Machine Learning.
Got any thoughts to add? Let me know!
1 There’s a joke somewhere here about AI alignment and human alignment, and how I’m going for human alignment. But I’m not sure we’re ready for that this early in the series.
2 That should tip off my American friends that I grew up in Canada. University of Toronto specifically, where one of the godfathers of AI works, though I never had the pleasure of taking his courses - ML wasn’t nearly as popular 20 years ago!
3 You’ll recognize this profession from Ben Stiller’s most riveting performances.
4 Simplifying a bit: in statistics, the goal is to explain the world - you start with assumptions, test hypotheses, and interpret residuals. In machine learning, the goal is to predict - you optimize a function so the model output matches the target (the thing you’re predicting), often without worrying about how or why it works. This leads to black boxes and explainability challenges. ML relaxes a lot of assumptions, meaning you can just try shit.
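(If you like seeing this in code: here’s a toy sketch of the two mindsets using plain NumPy. The data, coefficients, and variable names are all invented for illustration - this isn’t anyone’s production pipeline.)

```python
# Toy contrast between the "statistics" and "ML" mindsets.
# All data and coefficients below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # three hypothetical features
# The (pretend) true relationship, plus a little noise:
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Fit a linear model by least squares (intercept column prepended).
coef, *_ = np.linalg.lstsq(np.c_[np.ones(100), X], y, rcond=None)

# Statistics mindset: *interpret* the fitted coefficients -
# each one is an estimate of how the world works.
print("estimated coefficients:", coef)

# ML mindset: all that matters is whether predictions match the
# target on data the model hasn't seen - the model itself can
# stay a black box.
X_new = rng.normal(size=(10, 3))
y_pred = np.c_[np.ones(10), X_new] @ coef
```

Same fitted model in both cases - the difference is whether you stop to read the coefficients or only care about `y_pred`.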
5 After taking Andrew Ng’s Machine Learning course on Coursera, it was clear to me there was overlap between the fields, but I needed to refresh my knowledge. So I went and got a Master’s in Statistics and ML. It was an eye-opening experience to re-learn “classic stats” just 10 years later!