Last month Hunter explored how AI may impact our lives. Today I'll ask a different question: when? How long do we have before real disruption to the status quo? Is this something for our children to deal with, or sooner? I ask because something fascinating just happened, and it suggests the timeline may be much shorter than expected.
Specifically, OpenAI recently released a model named o1, featuring a new capability called chain of thought. CoT can be thought of as a working memory that allows a language model to evaluate its own answers and re-prompt itself internally. The result is astonishing: at times exceeding the top end of human performance in advanced math and science.
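To make the idea concrete, here is a toy sketch of that draft-critique-revise loop. Everything here is a hypothetical stand-in: `draft`, `critique`, and `revise` would be model calls in a real system, and the scores are made up for illustration.

```python
# Toy illustration of a chain-of-thought style loop: the model drafts an
# answer, grades its own work, and re-prompts itself until the grade is
# good enough. All three helpers are stand-ins for real model calls.

def draft(question):
    # Stand-in for the model's first-pass answer (made-up score).
    return {"question": question, "answer": "guess", "score": 0.2}

def critique(state):
    # Stand-in for the model grading its own answer.
    return state["score"]

def revise(state):
    # Stand-in for the model re-prompting itself with its own critique.
    return {**state, "answer": state["answer"] + "+", "score": state["score"] + 0.3}

def chain_of_thought(question, threshold=0.8, max_steps=5):
    state = draft(question)
    steps = 0
    while critique(state) < threshold and steps < max_steps:
        state = revise(state)
        steps += 1
    return state["answer"], steps

answer, steps = chain_of_thought("What is 17 * 24?")
```

The point is structural, not numerical: the model itself sits in both the "answer" and "judge" seats of the loop, which is what makes the self-improvement framing more than a metaphor.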
By itself o1 will not change the world, but the key takeaway is that AI is becoming talented enough to help us create the next generation of AI. Technology has always worked like this, forming feedback loops where any new invention becomes part of the toolkit used to create the next. The difference here is that we’ve never had an invention that could also invent things. Technology has always been the toolkit, not the inventor.
This is why technological progress over human history has been relatively slow- it was bound by a fixed quantity of human intelligence. If we draw a simple graph of our intelligence over thousands of years, the line would go up a bit, but slowly.
We also have limited time, limited attention, and a limited number of people anywhere near that purple line. In contrast, AI can run 24/7, at blinding speeds, and in swarms of thousands or more. Anthropic CEO Dario Amodei recently described this as "a country of geniuses in a datacenter." Such significant performance advantages get pretty interesting when we consider where AI currently lands on the intelligence graph.
Note that the debate over AGI and whether AI has surpassed humans isn’t necessarily useful in this context. What I’m watching is the proportion of AI design and development being done by AI. Because as human intelligence recedes from the equation, the cycles will have nothing left to slow them down. Each new model will be smarter and faster, and build more of the next. Pure acceleration.
To get a sense for just how fast this could happen, let’s zoom into the graph and plot a curve representing AI’s potentially exponential progress. Past a certain point, the green line goes straight up, meaning time essentially stops, and a dramatic leap could take place in mere seconds.
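A toy calculation shows why the curve bends this way. Suppose each model generation doubles capability and also halves the time needed to build the next generation; these numbers are pure illustration, not a forecast. The generations then pile up against a fixed point in calendar time:

```python
# Toy compounding model (illustration only, not a forecast): each
# generation multiplies capability by `gain` and shrinks the build time
# for the next generation by `speedup`.

def takeoff(gain=2.0, speedup=0.5, horizon=10.0, max_gens=20):
    t, capability, interval = 0.0, 1.0, 1.0
    generations = 0
    while t + interval <= horizon and generations < max_gens:
        t += interval
        capability *= gain
        interval *= speedup   # each cycle is shorter than the last
        generations += 1
    return generations, capability, t

gens, cap, elapsed = takeoff()
```

With these made-up parameters, the build intervals (1, 0.5, 0.25, ...) sum toward 2 time units, so all twenty generations fit inside the first two units of a ten-unit horizon while capability grows a million-fold. That clustering against an asymptote is what "time essentially stops" means on the graph.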
This is the context in which the advanced capability of the new o1 model is such a big deal: we may be approaching an explosive "take-off" event that defies the imagination. And if the feedback loop logic is correct, which it seems to be, this could happen as soon as the next few years. That may seem impossibly fast in linear, human-oriented time, but this will not be a human-oriented process.
Ultimately this is not a phenomenon that evolution has prepared us for. For millions of years, tomorrow has looked more or less like yesterday, which is why we struggle to understand things we haven't seen before. In 1903 the NY Times published an editorial titled "Flying Machines Which Do Not Fly," suggesting it would take millions of years for a machine to fly. The Wright brothers did it a couple of months later.
To be abundantly clear, humans are still very much leading the design and development of AI models. But it’s also true that AI is now assisting at every level of its own stack.
Even the code behind Copilot is being optimized by AI. Listen to Microsoft CEO Satya Nadella: "The auto-encoder we use for Copilot is being optimized by o1. So think about the recursiveness of it- we are using AI to build better AI tools to build better AI. It's a new frontier."
Let's also not overlook the structure of o1 and chain of thought, where the model is effectively communicating with itself to improve its ability.
The takeoff scenario has now become sufficiently realistic that Anthropic updated its risk framework to warn about models that can “independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way.”
Unpredictable indeed. Nobody knows where this is going, but we’re about to find out.