Written by Scott Wilson
Winter is coming.
References from the canon of popular sci-fi and fantasy are never lost on folks who run in AI circles. So when you hear the chilling words spoken, they reach deep down into the psyche and twist your stomach into a tight nervous ball.
In the world of AI, winter may not bring Wildlings and White Walkers, but it can be just as terrifying in its own way. Funding collapses, layoffs happen, and suddenly no one wants to hear about your grand accomplishments trying to teach machines how to think.
All of this has happened before. Could it happen again? Not likely, but it’s still wise for students of AI to also be students of AI history.
The Long History of Not Quite Changing the World
Supercomputers will achieve one human brain capacity by 2010, and personal computers will do so by about 2020.
~ Ray Kurzweil, futurist
This isn’t the first time that artificial intelligence has seemed poised to completely alter the world. It may not be the first time people are wrong about it, either.
Artificial intelligence first became a term of art in 1955, when John McCarthy coined it in proposing the groundbreaking Dartmouth Workshop that would bring together many of the seminal thinkers in thinking machines the following summer. It didn’t take long for many of those same folks to start spinning out incredible visions of a future powered by intelligent computers.
- Herbert Simon and Allen Newell (creators of the Logic Theorist program) in 1958 predicted that “…within ten years a digital computer will be the world’s chess champion.” / Actually, it took until 1997 for IBM’s Deep Blue to best world champion Garry Kasparov in a chess match.
- Lyft president John Zimmer claimed in 2016 that a majority of trips offered by Lyft would be in fully driverless vehicles by 2021. / As of 2023, humans still drive almost all Lyft vehicles.
- Alan Turing believed that computers might achieve a 30 percent pass rate in his eponymous test of intelligence by 2000. / ChatGPT, which likely meets or beats that rate, was first released in 2022.
- Marvin Minsky predicted in 1970 that “…in from three to eight years, we will have a machine with the general intelligence of the average human being.” / Of course, we’re still some unknown amount of time away from that particular prodigy of invention.
Markets, and even academia, react badly when a technology oversells and under-delivers. By 1974, various efforts in machine translation and speech recognition using AI were going nowhere. Minsky and Papert’s 1969 analysis of perceptrons suggested that the neural networks of the day were inherently limited in their capabilities. The result was severe funding cutbacks and the collapse of many AI projects and businesses.
Despite a resurgence of interest in expert systems and narrow AI in the 1980s, something similar had happened by the end of that decade, as well. Japan’s ambitious Fifth Generation Computer Systems project, an $850 million AI initiative funded by the national government, spectacularly collapsed in 1991. After that, no one in the business or academic worlds wanted to touch AI.
When I graduated with my PhD in AI and ML in ’91, AI was literally a bad word. No company would consider hiring somebody who was in AI.
~ Usama Fayyad, Executive Director for Experiential AI at Northeastern University
Of course, we know that spring eventually came, with machine learning breakthroughs leading to advances in natural language processing (NLP) and computer vision.
Are We Primed for Another AI Winter Today?
2029 feels like a pivotal year. I’d be surprised if we don’t have AGI by then.
~ Elon Musk
AI is an area a lot of folks get excited about. It’s inevitable that some get a little bit too excited.
Predictions today can be just as ambitious as they were in the ’60s and ’70s. A 2022 survey of researchers who had published at the Neural Information Processing Systems and International Conference on Machine Learning conferences found that, on average, they put about a fifty percent chance on high-level machine intelligence emerging before 2060.
Of course, you don’t need high-level intelligence to start a revolution. Rudimentary AI can be plenty disruptive. But again, maybe not as disruptive as the predictions—and investments—suggest. A 2017 article in MarketWatch suggested that robots would take over some 90 percent of jobs in maintenance and groundskeeping within ten to twenty years. Seven years on, however, they have made no dent at all.
But a few optimistic predictions don’t make a failure.
The Costs Have to Match the Benefits for AI to Be Sustainable
Yet the areas where AI has excelled may not, in the end, be particularly valuable economically.
For starters, Large Language Models (LLMs), the most impressive application of machine learning today, seem most likely to impact jobs that don’t have a lot of economic value to society in the first place. Writers, artists, customer service reps… these are not particularly high-salary jobs. Eliminating them, in aggregate, will offer some savings… but maybe not in line with current investment expectations.
OpenAI, the hottest of the hot AI startups, was valued at $80 billion in February of 2024. That’s almost 9 times the value of the global freelance writing market.
We’ve also seen how both government and industry have slowed the roll of potential AI applications through regulation and skepticism. The decision that works created by AI can’t be copyrighted severely limits the market value of that work—it can simply be copied by anyone without consequence. And pressure from actors’ and writers’ unions in 2023 extracted contractual restrictions on the use of AI in the lucrative entertainment market.
In industries like law and medicine, a desire to keep a human in the loop will limit the potential for AI for decades as it builds expertise—and as patients and clients build trust.
Something similar is happening in transportation. While autonomous vehicles may already be safer than human drivers, fear of the unknown is keeping both testing and adoption on a short leash. Cruise, for example, revealed that it had the equivalent of 1.5 support workers on staff per autonomous vehicle… hardly a labor-saving arrangement.
That’s the kind of balloon-popping news that leads to disappointment in the markets. And disappointment leads to pullback. Speaking in 2022, Professor Fayyad predicted a third AI winter within the next five years. And Rodney Brooks, former director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and noted robotics expert, made a prediction in early 2024:
Get your thick coats now. There may be yet another AI winter, and perhaps even a full-scale tech winter, just around the corner. And it is going to be cold.
~ Rodney Brooks
The Best Way to Weather Any Storms that May Emerge
If that’s the case, and the swell of interest in AI today turns into a lot of broken hearts and empty offices, all is not lost. An education in artificial intelligence builds up a lot of core skills that can be put to good use in many related fields.
Knowing that the industry has gone through some downturns also affords the opportunity to see how AI professionals have coped with those slowdowns. After all, they didn’t all just wither up and go into hibernation. Many, in fact, kept working on machine learning projects out of the limelight and with less fanfare. Those were exactly the efforts that led to the breakthroughs powering the current wave of AI achievement.
How did they make it? The same ways that you can.
Repurpose Your Skillset in Areas Proven to Hold Value
There’s a good argument to be made that the AI bust of the 1990s should get the credit for the boom in data science that emerged in the early 2000s. You can’t read through a history of those early machine learning approaches to analyzing Big Data without finding the names and fingerprints of computer scientists who are now better known for AI.
Hinton, Williams, Bengio… while their hearts were in AI, their paychecks were coming from data science in those lean years. Even Usama Fayyad made his name in data mining initially.
It’s an even easier jump from machine learning for AI to machine learning for data science now. Today, data science is often categorized as a sub-field of artificial intelligence. If AI mania starts to wane, anyone with an AI degree still has plenty of opportunities to put their skills to work in the workhorse field of data science.
Pick a Specialization Where Legitimate Advances Are Coming with AI Tools
The same is true for many other information technology specialties. AI has already proven its worth in professional fields ranging from medicine to biochemistry to cybersecurity.
Even if investment plummets in more speculative parts of the field, there will continue to be strong demand where performance lines up with the hype. Hackers, for example, aren’t going to stop using AI to generate polymorphic network attacks or highly tailored spearphishing emails. Radiologists are going to want image analysis that can identify cancerous growths early.
That means AI experts will continue to be in demand in many specialty fields even if the industry pulls back.
Go Renegade and Work on Your Own AI Project Ideas
The other thing that happens in a downturn is that people with great ideas who couldn’t get the ear of investors during the boom take to their garages and start tinkering with the next big thing.
The hype cycle in AI right now is all about generative tools and techniques. And the results are unquestionably amazing. That’s why all the money is flowing into generative model building and applications right now.
But there are also plenty of approaches to artificial intelligence that aren’t built around massive datasets milled by machine learning algorithms. In fact, there are whole areas of non-statistical computational intelligence research that are being left in the dust by generative ML approaches.
If LLMs turn out to be a house of cards for all but a few highly specific uses in entertainment, if the hallucination problem with the technique can’t be solved, then methods that aren’t getting a hearing today may be the heroes tomorrow.
A drop in funding might also push AI engineers toward new efficiencies. In the same way that guys in garages in Silicon Valley came up with ways to build computers cheaper than the big mainframes in the ‘70s, small teams of AI researchers without access to all the big money resources may invent more efficient training methods and approaches.
Spring Follows Winter, Even in the Virtual World
It’s difficult to make predictions, especially about the future.
~ Niels Bohr
Will you really need that warm coat? Or is this time different? The technology itself has dramatically improved in such a short span of time that it is hard to imagine it won’t have a big impact on business and society.
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
~ Roy Amara, cofounder of the Institute for the Future
Amara’s Law, as Roy Amara’s observation is now known, offers a valuable lens through which to view breakthroughs in any transformative technology.
Built into that very human tendency is the sort of all-or-nothing run-up that created previous cycles of AI hype and despair. Yet the plunge is often just as much an overreaction as the hype. And when the flowers begin to bloom again, it will be the people with a strong education in AI who are first to smell the roses.
The short answer is that no one truly knows if this is another hype cycle or the long-awaited real thing. But laying in a stack of cordwood, sharpening your sword, and getting your hands on some dragonglass isn’t a bad idea no matter what.