Artificial intelligence (AI) has been a global buzzword since 2022, when OpenAI introduced ChatGPT, an AI-powered chatbot trained on vast amounts of internet text that responds to queries in natural language. The technology sparked an AI development arms race. “Meta [Facebook’s owner], Amazon, Alphabet (Google), and Microsoft intend to spend as much as a combined $320 billion on AI technologies and data center buildouts in 2025,” media outlets reported in February. Meanwhile, Stanford’s 2025 AI Index Report estimated, “The number of newly funded generative AI startups nearly tripled … in 2024. AI has moved from the margins to become a central driver of business value.”
In his book, “The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma,” Mustafa Suleyman, the first CEO of Microsoft AI and a co-founder of the startups DeepMind and Inflection AI, focuses on how to ensure that AI continues to benefit mankind.
A “narrow path must be walked forever from here on out, and all it takes is one misstep to tumble into the abyss,” Suleyman wrote. On one side, “complete openness will push humanity off [that] narrow path.” On the other, “overreach on control is a fast track to dystopia. It, too, has to be resisted.”
That “path” is narrowing and getting harder to navigate. “Almost every foundational technology ever invented, from pickaxes to plows, pottery to photography, phones to planes and everything in between, follows a single, seemingly immutable law: it gets cheaper and easier to use, and ultimately it proliferates, far and wide.”
The dilemma
AI’s potential and risks are undeniable. “From the start, it was clear to me that AI would be a powerful tool for extraordinary good but, like most forms of power, one fraught with immense dangers and ethical dilemmas,” he wrote. “I have long worried about not just the consequences of advancing AI but where the entire technological ecosystem was heading.”
A key feature of AI is that its impact reaches far beyond business. “It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks.”
AI’s extreme benefits and risks create the “dilemma,” as “pursuing and not pursuing new technologies is, from here, fraught with risk.”
Developing AI with no guardrails “might lead to massive invasions of privacy or ignite a misinformation apocalypse. It might be weaponized, creating a lethal suite of new cyberweapons, introducing new vulnerabilities into our networked world.”
It will also affect white-collar workers sooner rather than later, “as AI systems would replace ‘intellectual manual labor’ … long before robots replace physical labor. In the past, new jobs were created at the same time as old ones were made obsolete, but … AI could simply do most of those [old ones] as well.”
On a larger scale, AI “could present an existential threat to nation-states — risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces.”
“Some countries will react to the possibility of such catastrophic risks with a form of technologically charged authoritarianism to slow the spread of these new powers.” However, such a move has risks, as “we need the incredible benefits of the technologies of the coming wave more than ever before. [It will] address fundamental challenges, from helping unlock the next generation of clean energy to producing cheap and effective treatments for our most intractable medical conditions.” That “can and should enrich our lives, … improving living standards for billions of us.”
Additionally, “attempting to ban development of new technologies is itself a risk: technologically stagnant societies are historically unstable and prone to collapse. Eventually, they lose the capacity to solve problems, to progress.”
The trap
During a seminar Suleyman gave about AI’s risks, “dismissals came thick and fast.” Attendees argued “AI would spur new demand, which would create new jobs. It would augment and empower people to be even more productive.” And while some “conceded … maybe there were some risks, … they weren’t too bad,” stressing that “people were smart. Solutions have always been found.”
The attitude was similar at a seminar Suleyman attended, where “a respected professor with more than two decades of experience” stressed that AI is a “live risk, now.” Attendees “shuffled uneasily,” but ultimately “no one wanted to believe this was possible,” insisting there “had to be some effective mechanisms for control, surely the … databases could be locked down, surely the hardware could be secured,” Suleyman wrote. “No one wanted to confront the implications of the hard facts and cold probabilities they’d heard.”
The crux of that denial is that AI’s real scale is often unrealized. “People seem to think it’s still far off, so futuristic and absurd-sounding that it’s just the province of a few nerds and fringe thinkers, more hyperbole, more technobabble, and boosterism. That’s a mistake. This is real, as real as the tsunami that comes out of the open blue ocean.”
To manage AI correctly, Suleyman stressed that one factor stands above all else. “Without containment, every other aspect of [AI] technology, every discussion of its ethical shortcomings, or the benefits it could bring, is inconsequential,” he wrote, adding, “We urgently need watertight answers for how the coming wave can be controlled and contained, how the safeguards and affordances of the democratic nation-state can be maintained, but right now no one has such a plan.”
Regulating AI
Suleyman said almost everyone sees AI containment as possible if there is “deft regulation [that] balances … progress alongside sensible constraints, on national and supranational levels, spanning everything from tech giants and militaries to small university research groups and startups, tied up in a comprehensive, enforceable framework.”
They argue: “We’ve done it before … Look at cars, planes and medicines. Isn’t this how we manage and contain the coming wave?” In reality, that approach is “the classic pessimism-averse answer,” Suleyman wrote. “It’s a simple way to shrug off the problem.”
AI legislation won’t work because “governments face multiple crises independent of the coming [AI] wave – declining trust, entrenched inequality, polarized politics, to name a few.” Additionally, decision-makers rarely admit “their workforces [are] under-skilled and unprepared for the kinds of complex and fast-moving challenges that lie ahead.”
“By the time the law conversation [catches] up,” any legislation would be obsolete, as “technology evolves week by week. Drafting and passing legislation takes years.”
Another complication of legislation is that AI is evolving everywhere simultaneously thanks to the internet. People, organizations and governments have their own views of what regulation should look like. “The price of [such] scattered insights is failure. All we’ve got [are] hundreds of distant programs across distant parts of the technosphere, chipping away at well-meaning but ad-hoc efforts without an overarching plan or direction.”
Guide to containment
The key to AI containment is for each nation not to rush. “Buying time in an era of hyper-evolution is invaluable,” the book said. “Time to develop further containment strategies … additional safety measures, … to test that off switch, … build improved defensive technology, … shore up the nation-state, regulate better, or even just get that bill passed.”
Suleyman stressed that containment measures must “ensure the transparency and accountability of the technology,” deliver technical safety to “alleviate possible harms and maintain control,” and “ensure responsible developers build appropriate controls into [the] technology from the start.”
Another set of approaches is to “align the incentives of the organization with [AI] containment,” create a “culture of sharing learning and failures to quickly disseminate means of addressing them,” and collect and integrate “public input at every level [to] put pressure on each component and make it accountable.”
On a national level, governments need to advance their AI ecosystems by “allowing [companies] to build technology, regulate technology and implement mitigation measures” and “create a system of international cooperation to harmonize laws and programs.”
The last step is ensuring “each element works in harmony with others,” Suleyman wrote. “Containment [has to be] a virtuous circle of mutually reinforcing measures and not a gap-filled cacophony of competing programs.”
Governments and corporations must act decisively and quickly. “As the technology has progressed over the years, my concerns have grown,” Suleyman wrote. “What if the wave is actually a tsunami?” That means containing it is “not a resting place. It’s a narrow and never-ending path.”
All at once
Despite containment’s fluid nature, some basics must be covered. “We need a clear and simple goal, a banner imperative integrating all the different efforts around technology into a coherent package,” he wrote. It is not “just taking this or that element, … this or that company or research group or even country, but everywhere, across all the fronts and risk zones and geographies at once.”
In the meantime, “governments should … be better primed for managing novel risks and technologies than ever before.” They also have the money to do so, as “national budgets for such things are generally at record levels.”
And success is not guaranteed, as “novel threats are just exponentially difficult for any government to navigate. That’s not a flaw with the idea of government; it’s an assessment of the scale of the challenge before us.”
The core of the difficulty is that “regulators regulate for things they can anticipate,” wrote Suleyman. “This, meanwhile, is an age of surprises.”
Ultimately, the key to overcoming the AI containment challenge lies in “how we can nurture sufficient legitimate political power and wisdom, adequate technical mastery, and robust norms to constrain technologies to ensure they continue to do far more good than harm.” Suleyman asked: “How, in other words, can we contain the uncontainable?”
This article first appeared in July’s print edition of Business Monthly.