
The Coming Wave

A book for tech insiders, policymakers and thinkers grappling with AI's transformative impact



Introduction

The book "The Coming Wave - AI, Power and the 21st Century's Greatest Dilemma," Is a book written by British researcher and entrepreneur Mustafa Suleyman, in collaboration with British writer Michael Bhaskar. Suleyman is a prominent figure within the AI industry, known for his contributions to advancing AI technology and his commitment to ethical principles in the field. Suleyman co-founded DeepMind in 2010, that rapidly gained recognition for its groundbreaking work in AI. This led to the extend that in 2014, Google acquired DeepMind for a reported £400 million. After the acquisition, Suleyman continued to play a key role in DeepMind at Google, leading its applied division which focuses on practical applications of AI. In 2022 he cofounded Inflection AI, a public benefit corporation developing software and hardware for machine learning and Generative AI (Gen AI). In June 2023 the company raised $1.3 billion USD from Microsoft and NVIDIA to build the, largest server-side AI super computer at that time. I recommend trying their flagship product PI. This is a ChatBot designed to be a kind and supportive companion offering text and voice conversations, friendly advice, and concise information. Now back to the book. The books stands out as a nuanced analysis of what AI means for humanity, balancing the optimism of technological potential with a sobering awareness of its risks, urging us to navigate our societal development of AI wisely. In a historical perspective, the authors look back in time at previous trends and leaps in technology and offers the metaphor of "waves" for impactful societal-changing technologies. This book is a must-read for individuals within technology, policy making or intrigued by AI's role in shaping our collective future.


As I usually do in my book reviews, I will first give a quick, distilled rundown of my main takeaways. This gives you the book's main ideas quickly and lets you jump ahead to the expanded points that seem most relevant to you. So here goes...


My main takeaways

Thinking about AI triggers the pessimism aversion trap for many

Many do not accept the reality of the situation. Suleyman points out that there is a tendency, particularly among elites, to ignore, downplay or reject narratives they see as overly negative. It's a variant of optimism bias that is influencing the debate around the future. However, we must not be overly optimistic; solutions cannot be taken for granted. A balanced outlook that accepts the full complexity and ramifications of AI is essential.


AI being omni-use, capable of autonomy, availability-effect asymmetric and hyper-evolving results in a world in exponential change

The omni-use technology of smartphones has changed the lives of the masses. Smartphones help us keep fit, play games, navigate cities, take pictures, follow the lives of our friends and even fall in love. The wave of IT partly followed industry expert Gordon Moore's observation, now known as Moore's Law, that the number of transistors on an integrated circuit doubles roughly every two years. Mustafa Suleyman observes a similar, yet far more rapid, trend for AI. He points out that the capability of current Gen AI is marked by the size of its Large Language Models (LLMs), which have grown at a staggering tenfold per year, making this the fastest-growing technology in history.
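
To put those two growth rates side by side, here is a small back-of-the-envelope sketch in Python. It is my own illustration of the arithmetic rather than a calculation from the book, simply comparing a doubling every two years with a tenfold increase every year:

```python
# Illustrative comparison of two exponential growth rates (my own arithmetic,
# not figures taken from the book):
#  - Moore's Law: a doubling roughly every two years
#  - LLM scale, per Suleyman: roughly a tenfold increase every year

def moores_law_factor(years: float) -> float:
    """Growth factor if capacity doubles every two years."""
    return 2 ** (years / 2)

def llm_scaling_factor(years: float) -> float:
    """Growth factor if model scale grows tenfold every year."""
    return 10 ** years

for years in (2, 4, 6, 10):
    print(f"after {years:>2} years: Moore's Law x{moores_law_factor(years):,.0f}"
          f"  vs.  10x/year x{llm_scaling_factor(years):,.0f}")
```

After a decade the doubling curve has grown about 32-fold, while the tenfold-per-year curve has grown ten-billion-fold, which is exactly why Suleyman describes this wave as hyper-evolving.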

Deciding to build increasingly stronger AI involves risk, deciding not to build AI also involves risk

Suleyman admits that the book's attitude towards AI is contradictory. Its complexity stems from AI's potential to be used by both good and bad actors, for progress and catastrophic scenarios alike. Banning AI might appear to be a safe risk-avoidance strategy. However, to meet our planet's grand challenges the development of new technologies is critical, as societal stability relies in part on continual innovation.


"Little Is ultimately more valuable than Intelligence" - Mustafa Suleyman

Unstoppable incentives have throughout history driven waves of technology forward, and AI is no exception

Among the driving incentives are:

  • The immense opportunity to acquire riches. - "Little is ultimately more valuable than intelligence" - Mustafa Suleyman

  • National pride for being the leading nation and an arms race for acquiring power before the enemy.

  • Knowledge wanting to be free and the often largely neglected incentive of ego.


The coming wave of AI must be contained responsibly

Mustafa Suleyman underscores in his book that the world is currently not ready for the coming wave. He believes that the only right way to respond to it is to contain it. To achieve safe containment, he suggests the following 10 areas of focus:

  1. Technical Safety - Concrete technical measures to alleviate possible harms and maintain control. (Think big red "off-switches")

  2. Audits - A means of ensuring the transparency and accountability of technology (Think third party inspections of technical systems)

  3. Choke points - Levers to slow development and buy time for regulators and defensive technologies (Think the ability to decrease the production of GPUs or other resources that development depends on)

  4. Responsible Makers - Ensuring responsible developers build appropriate controls into technology from the start (Think onboarding critics to the development teams)

  5. Incentivized Businesses - Aligning the incentives of the organizations behind technology with its containment (Think ensuring that businesses gain value or minimize risks by responsibly containing AI)

  6. Governmental regulation - Supporting governments, allowing them to build technology, regulate technology, and implement mitigation measures (Think letting governments be providers of certain societal AI services)

  7. International Alliances - Creating a system of international cooperation to harmonize laws and programs (Think UN or EU but for AI)

  8. Learning Culture - A culture of sharing learnings and failures to quickly disseminate means of addressing them (Think of how aviation is known for its black boxes, ensuring that we always learn from mistakes)

  9. Movements - All of this needs public input at every level, including pressure on each component to keep it accountable (Think political movements demanding safety)

  10. Coherence - Ensuring that all elements work in harmony with one another.

 

Thinking about AI triggers the pessimism aversion trap for many

Suleyman highlights a common psychological response to AI's implications: the pessimism aversion trap. He comments that this phenomenon, particularly prevalent among the elite, involves ignoring or downplaying narratives perceived as overly negative. It's a form of optimism bias that skews the debate about AI's future. People often choose to view AI through a lens of unwavering positivity, underestimating the potential challenges and overestimating the ease of finding solutions.



The Need for a Balanced Outlook

The book emphasizes the importance of adopting a balanced outlook towards AI. While it's essential to recognize AI's transformative potential, we must also be acutely aware of the risks and complexities it brings. Solutions to the challenges posed by AI are not guaranteed and should not be taken for granted. Understanding and addressing these challenges requires a nuanced and realistic approach, avoiding the trap of blind optimism.



AI being omni-use, capable of autonomy, availability-effect asymmetric and hyper-evolving results in a world in exponential change

Suleyman discusses four defining features of AI that catalyze a world in exponential change. The combination of these aspects is what makes AI's transformative power special.


Illustration showing a brain of connected nodes emitting light. In the middle is written "AI". Around it are enlarged node bubbles that each hold one of the features of AI: Hyper-Evolving, Omni-Use, Availability-Effect Asymmetric and Capable of Autonomy
The four features of AI

Omni-Use

AI's omni-use nature lies in its versatility. Like smartphones that revolutionized communication, AI will integrate into diverse sectors – healthcare, finance, education, and more. Its adaptability allows it to become a foundational tool across industries, shaping numerous facets of daily life and business operations.


Capability of Autonomy

Alan Turing, a British mathematician and computer scientist, envisioned AI's human-level autonomy capabilities in 1950. In his paper 'Computing Machinery and Intelligence,' he suggested the Turing Test to assess whether a machine had human-like cognitive capabilities. The test involves an interrogator communicating with a human and an AI solely through messages; if the interrogator cannot distinguish between them, the AI passes. Today, while the Turing Test has arguably been surpassed, its relevance to AI's full potential remains limited. Suleyman therefore suggests a contemporary version: give an AI $100,000 and challenge it to turn it into $1 million within a few months. He believes this could be possible in only three to five years. It might involve the AI autonomously designing products and negotiating distributed manufacturing and dropshipping agreements, with human involvement limited to legal and ethical supervision. This test highlights AI's growing ability to perform intricate, self-reliant tasks with minimal human input.


Availability-Effect Asymmetry

The familiar tale of tech entrepreneurs starting from humble beginnings, in garages or university dorms, resonates with the classic heroic David-vs-Goliath narrative. These stories often highlight how small-scale beginnings can lead to significant global impact. This is only possible because the elements used to build the innovators' ideas are widely available and highly scalable, creating a substantial asymmetry between the cost of innovation and its potential to effect change. AI embodies this principle by democratizing access to expert knowledge, creativity and execution. With AI's capabilities increasingly accessible via everyday devices, it opens new opportunities for innovation and impact, as well as the potential for unprecedented rates of disruption.


Hyper-Evolving

AI's evolution is hyper-accelerated: its growth trajectory is exponential and faster than that of any previous technology. This rapid development points to a future where AI's capabilities could expand beyond our current comprehension.


In conclusion, as Suleyman states, "The last wave generally reduced the cost of broadcasting information. This wave generally reduces the cost of acting on it." This profound insight encapsulates AI's role in this new era: it's not just about disseminating information but about enabling action on an unprecedented scale. AI's omni-use nature, hyper-evolving capabilities, availability-effect asymmetry, and autonomy mark a significant shift, heralding a new age of technological and societal transformation.



Deciding to build increasingly stronger AI involves risk, deciding not to build AI also involves risk

Everything of value and novelty that is created is a product of the creator's intelligence. A century ago, a kilo of grain would have taken 50 times the labor it does today, but smart innovations have changed that. Suleyman has posted on X that a world where the potential of AI is harnessed is a world of radical abundance in food, energy, materials, transport, healthcare, and education. If AI didn't have any positive benefits, there would be no hype about it in the first place. However, there are risks, and these risks will be difficult to steer clear of. This is what Suleyman calls the narrow path.


Picture titled "The narrow path to abundance": to the right, a dystopian future where current challenges aren't solved; to the left, a dystopian future where the challenges presented by AI haven't been managed
The narrow path to abundance - Picture generated with DALL-E 3 through conversation and iteration with a custom GPT. Refined in Photoshop with Adobe Firefly

Risks of Developing AI

Suleyman's book presents a spectrum of risks varying in likelihood and severity, each posing serious concerns and ranging all the way up to large catastrophes. Among others, the book points to China's mass surveillance systems as a stark example of how AI can be used to infringe upon privacy and civil liberties. Additionally, the threat of AI-powered cyber-attacks poses unprecedented challenges in digital security, potentially destabilizing critical infrastructure. The prospect of automated warfare and engineered pandemics further amplifies the potential for AI to be weaponized in ways that could dramatically alter the nature of conflict and public health crises. The emergence of deepfakes and their implications for misinformation campaigns raises concerns about the erosion of trust in media and public discourse. Moreover, there is the risk of job automation and widespread unemployment, something Suleyman acknowledges; in an interview with The Economist he said we could be facing this 20 years from now. Finally, Suleyman admits that not all risks of developing AI can possibly be foreseen. Here he provides an interesting historical perspective: the first makers of automobiles argued that cars would have the positive side effect of improving the environment, because they would outcompete horses for transportation and the large amounts of horse manure would disappear from public roads.


Risks of Not Developing AI

Conversely, Suleyman argues that halting AI development carries its own set of risks. He points out that societal collapses have been the norm throughout human history, but our current era of peace and stability is partially attributed to continuous economic growth. Thus, stagnating AI development could hinder this growth, potentially destabilizing societal structures that depend on continual technological and economic advancement. Furthermore, AI's potential in addressing some of the world's most pressing challenges, such as climate change, healthcare, and global inequality, is significant. Therefore, the decision to halt AI innovation could mean forfeiting the opportunity to solve these critical issues. This aspect of the dilemma highlights the intricate balance between harnessing AI's transformative power for the greater good and mitigating the risks it poses to society.



Unstoppable incentives have throughout history driven waves of technology forward, and AI is no exception

Suleyman's book delves into the driving forces behind technological waves, indicating that AI is propelled by the same unstoppable incentives seen previously in history.


Imagined picture of the first passenger railways 1830 - 1850, arguably the greatest economic bubble in history - Picture generated through conversation and iteration with a custom GPT

The immense opportunity to acquire riches

Acquiring riches is one of the primary forces propelling the advancement of AI. Suleyman notes that "little is ultimately more valuable than intelligence". The expansion of AI's omni-use capabilities suggests not just an economic boost but a potential permanent acceleration in the rate of world economic growth.


National pride for being the leading nation and an arms race for acquiring power before the enemy

The race to dominate in AI also stems from national pride and the strategic advantage it brings. Suleyman points out the competitive landscape in AI development, where U.S.-based companies currently lead in commercial applications. However, China is emerging as a formidable contender, especially in academic contributions, producing the most scientific publications and educating the largest number of PhDs on the topic, four times as many as the U.S. This competition is not just about economic gain but also about national prestige and maintaining a strategic edge in global geopolitics.


Knowledge wanting to be free and the often largely neglected incentive of ego

Another incentive driving AI forward is the human desire to push boundaries and achieve groundbreaking discoveries. Suleyman discusses the intrinsic motivation among scientists and innovators to contribute meaningful work, impress peers, and share novel insights. This drive is often accompanied by an underappreciated factor: ego, the pursuit of recognition and the desire to gain status among peers.



The coming wave of AI must be contained responsibly

As Mustafa Suleyman underscores in his book, the world is currently not ready for the coming wave. To contain the wave, he has made an outline for AI containment. It is not a definitive plan but rather an outline in which each element is a point for discussion that must be addressed. Each element represents a strategic approach to ensuring AI's development benefits humanity while minimizing its potential harms.


Suleyman's 10 elements towards containment

  1. Technical Safety measures are the first line of defense in AI containment, involving tangible controls like "off-switches" to prevent harm and maintain control over AI systems. This involves designing AI with built-in safety features that allow for immediate deactivation or modification if needed (see the small code sketch after this list).

  2. Audits ensure transparency and accountability in AI development. These inspections of technical systems help identify potential ethical breaches or safety risks, maintaining the integrity of AI technologies. This could mean governmental audits and, interestingly, perhaps dedicated AI built to audit new or existing AI systems as they improve.

  3. Choke Points involve implementing strategic levers that can be adjusted at will to slow AI development, allowing regulators and defensive technologies to keep up with the latest advancements. This could include controlling the production or availability of key resources like GPUs, or other fundamental dependencies that must be available in quantity for further AI development, just as acquiring radioactive isotopes is a choke point for developing nuclear technology.

  4. Responsible Makers are essential for integrating appropriate controls into AI from its inception. This involves onboarding critics and ethicists into development teams to ensure a balanced perspective on AI's impact. Currently, cybersecurity engineers are among the highest-paid professionals; companies can't afford to risk the integrity of their IT systems, and the same goes for AI systems.

  5. Incentivized Businesses are crucial for AI containment. Companies should be motivated to responsibly manage AI, ensuring that their operations do not compromise safety or ethical standards.

  6. Governmental regulation should work hand in hand with AI developers. Authorities should be empowered to build, regulate, and implement mitigation measures, possibly providing certain AI services for societal benefit. Just as governments provide fundamental public services such as infrastructure, some AI services could be provided the same way.

  7. International Alliances and cooperation are necessary to harmonize AI laws and programs, akin to the roles of the UN or EU. A global approach ensures consistent standards and practices in AI development and use.

  8. Learning Culture means fostering shared knowledge and openness about potential dangers, near-misses and failures, similar to how the strong safety culture shared across the aviation industry has led to the implementation of black boxes, whose data is meticulously analyzed after plane crashes to understand root causes and prevent future accidents.

  9. Movements, public input and activism play a critical role in AI containment. Political movements and public pressure can ensure accountability and responsiveness from all stakeholders involved in AI development.

  10. Coherence among all of the above elements is essential. Each component must work in harmony with the others, creating a unified and effective framework for AI containment.
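
To make element 1, Technical Safety, slightly more concrete, here is a minimal sketch of what a big red "off-switch" could look like in software. This is purely my own illustration with hypothetical names (StopSwitch, run_agent), not a mechanism proposed in the book: a long-running AI loop that checks an externally controllable stop flag before every action it takes.

```python
import threading
import time

class StopSwitch:
    """A thread-safe 'big red off-switch' that operators or monitors can flip at any time."""

    def __init__(self) -> None:
        self._stop = threading.Event()

    def press(self) -> None:
        """Engage the switch; the agent must halt as soon as it notices."""
        self._stop.set()

    def engaged(self) -> bool:
        return self._stop.is_set()


def run_agent(switch: StopSwitch, max_steps: int = 1000) -> None:
    """Hypothetical agent loop: every single action is gated on the off-switch."""
    for step in range(max_steps):
        if switch.engaged():
            print(f"Off-switch engaged at step {step}; halting safely.")
            return
        # ... perform one small, bounded, auditable action here ...
        time.sleep(0.1)
    print("Agent finished all steps without intervention.")


if __name__ == "__main__":
    switch = StopSwitch()
    agent = threading.Thread(target=run_agent, args=(switch,))
    agent.start()
    time.sleep(0.5)   # let the agent take a few steps
    switch.press()    # a human operator pulls the plug
    agent.join()
```

Real containment measures would of course be far more involved, spanning hardware, infrastructure and organizational processes, but the principle of gating every action on a control that humans retain is the same.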


Final remarks

If you've made it all the way here to the end, I want to thank you for reading my book review of 'The Coming Wave.' Although Mustafa Suleyman is a self-proclaimed optimist who walks the talk of improving the world through his companies, this book largely focuses on the ways AI could negatively impact humanity. This is ultimately to make a clear call to action towards proper AI containment.


I hope this book review doesn't discourage you or my other readers from adopting and utilizing AI where beneficial. Quite the contrary, I hope it will inspire you to participate in and contribute to the inevitable coming wave in a way that is responsible and thoughtful. That is what I'm planning to do.

If you enjoyed this review, it would mean the world to me to hear from you. Please give it a like or a comment where you found it, and if you would like more reviews of the books I've found insightful, or to see my latest creations, subscribe to my newsletter in the footer of my website 📨👇 Have a nice day! 🧠👨‍🏭👨‍💻👨‍🎨
