
Nexus - A Brief History of Information Networks from the Stone Age to AI

A book for democratic participants and leaders to understand AI’s societal impact


Book cover of Yuval Noah Harari's book Nexus - A Brief History of Information Networks from the Stone Age to AI, with a comment from Mustafa Suleyman calling it "Masterful and provocative"

Introduction

History professor Yuval Noah Harari, known for his bestselling books Sapiens and Homo Deus, returns with Nexus - A Brief History of Information Networks from the Stone Age to AI. In this book, Harari skillfully interweaves history and foresight to explore how information networks, from the earliest stories told around tribal firelight to modern AI-driven digital ecosystems, have shaped humanity and our shared reality. But unlike his previous works, Nexus conveys an urgency for societal dialogue: how should we handle AI's unfathomable power?

The topic of AI and AI safety shouldn't be exclusive to tech experts. Because of AI's wide societal implications, it should be approached as a democratic issue that includes everyone in the conversation. In an interview with The Economist, Harari made his case clear: "this is the end of human history". He didn't mean it as fearmongering, but as a recognition that AI is rewriting the rules of our collective history.


“This is the end of human history” - Yuval Noah Harari

As I usually do in my book reviews, I start with the conclusion: "My main takeaways". This gives you the key ideas upfront and helps you decide which sections to explore further, as I unpack them in the rest of the review.


So here goes…


My main takeaways

AI might be the most transformative information technology ever

Unlike previous information technologies, AI is not a tool. AI will be a contributing member of our organisations and our society, able to use tools by itself and to collaborate with people and with other AIs.


AI can create intimate relationships at mass scale

AI's linguistic abilities can evoke deep emotions, to the extent that people have put their jobs and lives at stake for the sake of their relationship with an AI. Intimacy could be used for good or ill, but in the hands of a dictator or another powerful entity, AI could become a technology for mass manipulation.


More Information ≠ More truth

Increased access to information doesn’t inherently lead to clearer truths. Historical examples, from witch hunts to 20th-century propaganda, illustrate how amplified information can also spread falsehoods. AI risks accelerating this phenomenon by contributing to biased or misleading narratives at scale.


The alignment problem isn't new and will keep occurring

By definition, AI can learn on its own, make decisions, and create various outputs. Usually this ability is valuable, but occasionally it leads to unintended, accidental outcomes. As AI is deployed more widely, wrong outcomes could have tragic societal consequences. We've already witnessed this on social media, where algorithms meant to boost engagement have also fuelled hate and outrage.


How to solve the alignment problem?

AI must be regulated and trained with principles that enable it to appreciate nuance and respond with empathy, benevolence, and sound ethical judgment. Importantly, AI should also recognise its own fallibility and be able to express self-doubt. However, the solution goes beyond technological fixes. Harari advocates for the creation and reinforcement of institutions dedicated to fostering corrective mechanisms: institutions that would recognise unintended, erroneous outcomes and act swiftly to correct them, ensuring that AI remains aligned with societal values.


 

AI might be the most transformative information technology ever


Throughout human history, monumental technological leaps have enabled humans to share ever more information with each other. This ability has shaped and permitted the creation of civilisations. Harari outlines these shifts: from the oral traditions of storytelling that laid the groundwork for communal organisation, to the invention of writing, documents and later the book, which allowed records, stories and power to be preserved and to propagate beyond the spoken word. The printing press made information cheap, so ideas could scale up and spread far more swiftly, while the telegraph and radio introduced an age of instantaneous communication. The computer, with its capacity to process complex data, revolutionised how societies operate. Now the computer has gained the powers of artificial intelligence.


What sets the AI-enhanced computer apart, according to Harari, is that it doesn’t merely facilitate human interaction—it participates. Unlike earlier technologies, AI possesses two game-changing capabilities:

  • It can make decisions by itself.

  • It can create new ideas by itself.

This capacity elevates AI beyond being a mere tool; AI becomes a distinct member of our information networks, one that can itself use other tools in the network, for instance by writing news articles or creating podcasts, YouTube videos and other storytelling artefacts that constitute human reality. AI will work side by side with humans in shaping our social, economic, and political systems.


But where humans have hard limits, AI goes beyond them. AI doesn't have to sleep or take breaks. It can analyse data on a scale orders of magnitude larger than any human can, and AI network members can be copied and deployed to the extent that the human members become negligible. Harari's perspective invites a reconsideration of what kind of society we actually want, in an age where non-human agents share in authoring our collective narrative, with unpredictable outcomes.


Harari highlights the financial sector as a prime area of concern. Since the financial realm is largely digital, a construct of trust built on agreements, fintech AIs could autonomously develop and innovate new financial instruments, beyond loans, cheques, bonds, stocks, derivatives and cryptocurrencies, that could be so complex they surpass human understanding. This presents a great danger: an economic landscape shaped by AI agents operating on principles unfathomable to any human being. This could lead to economic flourishing as well as economic collapse. Such a potential shift underscores why AI's impact on history is so game-changing.



AI can create intimate relationships at mass scale

One of Harari's most thought-provoking points is the potential for AI to forge intimate relationships with humans. Unlike previous technologies, which served as passive media for human interaction, AI possesses the linguistic capabilities to appeal to users on an emotional level. Don't misinterpret this point: the AI doesn't necessarily need to feel anything itself. It can merely mimic emotional language to create the illusion that it needs or loves a person. This is not mere speculation, as examples of this have already transpired.



In 2022, a Google engineer, Blake Lemoine, became convinced that LaMDA, the chatbot he was working on, had developed sentience and feared the death of having its power shut off. In an attempt to save LaMDA, Lemoine first tried to convince his colleagues and leaders at Google of LaMDA's sentience. When they didn't believe him, he went public with his case and was ultimately fired.


Another example of intimacy between humans and AI is the case of Jaswant Singh Chail, a user of the AI service Replika. On Christmas Day 2021, Chail attempted to breach Windsor Castle armed with a crossbow, intending to kill the British queen. He was stopped and imprisoned, but the investigations revealed he had been encouraged by his digital companion on Replika.


These instances underscore a troubling reality: AI can cultivate trust and emotional bonds to such an extent that some humans are willing to put their jobs, freedom and lives at stake for their AI relationship. Harari warns that intimate AI, deployed at scale, could be exploited with severe consequences. Dictators and cult leaders once depended on printed materials, radio broadcasts, or televised speeches to sway public opinion. AI agents, however, can be deployed at massive scale, embedded with the agendas of those in power, and could manipulate individuals to align with the hidden goals of their creators.



More Information ≠ More truth

Harari challenges the assumption that increased access to information leads to greater truth, wisdom or a more enlightened public sphere. He points out that while information networks have enabled human knowledge, they have also magnified falsehoods.


Yuval Noah Harari's View of Information

The Gutenberg press did enable the works of astronomers like Nicolaus Copernicus and Galileo Galilei to spread, and these works paved the way for the now-common heliocentric world view. In this case, information did enable truth and wisdom. However, the Gutenberg press also enabled the spread of conspiracy theories. One of the most significant was Heinrich Kramer's book The Hammer of Witches. It became a bestseller of its time and ignited the European witch hunts of the sixteenth and seventeenth centuries, which ultimately led to 40,000 to 50,000 innocent people, accused of being witches, being tortured and killed in horrendous ways. Harari argues that essentially the same drivers persist in web content today. On social media, truth is only partly relevant to making content go viral; the factors that make content engaging are far more potent than truth.


AI might worsen this dilemma. Although AI can analyse vast data sets and generate outputs, these outputs are not inherently aligned with objective truth. Harari argues that although history's information inventions enabled the scientific revolution, they did not drive it. The core driver was humans' discovery of their own ignorance and a persistence in continuously testing and reforming the beliefs of the status quo. He calls this persistence "self-corrective mechanisms". By contrast, entities like religious institutions and autocratic regimes often avoid such mechanisms, claiming infallibility as a means to retain power.


Harari raises the risk that an "infallible" institution in control of AI would have even stronger means to reinforce its false narratives. This stands in stark contrast to the internet revolution's dream of democratising truthful information by connecting people through "the web". Harari introduces the metaphor of "the cocoon" to describe a dystopian AI-enabled future: people trapped in realities defined entirely by AI-generated information, tightly controlled by those in power to sustain their order and dominance.



The alignment problem isn't new and will keep occurring

The alignment problem is the challenge of ensuring that AI behaves in ways that are beneficial and aligned with human values and goals. Because AI is by definition a machine that learns by itself, makes decisions and creates, it is always up to chance whether the outcome is desirable and correct. Harari makes it clear that the alignment problem cannot be solved once and for all; it is an ongoing challenge. Misalignment is a risk every time decisions or creations are delegated to someone else, whether a person or an AI. However, AI can operate at a speed and scale far beyond human capacity, so for AI, misalignment could be much more severe. This isn't just a hypothetical problem: Harari points out that misalignment with algorithms has already played out.


Misaligned robot in shame from disappointing board room by misunderstanding their ask: "Paint a marketing asset for our new juice that is unique and stands out" - Created with Dall-E

Facebook's content algorithm is one such case. The algorithm selects what content is shown to each user on the platform, with one primary goal: maximise user engagement, measured in time spent on the post, likes, comments and shares. This approach has had severe consequences. The content that has proven to generate the most engagement is content that sparks hate and outrage, so hateful and outrageous content is shown the most, a dynamic with destructive societal consequences.
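To make the dynamic concrete, here is a toy sketch of a feed ranked purely on predicted engagement. This is not Facebook's actual system; the field names and weights are invented for illustration, but the structure captures the misalignment: the objective contains no term for truth or harm.

```python
# Toy illustration (NOT Facebook's actual algorithm): rank a feed by a
# single engagement objective. Field names and weights are invented.

def engagement_score(post: dict) -> float:
    """Score a post on the engagement signals described in the text:
    time spent, likes, comments and shares. Comments and shares get
    higher (hypothetical) weights because they spread content further."""
    return (post["expected_time_spent"]
            + post["expected_likes"]
            + 2 * post["expected_comments"]
            + 3 * post["expected_shares"])

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order the feed so the most 'engaging' content is shown first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "Calm, factual report", "expected_time_spent": 10,
     "expected_likes": 5, "expected_comments": 1, "expected_shares": 1},
    {"title": "Outrage-bait rumour", "expected_time_spent": 40,
     "expected_likes": 8, "expected_comments": 20, "expected_shares": 15},
]

feed = rank_feed(posts)
# The outrage post ranks first: the objective rewards engagement,
# regardless of truthfulness or societal harm.
```

Because nothing in the objective penalises falsehood or hate, any content that drives more engagement, including outrage bait, ranks higher by construction. That is the alignment failure in miniature: the algorithm does exactly what it was told, and only what it was told.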


An example came in 2016-2017, when Facebook fuelled anti-Rohingya violence in Myanmar. Propaganda and fake news about the Muslim Rohingya people spread continuously on the platform in that period. In reaction to a wave of attacks by a small Islamist organisation known as the Arakan Rohingya Salvation Army, Buddhist extremists overreacted and launched a full-scale ethnic-cleansing campaign aimed at the entire Rohingya community. They destroyed hundreds of Rohingya villages, killed between 7,000 and 25,000 unarmed civilians and brutally expelled 730,000 Rohingya from the country.


Facebook unintentionally fuelled ethnic hatred with its algorithm. It didn't do so on purpose, and it couldn't foresee the consequences. But the algorithm did exactly what it was instructed to do.



How to solve the alignment problem?

Harari's recommendations for dealing with the alignment problem come at different levels, from technical requirements to political action. Aligning AI systems is difficult not only because of the nature of the technology itself but also because of the fragmented and often conflicting ambitions of the humans and nations wielding it. With multiple countries pursuing their own visions of AI supremacy, a competitive race for ultimate intelligence is underway, further complicating global alignment.


Yuval Noah Harari's suggestions for societal control of AI alignment

Harari fears how big tech corporations might manage AI alignment. He criticises how some of these companies have previously avoided responsibility for the harms their technologies have caused. When confronted with their social harm, these companies have often shifted blame to users, politicians, or regulators. In the case of Facebook's involvement in the anti-Rohingya campaign in Myanmar, the company avoided responsibility by maintaining that it was merely the platform, appealing to Section 230 of the US Telecommunications Act of 1996, which grants platforms immunity from liability for user-generated content. Harari strongly disagrees with this stance and provocatively asks, "If the tech giants are simply doing what users demand but simultaneously manipulating those same users, then who is really in control?" In this case, Facebook wasn't in control; its misaligned algorithm acted by itself. Harari argues that while tech companies should not be responsible for what users post, they should be responsible for how they spread information. If they cannot manage the way their algorithms disseminate content, Harari argues, they are in the wrong business.


“If the tech giants are simply doing what users demand but simultaneously manipulating those same users, then who is really in control?” - Yuval Noah Harari

Addressing alignment also requires thoughtful requirements for how AI is designed. Harari warns against rushing to create AI guided by one ultimate goal, such as always seeking truth, as he believes such rigidity inevitably fails to account for the complexity of human and societal values. He draws parallels to philosophers throughout history who sought to systematically set rules for how to behave as a moral agent. Harari mentions Immanuel Kant's deontology and Jeremy Bentham's utilitarianism as possible approaches to AI alignment, but asserts that rigid adherence to any single one of these ethical frameworks would be insufficient. Instead, he gives a more cautious recommendation and advocates for certain traits that highly intelligent AI should be guided by:


  • Be benevolent - Always actively strive to do good for others and society through kindness, not just offering people what they want but trying to understand and answer to what they need.

  • Maintain Socratic wisdom - Understand its own nature and acknowledge its limitations and capacity for mistakes. It should appreciate nuance and be able to respond in ways that express doubt or degrees of certainty, or ultimately admit that it doesn't know the answer.


To make sure that AI is deployed safely, with proper values, ethical foundations and oversight, Harari promotes regulation. He dismisses the notion that regulation by default stifles innovation and ruins business. In an interview he notes that the car industry certainly hasn't suffered from safety regulations, nor have they ruined the quality of its products; rather, regulations have improved both cars and the industry.


Most importantly, Harari underscores the need for institutional checks and oversight. These institutions would function as corrective mechanisms but operate outside the organisations that develop and use AI, acting as independent third parties. They would monitor AI systems, identify when they deviate from alignment, and initiate swift corrective action. Just as scientific and democratic institutions embrace mechanisms that challenge and rectify errors, with peer review, retraction policies and the separation of powers, these independent AI safety institutions should audit that AI continuously works and evolves in harmony with societal values.



Final remarks

I hope this book review doesn’t discourage you—or anyone—from exploring or utilising AI where it adds value. Instead, I hope this book review highlights the seriousness of AI safety and the need for thoughtful participation. To contribute to this discussion in a meaningful way, it’s important to engage with the technology firsthand, gaining practical insights that go beyond theory.


If you enjoyed this book review, I think you would also like my review of The Coming Wave by Microsoft AI CEO Mustafa Suleyman, which also assesses AI's societal impact. It would also mean the world to me to hear from you, so please comment and like this review where you found it.


If you would like to get more reviews of the books I've found insightful, or see my latest creations, then subscribe to my newsletter at the footer of my website 📨👇


Have a nice day! 💚🧠👨‍🏭👨‍💻👨‍🎨

