First Sapiens and the Anthropocene

The First Sapiens is us: the ones whose history came to a quiet end in November 2022, at least according to the historian Yuval Noah Harari. The basis for his argument is that all of history as we know it has become quantifiable and digitizable, storable somewhere in the vast ecosystem of AI clouds, never to be challenged for its voluminous content or its meticulous accuracy. So, if he is right and history has ended, what are we left with? The now? Yes. The future? Yes! Can AI mine that future? No. Can it be trained to do so with its learning algorithms and billions of iterations? NO! Is the future different from the past? Yes. Exponentially so.
Let me explain.
We are heading into an era of human existence in which a new form of intelligence will be needed, one far higher than anything Homo sapiens has known. It's the only form of intelligence that can save us and what remains of life on the planet. Unlike the minds that drove the First Sapiens to build better weapons, better shelter, and better living conditions, the new intelligence examines the ethos of these historic endeavors to determine whether and how they have diminished our planet's resilience and her ability to ensure the continuity of life. This is Gaian science, an entirely different level of scientific exploration that has remained dormant: it has yet to be defined, vetted, or validated. It is exponential in nature, and it defies all existing scientific methods. It is a new form of planetary science that must examine the degree to which the First Sapiens and his insatiable appetite for modern life have damaged our planet's ecology. It is the study of ecological collapse at the planetary level, a collapse that moves at exponential speed and that no algorithm or AI model created by the minds of the First Sapiens has the capacity to understand.
Technology solutionists today believe that the refinement of their innovations will help us revolutionize our future. That is the view of the optimistic, brilliant, yet limited mind of the First Sapiens. The Second Sapiens, those who have a planetary-systems view of the future, remain skeptical about how much digital technologies in their current form and content can contribute to stabilizing a world defined by mega-scale systems and exponential change. Nothing explains the binary nature of the First Sapiens better than the digital universe it has created. Information, the lifeblood of the digital age, is in its elemental form binary. Computers store and process data using bits and bytes that can exist in only two states: 0 or 1. The binary system has no innate intelligence; it merely allows for the efficient manipulation and transmission of information that makes all digital devices, from smartphones to supercomputers, operate the way they do. Algorithms that run the world, from the largest supply chains spanning the globe to the biggest financial trading platforms handling billions of dollars' worth of transactions a day, are all encoded as binary sequences.
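A minimal sketch makes the point above concrete: whatever digital information represents, it reduces to the same two-state substrate. (The `to_bits` helper below is purely illustrative, not drawn from any particular library.)

```python
# All digital information, whatever it represents, reduces to 0s and 1s.

def to_bits(data: bytes) -> str:
    """Render raw bytes as a string of binary digits."""
    return " ".join(f"{byte:08b}" for byte in data)

# The same binary substrate encodes text and numbers alike.
print(to_bits("Hi".encode("utf-8")))     # 01001000 01101001
print(to_bits((42).to_bytes(1, "big")))  # 00101010
```

The "intelligence" lives entirely in how those sequences are arranged and interpreted; the bits themselves carry none.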
The digital age that has disrupted so much of our lives, for better or for worse, is fundamentally built on binary information storage and processing. It is the different combinatorial possibilities and iterations, the creative sequential and algorithmic programming, that give us the rich complexity of the world and its ever-expanding technology ecosystem as we know and experience it today. But what happens when the most advanced form of AI perceives patterns or objects that are nonexistent and altogether inaccurate? What happens when the computer models designed to help navigate the Anthropocene must do so with insufficient data from the higher-complexity science that is proprietary to that stage of development? Why do computer models still fail to accurately predict the annual rise in global temperature, the rate of ecological collapse, and how quickly polar ice is melting?
Based on past experience, there is little doubt that we can work out the bugs in our current systems of knowledge. We will find AI-based cures for all types of diseases by expediting the processing of genomic data and creating tailored treatment plans unique to each individual. But how do we know what the bugs are in a system whose data is of a completely different order and remains emergent, changing unpredictably in real time? It took scientists over twenty years to map the human genome, which now makes it possible for AI to mine that data in a fraction of the time it took researchers to uncover it. What Earth-systems knowledge base can programmers use to establish reliable patterns that AI can mine so we can predict our future within reason? Unlike the current ways AI gathers data, will programmers be able to train their models to gather data from the future, data that doesn't yet exist? AI could not create those tailored treatments derived from the genomic knowledge base were it not for the extraordinary worldwide commitment to fund and support the Human Genome Project for over two decades. Similarly, AI's role in helping us resolve problems in the Anthropocene epoch must be preceded by investments and long-term commitments intended to quantify the nature of the Anthropogenic-Gaian sciences.
Much of this new science is yet to be uncovered, and, owing to its complex, highly interdisciplinary, and collaborative nature, the patterns of its emergence remain largely unpredictable. This poses a challenge to computer programmers attempting to train their data models to follow identifiable patterns when the science that creates those patterns has remained beyond quantification and far beyond the linear and binary grasp of First Sapiens intelligence. Given what remains unknown about the Anthropocene, would computers hallucinate answers the way ChatGPT and other generative AI models do today, and would such hallucinations create more chaos and misinformation, derailing the progress we've made in moving the needle on the Gaian-sciences learning curve? If we acknowledge this as our new reality, then the question for technology solutionists becomes: How can we build predictive training models, in the form of machine learning, that help address Anthropocene issues from a knowledge base that has been greatly shaped, defined, and constrained by the deficient motivations of the First Sapiens?
I found two possible answers to that question in the work of two individuals who are not technology solutionists but who think in systems. The first came from Harari, who was quick to qualify what he meant by the end of human history based on his own perception of the evolutionary stages of Homo sapiens. In his 2017 book, Homo Deus: A Brief History of Tomorrow, Harari argues that we have sidelined deo-centrism, the worship of an outer god, in favor of homo-centrism, the worship of ourselves, and that the next stage of evolution would sideline homo-centrism in favor of data-centrism. In interviews and lectures given since the release of the various large language models, Harari has defended his views on the end of human history by claiming that the operating system of human culture is language. It is from language that we have created human narratives such as myth, law, art, and science, and these are the things that build civilizations. By gaining mastery of language, Harari believes, AI has acquired the master code to human civilization. In some sense, this development could represent what has long been feared and debated: the technological singularity, in which machine intelligence surpasses human intelligence and forever changes the trajectory of our future.

Unlike past narratives depicted in science-fiction movies, large language models are not violent machines that subjugate human civilization through blood and gore. Rather, they do it through soft skills that affect the mind. They do it by telling alternative stories generated by their algorithms, which first and foremost seek to maximize profits and valuations for the companies that create them. The end of human history based on this data-centric narrative is far more dystopian than science fiction can imagine. It will be brought about by us, disintegrating from within. Generative AI, like its older kin, social media, will continue to exploit our weaknesses, our biases, and our addictions. It will continue to assemble language that vastly expands our social and political polarization, undermines our mental health, and unravels our democracies. Without wise regulatory structures that see through the mirage of technology and learn how to transcend and use it wisely, human history could end the way Harari defines it.
From the vantage point of my work in human and social development, a data-centric humanity in a free-market economy that monetizes data is nothing more than an extension of the free-market ethos operating without government supervision. Before large language models, language spoke to different people and different cultures at all stages of development. We fought against the dark forces of the unhealthy side of these stages to unshackle ourselves and move up to higher levels of psychological freedom. This is the nature of the evolutionary process that enables our spirit, our never-ending quest, to continue. It does not signify the end of human history; it is the transcendence of the First Sapiens, more specifically of stage five in First Sapiens development, which seeks to manipulate the world with its reductive sciences and algorithmic modeling.
For us to tap into Second Sapiens intelligence, there needs to be a global ecology of wise governance that is in tune with our environmental and digital challenges and not beholden to the values of the industrial age and neoliberal economics. The ideal candidates for this crucial transition will be those who see the simplicity beyond the algorithmic complexity. Tristan Harris and Aza Raskin, the cofounders of the Center for Humane Technology, along with the 1,100 technologists who in March 2023 signed an open letter calling for a moratorium on AI development, would be ideal candidates to fill the technology-regulation part of that form of governance. Becoming part of that complex adaptive system can transform the end of human history into informational units that serve the Anthropocene.
Harari's narrative on the evolutionary sequence of Homo sapiens led me to search the unpublished archives of Clare Graves, the academic behind the model I use in my work. I wanted to explore his views on technology and the role it plays in our psychosocial, evolutionary process. That is where I found the second answer to my question of how we can build predictive training models, in the form of machine learning, from a knowledge base constrained by the deficiencies of the First Sapiens. Unlike Harari, Graves was very cautious in predicting the precise details of our future, especially when that future entails our ascendance into Gaian Second Sapiens intelligence, known for its exponentially higher degrees of neurological and psychological activation. Here is the place where we must examine the failures inherent in the reductive First Sapiens sciences that have contributed to ecological collapse.
Graves believed that while human development at stage seven, shown in the table above, will represent an exponential growth in intelligence, technology will remain only a quantitative extension of the lower stages of development. He made that prediction in the late 1970s. In examining how his hypothesis has withstood the test of time, one might think that the advancements in artificial intelligence and machine learning that were beyond Graves's grasp at the time would have rendered his thinking obsolete, but that may not be the case. As complex as the digital world is today, with all its complicated iterations and the various creative programming that gives it form, the best it can do is mine knowledge of the human experience that is part of our present and our past. Even with its predictive powers, it cannot give us a reliable, nonlinear representation of the future, especially if that future represents a partial reversal of the past and is defined by an exponentially higher level of psychosocial intelligence that seeks to preserve what remains of planetary life.
Generative AI and other forms of machine learning will continue to expand our intellectual rigor and raise our cognitive intelligence. They will even help us articulate some Second Sapiens concepts, but these improvements are quantitative and will come at a high cost: the more machine learning we rely on, the more we will lose our uniquely human qualities. Virtues such as emotional and spiritual intelligence become diluted in an ecosystem designed for a data-centric society. We are becoming less and less equipped to handle uniquely human problems at a time when we most need to do so. Ultimately, when the time comes to transcend the ideology of data-centrism, we will realize that the idea of a technological singularity in which machine intelligence surpasses human intelligence is a fallacy, and that AI will reveal to us what Homo sapiens is by revealing the things it cannot do.
The more our Anthropogenic reality comes into focus, the more it will become necessary for us to reverse the corrosive aspects of our present and past and create new ways of being and thinking. The intelligence that defines those virtues is just beginning to emerge and will eventually serve as the new reservoir of knowledge in which machine intelligence is recognized for what it is: a utility subordinate to the human wisdom that helps all life forms on the planet survive and thrive.