By Preston Estep III, PhD and Alexander Hoekstra
Originally written April 2014. Published 2016, A. Aguirre et al. (eds.), How Should Humanity Steer the Future?, The Frontiers Collection, DOI 10.1007/978-3-319-20717-9_5
Abstract
Humanity faces many critical challenges, most of which grow relentlessly in seriousness and complexity: declining quantities and quality of freshwater, topsoil, and energy; climate change and increasingly unpredictable weather patterns; environmental and habitat decline; the growing geographical spread and antibiotic resistance of pathogens; increasing burdens of disease and health care expenditures; and so on. Some of the most serious problems remain intractable, irrespective of national wealth and achievement. Even developed nations suffer from stubbornly stable levels of mental illness, poverty, and homelessness in otherwise increasingly wealthy economies. A known root cause of such broken lives is broken minds. What isn’t widely recognized is that all other extremely serious problems are similarly and equally intertwined with the intrinsic incapacities of human minds—minds evolved for a focus on the short term in a slower and simpler time. Yet minds are simultaneously the most essential resource worth saving, and the only resource capable of planning and executing the initial steps of necessary solutions. There is hope for overcoming all serious challenges currently facing us, and those on the horizon; yet there is only one most efficient strategy that applies to them all. This strategy focuses not on these individual and disparate challenges—which ultimately are only symptoms—but on fixing and improving minds.
Background
Humanity faces many serious challenges. Some already appear imposing, yet grow relentlessly in seriousness and complexity. Critical resources are in decline in much of the world. Quantities and qualities of clean air, fresh water, and topsoil are diminishing [1]. Production levels of critical non-renewable resources (such as oil) have peaked in most of the world. Climate change and increasingly unpredictable weather patterns make regular news. Border skirmishes and wars still break out routinely in many areas of the world. Other notable challenges include environmental and habitat decline, the growing geographical spread and antibiotic resistance of pathogens, increasing burdens of disease (especially among growing numbers of elderly people) and health care expenditures, a potentially catastrophic asteroid strike, and so on.
Immature Science
A recent poll by The Pew Research Center shows that most in the U.S. expect science and technology to come to the rescue—a view likely shared by an increasing number of people in other countries [2]. Although those polled have a favorable view of technological progress generally, the poll also indicates that many specific advances are regarded with suspicion or even trepidation. This dichotomy reveals the uneasy historical relationship between a general perceived need for betterment and the implementation of potentially disruptive specific ideas or technologies. Even the practice of science itself had trouble gaining initial traction, since it historically required that a single individual propose a new idea that challenged prevailing orthodoxy.
Modern discoveries in genetics show us that human populations separated and have lived in essential isolation from each other for at least 50,000 years, and we know that people from all separated branches of the family tree are able to do science [3]. It is very unlikely that separated human populations experienced universal convergent evolution toward scientific ability, and much more likely that humans at that time of divergence were capable of science. Yet the age of modern science is probably less than 500 years old—only about 1% of the time since populations split. Understanding why science is so unnatural, and why it took so long to emerge, tells us much about human nature and our inherent resistance to change. It also helps us chart our best possible course to the future.
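As a rough check of that figure: 500 years of modern science divided by roughly 50,000 years since those populations diverged gives 500 / 50,000 = 0.01, or about 1%.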
Science and engineering are considered inseparably intertwined in the modern world, but this hasn’t always been so. Engineering was quite advanced prior to modern science. For several thousand years, humans have been designing and building amazingly complex and sophisticated roads, bridges, aqueducts, buildings and amphitheaters. Consider the Egyptian pyramids—feats of exceptional engineering. They are over 4500 years old, and even far older monuments and artifacts stand as persuasive testimony to the very long history of engineering. Effective tools and weapons were being made well over 1 million years ago. So why is science so young? Let’s begin at the official beginning.
Though exact dates are disputed, it is a generally held convention that the year 1543 launched the Scientific Revolution [4]. Andreas Vesalius published the first modern work of scientific anatomy and Nicolaus Copernicus published his revolutionary claim that the earth orbited the sun, rather than the other way round. Copernicus withheld publication of his heliocentric theory for many years—until 1543, the year of his death—because he feared the repercussions. Copernicus had very good reason to fear, and even if he’d lived another century he might have chosen the same course. Galileo Galilei, whose observations from the early 1600s supported the Copernican theory, was dealt with harshly by the Roman Catholic Church; he spent almost the last decade of his life under house arrest, dying in 1642. Important advances in science and mathematics were made throughout Europe for the remainder of the 17th century, most notably by Sir Isaac Newton, but Newton and other scientists were very guarded about their religious views and were very careful to explain away any possible contradictions their findings might present to accepted religious orthodoxy. In 1697, Thomas Aikenhead became the last person hanged for blasphemy in Britain. The 18th century brought further, but still slow and gradual, change in perceptions of science.
Over two centuries after Galileo’s death, and a century and a half after Aikenhead’s execution, Charles Darwin—like Copernicus three centuries before—feared the repercussions of his revolutionary ideas, and delayed publication for as long as possible. Darwin might have followed Copernicus’ example, and waited until death was imminent to publish his theory, but a letter from Alfred Russel Wallace, describing his own formulation of essentially the same theory, compelled Darwin to publish. He did so fretfully, fully aware of the still-restrictive social climate and history of persecution—and even execution—of those who dared contradict official church dogma. The newness of science can be more fully appreciated through another development during Darwin’s life: when Darwin began his famous voyage on the Beagle in 1831, the term scientist didn’t even exist; it was only in 1833 that William Whewell coined the term [5].
These historical details underscore the recency of modern science, and strongly suggest at least one powerful reason why it took so long to take hold: people feared contradicting powerful religious dogma. But is that explanation fundamental, or is there a deeper level to this mystery? And why does opposition to certain scientific findings increase as supportive evidence does, as happened in the Galileo case, and as is happening even today in some areas, most notably evolution? Fundamental and retrospectively obvious discoveries are still made, and their apparent obviousness forces people to wonder how they remained undiscovered for so long. Many of those who fruitlessly prospected the same intellectual territories but habitually overlooked the now-obvious riches were secularists and even self-described atheists.
Is it possible that conventionalism, rather than religion per se, is the more fundamental problem? We can’t ignore such strong evidence—maybe not pointing away from religion so much as pointing toward more fundamental human limitations as ultimate motivations for persecution of ideas that catalyze social upheaval. When important truths lie long undiscovered, and we are seduced into wondering how so many could have been so blind for so long, we should take a moment to realize that a vast treasure of undiscovered truth still lies in plain view before us all. The now obvious wasn’t at all obvious a short time ago, and the completely non-obvious will soon be obvious—that is, once someone has done the difficult work of overcoming the innate conventionalism of the human mind.
A Mind Lost in Time
The fact that science is so young has important implications for our future. Most importantly, it provides convincing testimony that human minds are not good at science. Some minds are better than others at science, but the basis for a substantially better future is the acknowledgment that the human mind in its current form is insufficient for certain critical challenges now facing humanity. Albert Einstein, who is considered one of the greatest scientists in history, remarked (during the year following the atomic devastation of Hiroshima and Nagasaki) that “a new type of thinking is essential if mankind is to survive and move toward higher levels” [6]. James Watson, the co-discoverer of the structure of DNA, directs characteristically blunt criticism at scientists, saying “most scientists are stupid.” Watson explained: “Yes, I think that’s a correct way of looking at it, because they don’t see the future” [7]. Understanding the present well enough to predict the future with reasonable accuracy is an extremely important type of intelligence, and it contributes to good science. Nevertheless, Watson’s relative standard (judging scientists only against one another) excuses the failings of the better scientists. Again, humans are not good enough at science, and that means all humans. This point is sure to be contested, but alternative explanations are very weak or simply unacceptable.
Those who counter that some people are sufficiently good at science must confront the unavoidable ethical dilemma accompanying such a belief: they either don’t believe science has the power to fix human problems and assuage suffering, or they don’t care to assuage it*. Generalizing from the abundance of caring scientists we know leaves only one explanation consistent with all evidence: human minds as they currently exist are not capable of effecting our most desirable present and future. When we consider that our future depends fundamentally on our minds, both the challenges and the most efficient solution are made clear.
Here is a key question: why should we try to cope with modern, complex civilization using brains provided by nature for use in a simpler time; brains that have been shaped and constrained by forces that are either irrelevant or quickly becoming so? For example, consider the expense of brains over evolutionary time. The human brain is very large for body size, relative to other species, and countless women have died in childbirth (and still do) as the size of the brain increased well beyond the typical ratio found in other species. Both fetal head size and the additional food energy required in the mother’s diet ensured that in utero brains were under strict constraints, constraints that have only recently become more relaxed.
Furthermore, the adult human brain is about 2% of total body weight, but generally consumes more than 20% of daily food energy intake. As a result, making a bigger brain has been very expensive over evolutionary time. Harvard anthropologist Richard Wrangham has advanced the compelling hypothesis that fire was of primary importance in human evolution because cooking allowed a quantum leap in the amount of energy obtained from a given piece of food [8]. He suggests that this critical advance helped to launch a phase of rapid evolutionary change in the size and power of the brain. Several important elements needed to be in place in order to discover and exploit fire, but one of them was sufficient intelligence, and that type and level of intelligence was further amplified by the reliable domestication of fire.
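To put those two figures together: tissue that accounts for about 2% of body mass but consumes about 20% of daily energy intake is, gram for gram, roughly 0.20 / 0.02 = 10 times more metabolically expensive than the body’s average tissue.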
This general strategy of developing and using technologies has ultimately leveraged existing intelligence into incrementally higher types and levels of intelligence over evolutionary time. Such “bootstrapping” has been selected for because there are reproductive rewards that accrue to an organism able to adapt quickly to new niches, or even able to create or modify existing niches to better suit its existing biological limits. These are essential features of what we think of as higher intelligence.
Even though this process requires expensive fuel for nature’s tinkering on the brain, sentient life’s most metabolically costly organ, that expense reduces the cost of useful information. This inverse relationship must have been fundamental to the evolution of cognition, and it suggests a question: is there a point in the evolutionary process where useful information becomes so costly that the price of building a better brain is too high? The answer must be yes. Even a large and powerful brain is confronted by challenges that are potentially rewarding, but for which optimal answers cannot be found soon or in the local environment. Even for countless simpler problems, the set of possible solutions is infinite and only some are practical and efficient. Random trial-and-error explorations of an infinitely large “solution space” will not often be rewarded. There are many types of information that might benefit us, but many are extremely expensive to both acquire and maintain. Given that brains are expensive, and that information can be both difficult to acquire yet extremely valuable for survival and reproduction, there will exist a constant tension—an unbridgeable gap—between what we have and what would benefit us more.

UCLA anthropologist Rob Boyd and UC Davis evolutionary sociologist Pete Richerson have extended economic theory into the study of evolution, focusing primarily on the acquisition of knowledge. Boyd and Richerson’s “costly information hypothesis” is premised on the idea that when information is costly to acquire, it pays to rely upon cheaper ways of gaining information, and these are generally obtained through social interaction and instruction [9]. Note that their hypothesis is essentially just another way to say brains are expensive, except that they focus on the cost of information rather than the cost of the mindware (in this case, brains) needed to process that information.
In general, it is cheaper to learn from or mimic someone else’s sequence of words, actions or expressions than to learn a complex behavior by experimentation. When information is dangerous, time-consuming, or difficult to acquire and process, learning by mimicking others will be selectively advantageous. Such a strategy for acquiring new information has obvious implications for adherence to convention, and for constraining innovation, including in the sciences. Boyd and Richerson have built a very solid formal foundation for this theory, and they make a compelling case that it explains many apparently maladaptive behaviors. As we consider the evolutionary tradeoffs that have shaped the human mind, and acknowledge that essentially all the evolutionary constraints and costs of building better brains and other thinking machines have declined substantially or disappeared, we are left to ask again, why should we continue to struggle to get by with brains mismatched to the complex world we now inhabit?
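To make this tradeoff concrete, here is a minimal, purely illustrative sketch in Python. It is not Boyd and Richerson’s formal model; the payoff structure and every parameter value are hypothetical, chosen only to show how a rising cost of first-hand information tips the balance toward copying others.

```python
# Toy illustration of the "costly information" tradeoff.
# Hypothetical numbers; not Boyd and Richerson's formal model.

def individual_payoff(benefit, info_cost):
    """First-hand learners end up with accurate information,
    but pay the full cost of acquiring it."""
    return benefit - info_cost

def social_payoff(benefit, copy_cost, p_model_accurate):
    """Copiers learn cheaply from others, but the copied behavior
    is appropriate only with some probability (it may be outdated)."""
    return p_model_accurate * benefit - copy_cost

if __name__ == "__main__":
    benefit = 10.0           # payoff for behaving appropriately
    copy_cost = 0.5          # cheap: just watch and imitate
    p_model_accurate = 0.8   # chance the copied behavior still works

    # Sweep the cost of acquiring the information first-hand.
    for info_cost in (1.0, 2.0, 3.0, 4.0, 5.0):
        ind = individual_payoff(benefit, info_cost)
        soc = social_payoff(benefit, copy_cost, p_model_accurate)
        better = "copy others" if soc > ind else "learn first-hand"
        print(f"info_cost={info_cost:.1f}: individual={ind:.1f}, "
              f"social={soc:.1f} -> {better}")
```

In this toy setting, copying overtakes first-hand learning once the acquisition cost exceeds the expected loss from occasionally copying outdated or inaccurate behavior, which is the basic intuition behind the costly information hypothesis.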
A Fundamental and General Solution
Typical proposals for reducing the impact of problems faced routinely by people in all parts of the world focus on treating symptoms rather than root causes. There is often no commonality of goals, and no sharing of resources, among the efforts aimed at each item in the litany of serious problems facing humanity today. In fact, the opposite is true: many strategies for solving disparate challenging problems compete for funding and attention. It would be highly beneficial for people to begin thinking more efficiently, cooperatively, and synergistically, and to seriously consider more fundamental solutions that can be applied to problems more generally.
The most efficient and general solution to all human problems is to enhance our fundamental abilities to solve problems. A dizzying multitude of technologies have been developed for enhancing our physical selves and environments. Tools and techniques have been created to feed, clothe, and care for our material wants and needs. We have, with machines of human design, wrangled rivers and moved mountains; we routinely fly people around the globe and sometimes even into space; we have tapped the planet for its finite bounty, to suit our immediate desires. But this enhancement of humankind’s physical abilities has expanded at a greater rate than our capacity to wield such power responsibly, and to foresee the long-term consequences. Only recently—only through this young mode of problem solving that we call science—has a realistic approach to enhancing our innermost selves become conceivable.
Increasing and refining human abilities to solve problems is not a new endeavor. Modifying the mind is a practice visible in every classroom around the world. The act of instruction originated before recorded history, and indeed, before humanity. Learning through traditional means physically changes the structure of the brain, but is slow and inefficient. A complete professional education, from primary school through college and then graduate school, is expected to take well over two decades. Education is the best technology currently available to alter human minds, but it is demonstrably too slow and too narrow to address and surmount the complex threats we face. Education is alteration, but it is not enhancement; it falls short of fundamentally augmenting the evolved potential or upper limits of the mind.
Many leading scientists and technologists recognize the fundamental importance of better problem-solving abilities and favor the pursuit of Artificial Intelligence (AI). We agree that AI is important and we regard some AI efforts as extensions of human minds. And we accept the general classification of natural and artificial intelligence under the umbrella term “mindware” [10]. However, the creation of “standalone” AI that has its own interests and goals, potentially separate from those of humanity, is an uncertain proposition that has unsettled many futurists. A primary worry is that AI will view humans as short-sighted, irrational, and excessively aggressive, and will arrive at the only possible logical deduction: extermination of humans in the name of self-preservation. To circumvent such an outcome, AI might be created with an immutable friendly bias towards humans, or with an absolute dependency on human caretakers or symbionts. But these systematic constraints must be perfectly inviolable, and whether they are will depend completely on the mental capabilities of the systems’ creators.
If AI is created carelessly, we agree that the probability of the AI doomsday scenario is substantially greater than zero. But we disagree with an exclusive focus on limiting the abilities or powers of AI in response to the dangers that humans pose to it, since humans are a more general threat, no less real or serious outside the context of AI. Those who worry about the AI doomsday scenario, but who focus exclusively on the AI side of the equation, implicitly validate our belief that the limits of human minds are the more fundamental problem. This can be seen in the apparently paradoxical answer to the question of whether the human mind or AI is more unpredictable and dangerous to the human future. If we believe an AI would be a highly astute judge of risks, then even an answer of “AI” betrays a belief in potentially catastrophic human mental limitations, because an astute AI would become dangerous only in response to the threat posed by flawed human minds.
This logical form can be generalized universally to reduce the complexity of the landscape of vexing challenges and proposed solutions. Most relevant in the context of this essay contest, concerns about the future of humanity and civilizational risk** reduce to a more fundamental concern that our minds are insufficiently able to appreciate and/or handle the challenges before us. Better mindware is arguably the only technology capable of counteracting the myriad complex obstacles, problems, and threats facing humanity (including, or especially, those in which humanity played a contributing role), and better human minds are indispensable even to the pursuit of a general AI. Thus, better minds provide a truly fundamental and general solution, and to our knowledge, no other problem-solving approach is worthy of such a claim.
The Path to a New Mind
Some of the most threatening global problems have remained tenaciously intractable over the past decades, irrespective of national wealth and technological achievement. Even developed nations suffer from stubbornly stable levels of mental illness, poverty, homelessness, crime, and incarceration in otherwise increasingly wealthy economies. Many interventions have been tried in an effort to reduce poverty and homelessness, including provision of social services, food allowances, housing benefits, employment resources, various kinds of training and education for all age groups, so-called microloans and other loan guarantees, and so forth. But careful research shows that the primary driver of apparent cycles of social ills is the mind: mental health services improve social conditions, but improved social conditions do not improve mental health and functioning [11].
Mental health research and treatment represent a gateway to the unprecedented and uniquely important enhancement of human minds. Technologies spanning the fields of genetics and genomics, synthetic biology, neuroimaging, brain-machine interfaces, and others are becoming increasingly powerful, with immediate applications for understanding and treating mental dysfunction and disease. However, these developments are relevant beyond treating mental illness. Given that even the most “normal” human mind is in many ways disabled by naturally imposed limitations, research focused initially on mental illness can provide an entrée to a more general research platform for mind engineering. This engineering provides a possible escape from outdated and destructive cognitive constructs, which produce and exacerbate human suffering and civilizational risk—but we must be very careful in the design and creation of new and better mindware.
It is essential to recognize that the limits of even normal or high-functioning human minds are not only quantitative, e.g. processing speed or memory capacity; minds are also limited qualitatively in the kinds of biases they exhibit and the types of errors they make. Daniel Kahneman’s 2011 book Thinking, Fast and Slow became an instant classic in human psychology and decision making [12]. In it he reviews a wide range of empirical tests of beliefs and behaviors, and concludes that people exhibit many biases, including a “pervasive optimistic bias,” which he says might be “the most significant of the cognitive biases.” While such a bias might seem preferable to others, Kahneman says that it regularly results in unrealistic and costly decisions. Decades of research support Kahneman’s claim that the optimistic bias is pervasive. In 1969, Boucher and Osgood suggested that languages have an inherent positive bias [13], and as of 2014 this hypothesis had been confirmed in all languages tested [14].
How can this be? How might evolution reward unrealism, ultimately producing a mind that creates an internal mental image that is discordant with external reality, and even with its own knowledge of itself? In a now-famous foreword to the 1976 First Edition of Richard Dawkins’ The Selfish Gene, evolutionary biologist Robert Trivers established a new perspective on how evolution shapes the mental realm. He wrote “If … deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray—by the subtle signs of self-knowledge—the deception being practiced” [15]. Harvard psychologist Steven Pinker suggests that this single sentence “might have the highest ratio of profundity to words in the history of the social sciences” [16]. Trivers’ catalytic insight helps us to understand how evolutionary forces might create unrealistic and self-deceiving mental architectures, wherein unrealism isn’t just a random or unselected trait—or a trait against which selection acts—but a purposely selected trait. Even prior to this important change in perspective, scientists in many areas had provided empirical evidence of the flaws of normal and even high-functioning minds. And in the decades since, many psychologists like Kahneman have provided strong empirical support for this counterintuitive idea that evolutionary selection can favor varying kinds of unrealism—with excessive optimism being only one of many.
These theoretical and empirical revelations about how human minds actually function have profound implications for research and development of both natural and artificial intelligence, but these implications are widely unrecognized or underappreciated. Some have advocated enhancing human intelligence absent apparent concern about rationality or realism. Others have proposed the construction of a general-purpose AI or artificial mindware that is based on the function—and in some cases even the physical architecture—of the human brain [17–19]. However, we are unaware of any realistic portrayal, in these proposals, of human brain function and its intrinsic biases and limitations. In contrast to the common portrayal, the mind has not evolved to produce accurate internal representations of external reality, or even of its own internal processes and views. So, models or emulations of the brain as it exists will not and cannot produce a general-purpose, dispassionate, and realistic problem-solving mindware. A more likely product of such efforts is mindware possessing typical human faults, including routine unrealism and irrationality. What might be the outcome of empowering self-deceiving mindware with superhuman intelligence and powers of self-improvement? One possibility is that it would improve itself on a trajectory of increased realism and avoid causing serious harm in pursuit of unreasonable goals, but we simply cannot predict what course it might take. We take a similarly cautious view of enhancing human intelligence across the existing spectrum of human (un)realism and emotional (in)stability.
These thought experiments highlight the importance of enhancing the traits of existing minds in a preferred order, and of creating AI with certain improvements relative to humans. Space constraints don’t permit a thorough consideration of trait prioritization, but two points are worth mentioning. First, evolutionary forces select for short-term reproduction over longer-term sustainability; therefore, one challenge is to progressively de-emphasize short-term “band-aid” approaches to vexing problems, and to increasingly emphasize long-term approaches for growing and stabilizing civilization. Second, great care must be taken to establish a priority order even for preferred traits; there are few mental traits (maybe as few as one) that should be the initial focus of a trait prioritization plan. Bearing in mind these two points, consider near-term self-centered happiness and long-term rationality as two exemplary complex traits. Envision the enhancement of near-term happiness absent a minimum level of long-term rationality. A reasonable case can be made that such enhancement already exists in addictions to drugs such as heroin or cocaine. Similarly, enhancement of intelligence absent rationality or certain other emotional stabilizers might be equally dangerous to long-term interests of self or others.
Our intention here is not to present—or even begin—an ordered list of preferred traits, but to catalyze discussion, research, and development of better mindware. An important element of that effort is to focus on desirable traits neglected or selected against by evolutionary processes. In that spirit, and in agreement with certain efforts already underway [20], we suggest that long-term rationality is a candidate for initial enhancement efforts. We believe this high-level trait embodies multiple narrower traits, including some consistently overshadowed throughout natural evolution by short-term self-interest: empathy, group interest, and quantitative long-term modeling and prediction, among others. One question in the pursuit of better mindware is “how will we produce mental traits that are beyond current human limits?” We can only offer the observation that the creation of “supernormal” traits obviously occurred routinely throughout evolutionary time, and the belief that such bootstrapping should not be beyond the reach of the best human science and engineering. At each successive step up the scale, supernormalcy will become the new normal, and so on into the future.
Comment and Summary
To answer the question posed by this essay contest, “How Should Humanity Steer the Future?”, rather than provide a detailed plan, we argue that there is a single most efficient overall focus: research and development of better mindware. We thank the many people who conceived, managed, and judged this essay contest, and we hope it provides a watershed moment in the discussion of civilizational risk. The submitted essays provide an excellent resource for advancing this discussion. The central recommendations of the essays reveal a typical propensity, even among highly intelligent and educated people, to treat secondary phenomena (symptoms) rather than root causes, validating one important pillar of our argument. We nevertheless concede that the outstanding prizewinning essays provide compelling reasons for immediate focus on a few critical areas in addition to a focus on mindware. But we are especially gratified to see that, aside from our piece, some other fine entries—including the First Prize essay by Sabine Hossenfelder—focused on the most fundamental determinant of civilizational success or failure: human minds and other mindware. We are confident that our essay took Third Prize because of the superiority of the essays that finished ahead of ours, and not because our premise is unsound.
Minds are central; they are the foundation of humanity’s past, its present, and its future. Human minds are the root cause of all problem-solving inefficiencies, but they are also the only creative engines capable of taking on each of these challenges, and of designing and building a better future. The evolution of the human mind allowed us to rise to a position of pre-eminence on our planet, but a rise to dominance in the past does not presage control over the future. As circumstances change dramatically, so must our thinking—and our ability to think—if we are to survive and thrive indefinitely into the future. All people—especially scientists and engineers—who are interested in building the best possible future must contribute to humanity’s effort to design and build better mindware. This is the greatest challenge in the history of humankind. Among the countless billions of species ever to inhabit planet Earth, ours alone has the privilege of taking this bold step. We owe it to our descendants that they should have more and better than we have, and that they should be more and better than we are; yet they depend completely on us to rise to this challenge.
Footnotes
*The argument that overall progress is slow because science is inevitably slow is a conventionalist fiction that conflates human inefficiencies with scientific ones. Consider the practice of science and engineering at the highest imaginable level (for the sake of argument, consider god-like abilities). We take it as given that a being with such abilities would be capable of assuaging most or all human suffering in short order.
**Civilizational risk is our preferred term for what many call existential risk. It sets a lower bar of concern than existential risk, since we value civilization and many threats to civilization threaten neither individual nor group existence.
References
1. Paulson, T.: The lowdown on topsoil: it’s disappearing. http://www.seattlepi.com/national/article/The-lowdown-on-topsoil-It-s-disappearing-1262214.php. Accessed 8 Apr 2014
2. Smith, A.: Future of Technology | Pew Research Center’s Internet & American Life Project. http://www.pewinternet.org/2014/04/17/us-views-of-technology-and-the-future/. Accessed 19 Apr 2014
3. Armitage, S.J., et al.: The southern route out of Africa: evidence for an early expansion of modern humans into Arabia. Science 331(6016), 453–456 (2011)
4. Scientific revolution—Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Scientific_revolution. Accessed 13 Feb 2015
5. William Whewell (Stanford Encyclopedia of Philosophy). http://plato.stanford.edu/entries/whewell/. Accessed 18 Apr 2014
6. Einstein, A.: Albert Einstein—Wikiquote. Wikiquote (2014). http://en.wikiquote.org/wiki/Albert_Einstein. Accessed 18 Apr 2014
7. Watson, J.: PBS—Scientific American Frontiers: The Gene Hunters: Resources: Transcript (2014). http://www.pbs.org/saf/1202/resources/transcript.htm. Accessed 18 Apr 2014
8. Wrangham, R.: Catching Fire: How Cooking Made Us Human. Basic Books, New York (2009)
9. Richerson, P.J., Boyd, R.: Not by Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press, Chicago (2005)
10. Rothblatt, M.A.: Virtually Human: The Promise—and the Peril—of Digital Immortality, 1st edn. St. Martin’s Press, New York (2014)
11. Lund, C., et al.: Poverty and mental disorders: breaking the cycle in low-income and middle-income countries. Lancet 378(9801), 1502–1514 (2011)
12. Kahneman, D.: Thinking, Fast and Slow, 1st edn. Farrar, Straus and Giroux, New York (2011)
13. Boucher, J., Osgood, C.E.: The Pollyanna hypothesis. J. Verbal Learn. Verbal Behav. 8(1), 1–8 (1969)
14. Dodds, P.S., et al.: Human language reveals a universal positivity bias. Proc. Natl. Acad. Sci. USA 112(8), 2389–2394 (2015)
15. Dawkins, R.: The Selfish Gene. Oxford University Press, Oxford (1976)
16. Pinker, S.: Representations and decision rules in the theory of self-deception. Behav. Brain Sci. 34(1), 35–37 (2011)
17. Kurzweil, R.: How to Create a Mind: The Secret of Human Thought Revealed. Viking, New York (2012)
18. Markram, H.: The blue brain project. Nat. Rev. Neurosci. 7(2), 153–160 (2006)
19. Hawkins, J.: On Intelligence, 1st edn. Times Books, New York (2004)
20. The Long Now Foundation. http://longnow.org/. Accessed 2 Mar 2015