Subtitled “Why We Never Think Alone”
(Riverhead Books, March 2017, 296pp including 30pp acknowledgements, notes, and index.)
This is a book that I’ve thought of as a companion to the O’Connor/Weatherall book I just reviewed, ever since the two began sitting together on my TBR shelves six or seven years ago, when they were published. Their themes seemed adjacent, their dustjackets are similar, and each was written by two coauthors, none of whom I’d heard of before.
This one, too, has an appealing, counter-intuitive premise. Many people like to think they have a good general knowledge of things, and often particular knowledge if they are some kind of specialist or expert. In fact, most people know far less than they think they do, even about subjects they *thought* they knew. This touches on two of my running themes:
1, The world is more complex than most people think, and no one can know more than a tiny fraction of it. Further, it takes years to become expert in a complex subject, even as the world becomes more and more complex; and so experts in one field are just as ignorant as the average person in every other field. (This general observation is why it’s absurd and presumptuous to think that someone can “do their own research” and, in 20 minutes or an hour, come to some useful conclusion that generations of earlier people — the genuine experts — somehow missed. Doing that is motivated reasoning and confirmation bias; all you find by googling the web is what other people with similar biases want you to find.)
2, There’s a famous quote by Robert A. Heinlein that runs thus:
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
To which I say: nonsense, at least in the contemporary world. People can’t help but specialize in one thing or another. Heinlein’s statement might, might apply to a frontier military culture of hundreds of years ago (or perhaps to a frontier planet culture, as in the book the quote is from), but it doesn’t apply now. On the other hand, there is a set of skills it would be nice if everyone were conversant with in order to thrive in a modern, interconnected, multicultural society (only a couple of which match Heinlein’s list): how to manage finances, how the government works, how to avoid con men and conspiracy theorists, how to understand probability and statistics, and so on. But that’s another topic.
\
This book, then, would seem to be about how we all live in an interconnected world and depend on exchanging knowledge with each other. We can’t all be financial experts or heart surgeons, but such people are available when we need them. Just as we’re available to provide our own expertise, whatever it might be, to those who can use it.
And like the book I just read, this one does include my going-in notion, but it also adds layers of meaning and new perspectives on familiar ideas. At one point it even states the notion I was looking for in the previous book — that misinformation persists because changing one’s mind would invite ostracizing social pressure. (Which is why I’ve said: you can’t think in any kind of big group; avoid stadium crowds, church congregations, and political rallies if you want to think on your own and reach any kind of objective truth.)
\
So once again I’ll copy all my notes and comments about this book into this blog post, then polish and trim them down, then return here to summarize key points.
There are three big themes: Ignorance, the illusion of understanding, and the community of knowledge. Here’s a line or two about each of the twelve chapters and the conclusion.
- People overestimate how much they know, even about ordinary things; the world is more complex than people appreciate
- We don’t remember much detail because we don’t have to; the brain evolved to solve problems of living, not to retain information
- We think in causal terms, but even this is limited; we tell stories to teach ourselves how the world works, and to make causal sense of sequences of events
- Our naive understanding of physics doesn’t correspond to reality; we know just enough to get by, within the realm of daily experience. Intuitive answers are usually wrong; deliberation is best done with other people
- We relegate much of our knowledge of the world to the world itself, because the world usually does make sense (within the realm of daily experience); thus the brain is in the mind, not vice versa
- Hunting was a communal exercise, and humans have evolved to cooperate and collaborate in a cumulative culture; big advances require teams of thousands of people; most of what individuals believe about the world is what they’ve been told; we rely on experts to build everything around us
- Technology extends thought by expanding knowledge held in the world, but it has no intentionality, it does only what is programmed; crowdsourcing works well when experts are encouraged to participate
- Opposition to science comes partly from lack of understanding, but also due to social pressures that inhibit people from changing their minds when new information becomes available; the causal models we think in terms of can result in many intuitive ideas that are wrong, and lead to rejection of technology and scientific conclusions
- In politics, the less informed people are, the stronger their opinions. It’s easy to use the internet to validate what you already believe. And people resort to ‘sacred values’ to avoid thinking about practical consequences; but strong leaders listen to experts.
- Being smart isn’t about IQ, or any particular definition of intelligence; it’s more about the degree to which an individual contributes to group success
- Becoming smart involves activities as well as knowledge, and trust in the institutions and principles of science; if a claim is false, it will eventually be found out; consider who makes a claim, where it was published, and who profits.
- People don’t want too much information; it’s better to think communally, and rely on community beliefs of how things work
- There will always be ignorance, it’s inevitable; and people aren’t very good at assessing their own ignorance (Dunning-Kruger). Nor can we avoid illusions, which we live with intentionally much of the time, and perhaps they are necessary for the development of human civilization. Intelligence resides in the community, and society has come far because most people cooperate most of the time.
\\\
Full summary with comments
Introduction: Ignorance and the Community of Knowledge, p1
Authors recall a fusion bomb experiment in 1954 in which the blast was larger than expected, by a factor of 3. Why the miscalculation? A fundamental paradox of humankind: we’ve accomplished so much, yet still are capable of hubris, of error. How have we mastered so much despite how limited our understanding often is?
P4, Thinking as Collective Action. Cognitive science emerged in the 1950s, and includes learning about what individual humans *can’t* do. Human minds are not like computers; our intelligence resides in the collective mind. We rely on knowledge stored elsewhere, especially in other people.
P6, Ignorance and Illusion. “Most things are complicated, even things that seem simple.” Consider toilets. Can you explain how they work? People aren’t ignorant, but they’re more ignorant than they think they are. But is it important to know such things? How about within your area of specialty? Example of why Japan attacked a naval base in Hawaii. Details and context get lost over time as myths emerge that simplify and make stories digestible. Most people have no time to study very many events. Why do we live in an illusion of understanding?
P10, What Thinking Is For. There are various reasons why thought could have evolved; they all serve the larger purpose of enabling action. Evolution selected those organisms whose actions best supported their survival. Mental faculties that can process information are the best tools. How do people think? They reason about how the world works, about causality. And thought is masterful at extracting only what it needs, and filtering out everything else.
P13, The Community of Knowledge. We live in a world surrounded by knowledge. We have experts we can consult. Books. The internet. We have access to more knowledge than ever before. And different people develop distinctive areas of expertise. We collaborate. We share. We share intentionality. The knowledge illusion comes from not seeing what is inside and outside our heads. We don’t know what we don’t know.
P15, Why It Matters. All of this should help us understand our limitations, and how much we should rely on experts vs individual voters. Currently there is immense polarization on the American political scene. Both politicians and voters don’t realize how little they understand. Complexity abounds. But people tend to affiliate with social dogma. 16b. The community shapes our beliefs and attitudes. Page 16:
Instead of appreciating complexity, people tend to affiliate with one or another social dogma. Because our knowledge is enmeshed with that of others, the community shapes our beliefs and attitudes. It is so hard to reject an opinion shared by our peers that too often we don’t even try to evaluate claims based on their merits. We let our group do our thinking for us.
We share biases, e.g. about heroes who save the world all by themselves, and how individuals are given credit for major breakthroughs. But nobody in the real world operates in a vacuum. People generally work in groups. Technology becomes so complex no individual understands it, e.g. a modern airplane. Other implications: we tend to operate in teams.
One, What We Know, p19
Accidents happen all the time, in part because we overestimate our understanding of how things work. This has been studied. People were asked how well they thought they knew something, and then were asked to explain it. Zippers. Their estimate of their knowledge drops after they realize they can’t explain it. Another example about bicycles.
P24, How Much Do We Know? Can we estimate this? Thomas Landauer tried to do this in the 1980s. He made estimates in various ways, which converge on around 1 gigabyte. Relatively puny compared to computer memory. Our minds are not computers; only when we think deliberately do they resemble one. [[ this is Kahneman’s System 2 ]] Most cognition consists of intuitive thought beneath the surface of consciousness. Consider different sources of complexity. Cars are too complicated to tinker with. And much other modern tech. The natural world is even more complicated. Consider a biology textbook. Examples, 29. The weather. Some things aren’t understandable even in principle. Military strategy. Rumsfeld’s unknown unknowns. 9/11. Stock trading. Earthquakes. Fractals are complex however closely you look at them. Coastlines. Even everyday objects. A hairpin. Combinatorial explosion. Chaos theory.
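Landauer’s figure can be sanity-checked with a back-of-envelope calculation. The numbers below (retention rate, waking hours) are my own illustrative assumptions, not Landauer’s published figures, but they land in the same ballpark:

```python
# Rough, Landauer-style estimate of lifetime human knowledge.
# Assumed numbers for illustration: ~2 bits/second retained while awake,
# 16 waking hours a day, over a 70-year span.
bits_per_second = 2
seconds_awake_per_year = 16 * 3600 * 365
years = 70

total_bits = bits_per_second * seconds_awake_per_year * years
total_gigabytes = total_bits / 8 / 1e9
print(round(total_gigabytes, 2))  # 0.37, the same order of magnitude as Landauer's ~1 GB
```

The point survives any reasonable choice of inputs: a human lifetime of learning fits comfortably on a thumb drive.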
P35, The Allure of Illusion. And so we live a lie; we ignore complexity by believing we know more than we do. Kids asking why. Until they stop asking questions. Later we’ll ask how we can manage complexity.
Two, Why We Think, p37
Would you want to have a perfect memory? Borges considered this, in “Funes the Memorious.” In 2006 a patient at UC Irvine was described as similar — she couldn’t forget anything, a condition called hyperthymesia. It’s not that impressive that the brain can store so much information; but why can’t we all do this? Because the brain evolved for different reasons, to solve specific kinds of problems. The patient called her condition a terrible problem.
P40, What Good Is a Brain? What problem did the brain evolve to solve? Animals have brains; plants do not. Animals need to take sophisticated actions. The Venus flytrap is a sort of exception among plants, just as the sea sponge is an animal that does *not* have a brain. And jellyfish have only the most rudimentary nervous systems.
P42, The Discerning Brain. Example of Atlantic horseshoe crabs, crawling on the beach in order to mate, as they have been doing for 450 million years. The crab’s brain is so simple it can be understood. Visual perception is important to it. Why would this evolve? It’s crucial for finding a mate, which is the most important thing to do in evolutionary terms. In people facial recognition is a key skill. Not just different people, but the same people in different situations. Example of Danny DeVito. We’re attuned to pick out properties of a face that distinguish it from other faces. Similarly, we recognize a tune even when played in different keys, or with errors.
P47, Funes’s Curse. So remembering everything is in conflict with the need for abstraction. Page 47:
The reason most of us are not hyperthymesics is because it would make us less successful at what we evolved to do. The mind is busy trying to choose actions by picking out the most useful stuff and leaving the rest behind.
There are many reasons why the mind might have evolved, but they all involve our ability to act effectively. We don’t need to store the details; the broad picture is usually all we need. If we had evolved in other environments we might have developed different logic, e.g. to be intuitive about statistics, or to do deductive reasoning.
Three, How We Think, p49
How author Steve and his dog Cassie have differing methods of getting dinner. The dog targets the location, the human the source of its availability. Effects and cause. Pavlov showed that dogs can be trained to respond to stimuli. Later, the idea that any arbitrary association would be learned was undermined. Rats learned meaningful associations, but not arbitrary ones.
P52, Human Reasoning is Causal. We can imagine the state of the world before and after some mechanism to change it. And we understand how causes work. Examples. But other kinds of reasoning are not natural to humans. Math, quantum mechanics, odds of gambling. We evolved to operate in the world. But even this causal thinking is quite limited. Consider examples. We can infer: if A then B, etc. Lobbying a senator. Others don’t seem natural. Examples of ‘affirmation of the consequent.’ We think in terms of causal logic, not propositional logic. Our ability to project our thoughts into the future is a kind of causal reasoning and enables us to engage in long-term projects, like building a kayak. Similarly psychological problems. And solving problems.
P58, Reasoning Forward and Backward. We also reason backward from effects to causes. Figuring out reasons that something happened. It’s more difficult. Yet we’re prone to a certain kind of error in predictive reasoning; examples. We neglect alternative causes. Still, there’s no evidence that animals do any of this. But crows were able to get food out of a tube…
P62, Storytelling. This is perhaps the most common way we pass causal information to one another. Stories teach us how the world works and how we should behave. They make causal sense of sequences of events. We find stories everywhere. Example of an animated film with a circle and two triangles. Storytelling requires us to build alternative worlds to think about. It’s clearest in science fiction, 65t. Telling stories allows us to consider alternative courses of action. Great discoveries come from counterfactual thought experiments. Galileo. Stories make up our identities, as individuals and as members of a group. About the past, future, present. The same incident can be told in different ways, e.g. the Boston Tea Party. Yet stories tend to simplify events.
[[ this is a good summary of the value of storytelling in a positive sense, without considering the problems of applying the tendency to situations where it causes trouble, i.e. the narrative bias; see McRaney ]]
Four: Why We Think What Isn’t So, p69
Example of Angelina Jolie movie Wanted. Our naïve understanding of physics doesn’t correspond to actual physics. You can’t curve bullets after they’ve been fired. Newton’s laws aren’t intuitive. In Hollywood you make money by appealing to people’s intuitions, 70.7. Many everyday things… example of two quarters. Other examples.
P73, Good Enough. We’re not ideal causal thinkers because the world is complex and there are many ways that things change. We don’t know much about a lot of things. We know just enough to get by. We don’t need to know about quantum effects. Only things within the range of daily experience. P74. [[ they don’t quite go far enough to say that our minds *evolved* to have intuitions at ordinary scales, but didn’t need them at other scales. ]] Similarly, reasoning about social situations is shallow. That’s why con men are successful.
P75, The Two Causal Reasoners Inside Us. Some is fast; other kinds are more thoughtful. Daniel Kahneman’s book highlights this, though the distinction is thousands of years old. Some answers come immediately. The distinction between intuition and deliberation goes back to Aristotle. And Plato. Passion and reason. Yet intuitions and passions are not the same. Sometimes our intuitive solutions are different than those we arrive at through deliberation. This distinction is there in Hindu traditions as well. The chakras. Intuition can be individual; deliberation can be done with other people. The idea of a social mind.
P80, Intuition, Deliberation, and the Illusion of Explanatory Depth. People think they understand causal systems better than they actually do. There’s a CRT, Cognitive Reflection Test. Included is the familiar question about a baseball and bat that together cost $1.10. Reflective people check their answers. The intuitive answers are wrong. About reflective people, 82-3. Among other things, they show less of an illusion of explanatory depth. They crave detail, they like to explain things. Intuitions overestimate how much we actually know. Real knowledge lies elsewhere…
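For reference, the bat-and-ball question runs: a bat and a ball together cost $1.10, and the bat costs $1.00 more than the ball; what does the ball cost? The intuitive answer, 10 cents, fails the stated constraints, as a quick check (mine, not the authors’) in integer cents shows:

```python
# Bat-and-ball problem from the Cognitive Reflection Test, in cents
# to avoid floating-point noise.
total, difference = 110, 100        # bat + ball = 110; bat costs 100 more than ball

intuitive_ball = 10                 # the common snap answer
# 10 + (10 + 100) = 120, not 110, so the intuitive answer can't be right
assert intuitive_ball + (intuitive_ball + difference) != total

ball = (total - difference) // 2    # solve: ball + (ball + 100) = 110
bat = ball + difference
assert ball + bat == total and bat - ball == difference
print(ball, bat)  # 5 105
```

The correct answer is 5 cents, which is exactly the sort of result deliberation catches and intuition misses.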
Five, Thinking with Our Bodies and the World, p85
Cognitive science is about human intelligence; AI is the study of machine intelligence. Early AI focused on individual computers. But it never worked out; computers could only do things step by step. Good old-fashioned AI (GOFAI) assumes the separation of hardware and software, reflecting the dualist approach of Descartes. It fails as a model of human intelligence. Consider “Casey at the Bat.” Hearing only part, what will happen next? There are too many options. This is called the frame problem, and it’s not solved. Similarly, when we walk in the forest we’re continually adjusting for uneven surfaces. Computers are now fast enough that some robots seem nimble, but not fast enough to make GOFAI work.
P90, Embodied Intelligence. Championed by Rodney Brooks beginning in the 1980s, inspired by biological creatures, which evolve from simpler creatures. Begin by making a machine good at one very simple task. Example is the Roomba vacuum.
P93, How Humans are Designed. Similarly there was an old-fashioned conception of how humans thought. That we navigate by doing calculations, and build models of the world. But we don’t actually do this. An experiment with how we read showed that most of the page can be nonsense and a reader wouldn’t notice. We think the world makes sense because the world usually does make sense. And that’s why we don’t have to remember every detail about the world. The visual environment is an external memory store. You understand your world without having to constantly look at it. Yet we don’t carry a model of the world around with us in our heads.
P96, The World Is Your Computer. Continuing with baseball: to catch a fly ball, do we do calculations in our head? No. All we do is adjust a gaze at the ball to a particular angle, p97. Another example: navigating through tight spaces. A wheat field. Optic flow. And on the highway, watching the lines along each side. And to enter doorways. Bees and other insects also use optic flow. Again: the world is our memory store.
P101, The Brain Is in the Mind. Where is the mind located? In the brain? Experiment with a photograph of a watering can. How arithmetic can be made easier through external props. Like 10 fingers. The brain is just part of a processing system that includes the body and other aspects of the world. Damasio called emotional reactions somatic markers. Thus our responses of disgust and fear. Such emotional reactions can substitute for thought. Some are rooted in our evolutionary history; others, like fear of flying, arise because flying violates our causal beliefs about physics, 104b. Disgust can also arise from psychological matters: homosexuality, say. Bottom line here is that the mind is not simply an information processor. So: the brain is in the mind, not vice versa. (Refs to Damasio and Haidt.)
Six, Thinking with Other People, p107
Summary para. A single thinker can only do so much. Group intelligence can emerge that can go further. Bees are an example. They work like corporations: different individuals play different roles. And they cooperate. People are smarter than bees, but can take on roles and cooperate just as bees can.
P108, The Community Hunt. Survival depends on food; humans have hunted everything, including the largest animals on the planet, to the point of extinction. How did they do this? Hunting was a communal exercise. Example of a bison hunt in North America at the end of the last Ice Age, which was carefully planned and executed. And then the dead animals had to be preserved. Similar division of labor enables people to build buildings. The cathedrals took many years as various tradesmen came and went. The mind evolved in the context of group collaboration.
P111, Braininess. The evolution of modern humans from other hominids was extremely rapid on an evolutionary timescale. Homo emerged 2 to 3 mya, modern humans about 200,000 ya. The advance was cognitive. How did this happen, despite the costs? First, individuals were able to deal with the environment; the ecological hypothesis. And/or the ability to pursue shared goals. The social brain hypothesis. The advantages of living in a group creates a snowball effect; new capabilities developed to support group behavior. It turns out that brain size is related to group size. Language follows. Communication enables hunting. And understanding the intentions of other minds.
P115, Shared Intentionality. Humans can share their attention with someone else. And common ground. And then intentionality. Goes back to Vygotsky and others. Examples of children learning from adults. Chimps and apes do not, in the same way. Similarly with gestures. People are built to collaborate. And this enables the ability to store and transmit knowledge from one generation to the next. Cumulative culture. Thus human capabilities are constantly increasing. The idea that social skills are at odds with intelligence belies the deep connection between them.
P118, Modern-Day Teamwork. We see all this in how children interact. They engage in group thinking, games, play. Scientific lab meetings have the same quality. Team approaches. Committees are the norm. At the cutting edge of science, huge teams are required. It took thousands to find the Higgs boson. People naturally divide up cognitive labor. Examples of groups of friends. Partners defer to each other’s expertise.
P121, Confusion at the Frontier. A consequence of this sharing is that there are no sharp boundaries between the ideas of one person and the next. E.g. the Beatles. Collaborators on a book. Ideas emerge from the group. Sometimes members get confused about who contributed which idea. Similarly with knowledge; we feel knowledgeable as long as the knowledge is available in the community. Example of glowing rocks. In medicine, any PCP can look up whatever he needs to know…
P125, Designing an Individual for a Community of Mind. Another aspect of a community of knowledge is how information is stored. Your knowledge of a “Sphinx” may just be a placeholder. …
P126, The Benefits and Perils of the Hive Mind. Example of Joan of Arc and how warriors were driven by faith. And how so many things we believe about the modern world we’ve simply been told about. We rely on experts for building everything around us. We draw on them for knowledge about how things work. We depend on the awareness that the knowledge is out there. [[ this strikes me as a key point. ]] This is the flip side of the curse of knowledge; when we know something, we’re shocked if we discover not everyone else does too. This is related to the hindsight bias. The division of cognitive labor is how our world works today. At the same time, we miss out on so many things only known by others. And we’re unaware of how little we understand. Many of society’s problems stem from the illusion that we understand more than we do.
Seven: Thinking with Technology, p131
The internet has become a major player in our lives. We interact less frequently with other people. This has led to despair and dread for some of us, 132m. Musk, Hawking, Gates; Vinge, Kurzweil. Bostrom. Imminent superintelligence? Machines that can design even smarter machines.
P133, Technology as an Extension of Thought. Technology has reinforced the evolution of our species. Irrigation channels made civilization possible. We use tools as if they are parts of our bodies. But technology may now be outstripping us. Machines are unpredictable, partly due to complexity, and partly due to external events, as when our devices update themselves automatically. The internet can try to trick us. We increasingly treat technology like people, storing understanding in the internet. There’s an effect called ‘confusion at the frontier.’ Searching the internet builds our sense of knowledge. It’s like working on a team, where it’s hard to give credit to individuals. People who search WebMD think they know more. But searching the web is not a substitute for years of expertise.
P139, Technology Does Not (Yet) Share Intentionality. An example of an advanced AI is our GPS mapping software, which continually updates as we travel. But such systems do only what is programmed; they don’t share human intentions. We don’t collaborate with such machines; we use them. Our machine systems can be very helpful; consider cars, with self-driving cars coming soon. Planes that virtually fly themselves. Humans can become complacent. Disasters can be bigger. Example of an aerodynamic stall on an Air France flight in 2009. Doing whatever your GPS tells you to do. A cruise ship in 1995. …
P146, Real Superintelligence. This is why evil superintelligence is not likely. But people become tools with crowdsourcing apps. At best these are ways to take advantage of expertise in the community. Yelp, Amazon, Reddit, Waze. There are ways besides money to incentivize experts to contribute. Wikipedia. Too many non-experts screw things up. Example from 1907 of opinions about the weight of a fat ox. The average was very close. This sort of thing overcomes individual biases. Other methods are coming. Distributed information. [[ I suspect the wisdom of the crowds succeeds only for common, everyday questions. It can just as well amplify everyone’s misapprehensions. I wouldn’t count on it for questions about the nature of the world. ]]
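The ox story is Francis Galton’s: by his account the ox weighed 1,198 pounds and the crowd’s collective estimate was within a pound or two of that. The statistical mechanism can be sketched with a small simulation (my own illustration, not Galton’s data): when individual errors are independent, averaging cancels most of them out.

```python
import random
import statistics

# Illustrative simulation of Galton's ox-weighing result, not his actual data:
# many independently noisy guesses, averaged, land near the true value.
random.seed(1)
true_weight = 1198                       # lbs, the ox's weight in Galton's account
# ~800 fairgoers, each off by ~100 lbs in either direction
guesses = [true_weight + random.gauss(0, 100) for _ in range(800)]

crowd_estimate = statistics.mean(guesses)
print(round(crowd_estimate))             # near 1198, though individuals err by ~100 lbs
```

Note the assumption doing all the work: *independent* errors. If everyone shares the same bias, as the bracketed comment above worries, averaging faithfully preserves it.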
P150, Predicting the Future. This community of knowledge is the super-intelligence that’s changing the world. Not huge supercomputers. Individually we’re becoming less and less knowledgeable about how things work.
Eight, Thinking About Science, p153
About Ned Ludd, and the protesters who became known as Luddites. They saw technology as threatening their livelihoods and values. The sentiment has endured; some people always look upon science and technology with suspicion. Of course anti-scientific thinking can be dangerous when it goes too far. And so climate change, and James Inhofe and his snowball. Genetic engineering. Beta-carotene and vitamin A. Vaccinations. Examples of opposition.
P156, The Public Understanding of Science. In 1985 Walter Bodmer proposed that opposition to science was driven by lack of understanding. The deficit model. Survey question, 157b, with percentages of those who got it right, in the US. Other countries aren’t much better. There is a correlation between getting the right answers and being favorable toward science. But it’s weak. And attempts to educate people have been ineffective. Example of trying to educate those opposed to vaccines. So perhaps the deficit model is wrong. Attitudes about science are determined by contextual and cultural factors…
P160, Committing to the Community. [[ here we’re getting to what I thought the previous book was about ]] Changing beliefs means discarding other beliefs, forsaking our communities, challenging our identities. [[ precisely ]] Example of a Florida podcaster who grew up in a fundamentalist community. In his 30s he rejected those fundamentalist beliefs, but returned to Christianity anyway. His advice 161b: Find a new faith community. His example shows the power of culture; our beliefs are not our own. [[ this is another way we ‘know’ less than we think. ]] We don’t know enough individually; we have to adopt the positions of those we trust. …Education doesn’t change the consensus of the community. So do we give up on the deficit model?
[[ this can be taken as religious fundamentalism as crippling the ability to understand things that are true. Unless isolated from the groupthink they live in. ]]
P163, Causal Models and Science Understanding. A problem is that science literacy depends on understanding facts, and facts are hard to remember. Whether those surveyed above were right or wrong, they usually know only isolated facts. Deeper understanding can come by reasoning with causal models. Sometimes a causal model can lead to a false belief, e.g. that how fast drugs work depends on how hard you’re working. [[ this is the core of my thesis that crude science fiction projects local models onto larger scales, inappropriately. ]] Opposition to GMOs comes from incorrect models about genes, 165. Contamination of genes! And, they think about GMOs like germs. Or that they’d become like pigs if they ate fruit with a pig gene. These ideas are intuitive, but wrong. Other technologies suffer similar misunderstandings: food irradiation, vaccinations.
P169, Filling the Deficit. Is there any way forward? Experiment concerning global warming; people on the street knew very little. Showing them 400-word primers increased their knowledge and acceptance. So perhaps the deficit model is not dead. (So the two root causes are social factors, and lack of education about underlying mechanisms.)
Nine, Thinking About Politics, p171
Despite the controversy over Obamacare, few people actually understand it. Yet many have strong opinions about it. In general, public opinion is more extreme than people’s understanding justifies. Ukraine on the map. 80% thought there should be mandatory labels on food containing DNA. Such examples tend to reduce people’s credibility. Strong opinions do not mean the opinions are well informed. Bertrand Russell comment. And Socrates. Again, we don’t appreciate how little we know. This is how the community of knowledge can become dangerous. Ignorance reinforces ignorance. Groupthink, 173b. This increases polarization. A sort of herd mentality. Especially in the age of the internet, when it’s easy to find someone to validate what you already believe. We’re often unaware of being inside such a house of mirrors. Each side thinks the other doesn’t care, doesn’t understand. Examples from history, 174-5. The need for ideological purity. In hindsight, none of those who preached a rigid orthodoxy turned out to be right. They suffered illusions of understanding.
P175, Shattering Illusions. Authors did an experiment asking about political issues: first whether people understand an issue, then asking them to explain the policy’s effects. Again, their confidence drops when they can’t explain much of anything. And it reduced the extremity of their positions. Having to explain takes one out of one’s belief system. Another experiment asked people to explain their reasons for supporting some policy. This had no effect on their sense of understanding, or on moderating their positions. Another experiment gave people money to support a cause or not. Summary, 181.3
P181, Values Versus Consequences. People have values that no amount of discussion will change. Haidt and his ‘moral dumbfounding’; example of the brother and sister who have sex. Why is it disgusting? It just is; they can’t give a reason. Strong moral reactions don’t require reasons. Most people think about abortion the same way, whichever side they’re on. And assisted suicide. As above, people don’t change when asked to provide a causal explanation. Some policies that could be analyzed in terms of their consequences are instead cast as moral issues. Health care. It’s easier to speak of sacred values than to analyze an issue. A standard political ploy. Example of the Iranian nuclear program: propaganda casts it as a matter of sacred values. How gay marriage shifted in the US. Israel and Palestine, with sacred values on both sides. Sacred values have their place, but they shouldn’t prevent causal reasoning about the consequences of social policy.
P187, On Governance and Leadership. So: our political discourse is remarkably shallow. TV shows could do better than just showing people arguing. We can’t all be experts on everything, so the experts in any area should be consulted when their issues are under discussion. Some see this as elitist. Certainly there are complications. But experts can be checked; anyone can evaluate the credibility of a given expert. Recall the concentration of wealth at the beginning of the 20th century. How democratic methods were invented… Initiatives and propositions can be hijacked by extremists. Examples, e.g. Proposition 13, and its consequences. Experts could have known, but ordinary people had the votes. The compromise is representative democracy. The problem with reducing extremism by asking people to explain themselves is that they resent it, and become less inclined to seek out new information. People don’t like being made to feel incompetent. Strong leaders listen to experts. A mature electorate appreciates a leader who recognizes that the world is complex and hard to understand.
Ten, A New Definition of Smart, p195.
Example of how little any of us actually knows about MLK Jr. Behind every great figure is a movement made up of many people. We tend to focus on individuals to simplify complex histories; they become symbols. Similarly, we reduce history to this or that ‘administration.’ We worship heroic figures in Hollywood films. The same bias exists in science and philosophy: entire fields of study are attributed to one person. But they all built on the work of others, and if they hadn’t made their discoveries, others would have; often others actually did, at about the same time. Example of Mendeleev. Now, CRISPR. We oversimplify through hero worship, and tell stories about our heroes.
P201, Intelligence. One of the things we first notice about a person is their intelligence. But success requires more than just intelligence. What is intelligence exactly? Theories break it down into parts. Fluid vs crystallized. Or, language, perception, ability to manipulate spatial images. Another theory has eight components. There’s no common theory.
P203, A Brief History of Intelligence Testing. Psychologists like things that can be measured, so they like test scores. But which test? Binet’s test came in 1904. Other tests that involve paying attention and thinking generate similar scores; Spearman called this general intelligence. Other things are not related to ‘g’, like cognitive performance in real life, e.g. racetrack betting. Still, ‘g’ is the gold standard (the most common measure being IQ).
P206, Inspiration from the Community of Knowledge. We can rethink this given awareness that knowledge lives in a community. Perhaps intelligence is the degree to which an individual contributes to a group’s success. It’s the team that gets things done. So this entails understanding the perspective of others, taking turns, listening. The group needs a balance of people with different skills, not all high g scores. An analogy is the parts of a car. We care about the car’s performance, not the properties of the parts. It’s not necessary to have the best part in every instance. Giving a team a variety of tests produces a ‘c’ score. Again, what is actually being measured? Things like motivation and satisfaction didn’t predict how well groups did; things like social sensitivity and the proportion of females in the group did. It’s not about intelligence; it’s about how well they work together.
P211, Collective Intelligence and Its Implications. So, venture capitalists back teams, not ideas; startups rarely capitalize on their initial ideas anyway. Intelligence is a property of a team, and we measure it by evaluating what groups produce. So do we measure the contributions of each player, as on a hockey team? It’s difficult to put into practice. Forming a team is like planting a seed… it needs an environment to flourish.
Eleven, Making People Smart, p215
Recalling Brazil in the 80s, when inflation forced poverty-stricken kids to sell things on the street. They had to be good at arithmetic. Were they better at it than kids in school? Yes, in the parts that mattered. Teachers have known for decades that learning requires activity, not just listening to lectures. Much classroom learning is divorced from what students care about. Students study but it doesn’t sink in; this is the illusion of comprehension. Familiarity is not the same as understanding; people recite things without understanding them. So learning, too, requires processing information more deeply than we usually do.
P218, Knowing What You Don’t Know. People also confuse what they know with what experts know, because they can consult them. So education isn’t only about making people intellectually independent; it’s only partly about acquiring new skills and new knowledge, because we’re always going to depend on others to get anything done. Examples of the car mechanic and the historian of Spain. Education is also partly about learning what things you don’t know.
P221, The Community of Knowledge and Science Instruction. Since 2006 a course has been taught at Columbia called Ignorance. One way of teaching this is to do the work of a discipline; but most science courses privilege retention of knowledge over how science is done. Science is done by community. Individuals have to trust previous results; there’s no time to verify everything. Science is about justification: via direct observation, by inference, or even by authority. That is, trust — faith that others are telling the truth. Because in science, claims can be checked (as opposed to religious faith). If a claim is false, eventually it will be found out. Science is about publishing results, and also about the community that evaluates those results. And science is becoming more and more interdisciplinary: the average number of authors on journal articles is now close to six (as of 2014). We need to evaluate who’s making a claim, and often we need to consult experts. Or consider where it was published, or who profits. The media almost always oversimplifies.
P228, Communities of Learning. In the classroom we should teach how to rely on others and figure things out interactively. Example of school assignments giving teams the task of working out how animals live. … The method is an argument for diversity in the classroom. A jigsaw model of solving problems. Other techniques have been tried: peer education; having individuals focus on their strengths; teaching critical thinking skills. These techniques would help everyone be better consumers of information, and more resistant to the efforts of, e.g., Russian ‘troll farms’. Example of an explosion in Louisiana that spread in the news, but which never actually happened.
Twelve, Making Smarter Decisions, p233
About Susan Woodward, a financial economist who noticed that lenders would offer worse terms to less-informed customers. She realized almost no one understands financial decisions. Linear functions are easy to think about, but finances often behave in a nonlinear way, as with compound interest. Example: credit card debt. In 2003 credit card companies were obligated to set minimum payments so that the debt would be paid off in a reasonable amount of time (since so many people pay only the minimum). Mortgage loans…
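The nonlinearity point is easy to see with a quick sketch (my illustration, not from the book; the balance, rate, and payment figures are hypothetical):

```python
# Why compound interest defeats linear intuition: at 20% APR,
# doubling the monthly payment more than halves the payoff time.

def months_to_pay_off(balance, annual_rate, payment):
    """Months to clear a balance at a fixed monthly payment,
    or None if the payment never even covers the interest."""
    monthly_rate = annual_rate / 12
    months = 0
    while balance > 0:
        interest = balance * monthly_rate
        if payment <= interest:
            return None  # debt grows forever
        balance = balance + interest - payment
        months += 1
    return months

print(months_to_pay_off(2000, 0.20, 50))   # 67 months -- about 5.5 years
print(months_to_pay_off(2000, 0.20, 100))  # 25 months -- far less than half
```

A linear mental model says paying twice as much takes half as long; compounding makes the true answer much better than that, and a payment at or below the monthly interest never pays off at all.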
P237, Explanation Fiends and Foes. Examples of products with new features. A little explanation helps, but most people don’t want too much information; they’re ‘explanation foes’. There’s a sweet spot for most people. Some people want to understand everything before settling on the ideal product: call them ‘explanation fiends’. Which is better? There’s no right answer; it depends. The market focuses on the foes. Examples: skin care. The world is complex, and we cannot master all the details.
P240, The Solution Is Not More Information. It doesn’t work. Only a quarter of US households could come up with $2000 in 30 days. The median household has savings to last only 3 years in retirement. Yet financial education packages have gotten nowhere, perhaps because they put the weight of the decision entirely on the individual. We need to think communally. Examples: the annuity paradox. Annuities are a good idea, but most consumers don’t understand them. Examples.
P244, The Hive Economy. The economy works because of the hive mind. It depends on community beliefs. What people believe shapes the economy. Example of rai stones. [[ this echoes Harari ]] Details. Recall tulips in Holland. The housing bubble leading to the 2008 crash. Households also rely on cognitive division of labor. …
P247, Nudging Better Decisions. How to help people make wiser choices? Thaler and Sunstein: libertarian paternalism. We make bad decisions all the time; the idea is ‘nudges.’ The system sets the defaults to wise choices, which can be overridden: opt out rather than opt in. It’s easier to change the environment than the person, to overcome the quirks of human cognition. Lessons:
Lesson 1: Reduce complexity; “Explain like I’m 5”
Lesson 2: Simple Decision Rules: give people simple rules that work most of the time.
Lesson 3: Just-in-Time Education. Example of what to do when getting laid off, or health decisions when having a newborn.
Lesson 4: Check Your Understanding. Be aware of our tendency to be explanation foes. Be aware of what you don’t know.
Conclusion: Appraising Ignorance and Illusion, p255
People have three reactions to new ideas: dismiss them; reject them; or declare them obvious. So are the ideas in this book self-evident? They have been around a long time, and none defy common sense; yet you think differently when you’re not conscious of them. We’ve seen many examples. Three central themes: ignorance, the illusion of understanding, and the community of knowledge. 256b. Ignorance is inevitable, and illusions have their place.
P257, Is Ignorance to Be Avoided at All Costs? It’s inevitable; the problem is not recognizing it. David Dunning is alarmed that people don’t realize how little they know. People who are not very good at something don’t realize it, and think they’re better than they are: the Dunning-Kruger effect. And we’re all unskilled in most domains of our lives. You won’t miss what you don’t know about. But ignorance can have costs.
P259, A Saner Community. Intelligence resides in the community; wise leaders take this into account. But leaders can go off the deep end: Jim Jones; David Koresh; Heaven’s Gate. Along with faith (in a leader) must come skepticism: don’t just follow orders. Society has come this far because most people cooperate most of the time.
P261, Appraising Illusion. Should we avoid illusion, take the red pill? Yet illusion is a pleasure; we live in illusions intentionally, in fictional worlds. Example of Steve’s two daughters: one well calibrated, the other less so. We live in illusion by living within a community of knowledge. Illusion feeds ambition, as when JFK promised the moon. And failure, as with Robert Scott at the South Pole; others succeeded. Perhaps illusion is necessary for the development of human civilization. [[ again, Harari’s point ]] We persevere; we believe we can solve relationship problems. But illusion is not bliss; we can fail due to illusions too. So the two daughters are each fine in their own way.