Subtitled: The Power of Mathematical Thinking.
Second post (first post here) about this fascinating book, an examination of several basic principles (linearity, inference, expectation, regression, and existence) and how they apply to everyday, real-world situations, situations that are often misunderstood by ordinary “common sense”. The author, a one-time child prodigy, is a professor of mathematics at the University of Wisconsin-Madison and has written for Slate, Wired, and other publications.
This second of five parts is about Inference. I’ll try to condense my notes more than I did in my last post.
Examples include Hebrew scholars who examined the Torah for “equidistant letter sequences” to see whether the names of classical rabbis, and their birth and death dates, could be found closer to one another than chance would allow. They found significant results! (This led to the ’90s bestseller The Bible Code.) Eventually the flaws in their methodology were uncovered: there are many ways to spell out the names of ancient rabbis, and the particular choices they made were the ones that showed results. Other choices did not.
And a classic tale about letters from a Baltimore stockbroker, who week after week sends you predictions that come true. Invest with him? Of course; he must be a genius! But what you don’t know is that he started with 10,240 letters and kept writing each week only to those whose predictions had come true. By the tenth week, he’s down to 10. You never hear about all the failed predictions.
Improbable things happen a lot — in large enough samples.
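A minimal sketch of the scam’s arithmetic (my illustration, not the book’s; the counts assume the 10,240-letter version of the parable):

```python
# The stockbroker scam in miniature: each week, tell "up" to half the
# remaining recipients and "down" to the other half. Whatever the
# market actually does, half of them have now seen a correct
# prediction; the rest never hear from him again.
recipients = 10240
for week in range(1, 11):
    recipients //= 2  # only those who got the right call stay on the list
    print(f"week {week:2d}: {recipients:5d} people have seen {week} correct calls")
```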
Similarly, scientific studies that probe huge data sets often find significant results simply because they have so much data to examine (there is even a spoof paper reporting “activity” in an fMRI scan of a dead fish). “The more chances you give yourself to be surprised, the higher your threshold for surprise had better be.” Relying too much on ‘significance’ can have unintended consequences: a warning about a certain contraceptive pill in Britain caused tens of thousands of women to stop taking it, resulting in more births (and abortions!) the following year, even though the magnitude of the risk in question would have affected only a single woman, at most.
If you run enough experiments about anything, even e.g. haruspicy, making predictions from sheep entrails, you will find success some of the time, given the standard significance threshold of a p-value below 1 in 20. If you publish only those success stories, without ever validating them through repeated studies, your theory seems validated.
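Here’s a toy simulation (mine, not the book’s) of why the 1-in-20 threshold guarantees occasional ‘successes’ from pure noise: run a thousand null experiments, each just coin flips, and count how many come out ‘significant’:

```python
import random
from math import comb

random.seed(42)

N_FLIPS = 100         # coin flips per "experiment"
ALPHA = 0.05          # the conventional 1-in-20 threshold
N_EXPERIMENTS = 1000  # how many sheep livers we consult

def two_sided_p(heads, n=N_FLIPS):
    """Exact two-sided binomial p-value against a fair coin."""
    observed = abs(heads - n / 2)
    tail = sum(comb(n, k) for k in range(n + 1) if abs(k - n / 2) >= observed)
    return tail / 2 ** n

significant = sum(
    two_sided_p(sum(random.random() < 0.5 for _ in range(N_FLIPS))) < ALPHA
    for _ in range(N_EXPERIMENTS)
)
# Up to ~5% of pure-noise experiments clear the bar (a bit fewer here
# because the binomial distribution is discrete).
print(f"{significant} of {N_EXPERIMENTS} null experiments came out 'significant'")
```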
This is a real issue in biomedical research, compounded by the pressure on academics to publish or perish. In the past couple of years, meta-studies have revealed that only a small portion of such published studies could be replicated. Does this call the scientific method into question? No; it means many of those studies were reporting results that were actually noise in the data. Solutions? Use confidence intervals; understand that evidence is not about determining ‘truth’ but about deciding what to do next, i.e., run further studies.
And another solution, adopted by one publication in 2013, is to accept ‘replication reports’ for publication before the studies are even run, and to publish the results either way.
Final chapter in this section: “Are You There, God? It’s Me, Bayesian Inference”.
Big Data can’t solve everything; no matter how much data you compile, there’s a hard limit on prediction in chaotic situations, e.g. weather forecasts (two weeks out seems to be the limit) and Netflix recommendations.
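The standard toy demonstration of that hard limit (my example; the book’s case is weather, not this equation) is the logistic map, where two starting states differing by one part in a billion disagree completely within a few dozen steps:

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# Two nearly identical starting states soon diverge entirely, which is
# the same reason weather forecasts degrade past about two weeks.
r = 4.0
x, y = 0.2, 0.2 + 1e-9

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```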
The author uses the idea of the FBI mining Facebook posts to identify potential terrorists to introduce Bayesian inference: a way of deciding how much to believe something after you see the evidence, based on how much you believed it to begin with. This is in fact how people tend to think all the time, because we all hold notions about what is true or not without having personally examined the evidence.
And we change our beliefs based on a combination of prior beliefs and new evidence, leading to posterior beliefs.
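A minimal sketch of that update rule, with invented numbers for the FBI example: even a screen that is right 99% of the time, applied to something as rare as terrorism, leaves the posterior probability tiny, because the prior is tiny:

```python
# Bayes' rule: P(threat | flagged) = P(flagged | threat) * P(threat) / P(flagged).
# All numbers below are illustrative assumptions, not the book's.
prior = 1 / 100_000          # prior belief: 1 person in 100,000 is a threat
p_flag_given_threat = 0.99   # the screen flags real threats 99% of the time
p_flag_given_innocent = 0.01 # ...and wrongly flags innocents 1% of the time

p_flag = (p_flag_given_threat * prior
          + p_flag_given_innocent * (1 - prior))
posterior = p_flag_given_threat * prior / p_flag

print(f"P(threat | flagged) = {posterior:.4%}")  # about 0.1%: still tiny
```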
The author applies this to the creation of the universe, charting the likelihood of humanity existing against the existence of God. The flaw in this framing is that it ignores other options: maybe there are many gods, or maybe our existence is a simulation inside some ultracomputer that exists within a larger reality. The question is, given each of those assumptions, how likely would humanity then be to exist?
The likeliest, in this analysis, is that we are sims. Second, multiple gods. He cheekily advises how to teach creationism in schools, concluding, “There are even some people who believe that one single God created the universe, but that hypothesis should be considered less strongly supported than the alternatives.”
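To make the mechanics concrete, here is a toy version of that comparison. Every prior and likelihood below is invented purely for illustration (the book does not supply these numbers); the point is only how posterior weight gets computed:

```python
# Toy Bayesian comparison of origin hypotheses. Posterior weight is
# proportional to prior * P(humanity exists | hypothesis); the numbers
# are made up, chosen only to reproduce the chapter's cheeky ranking.
hypotheses = {
    "one god":    (0.1, 0.1),    # (prior, likelihood of us existing)
    "many gods":  (0.1, 0.3),
    "simulation": (0.1, 0.9),
    "no design":  (0.7, 0.001),
}

weights = {h: prior * like for h, (prior, like) in hypotheses.items()}
total = sum(weights.values())
for h, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{h:10s} posterior = {w / total:.3f}")
```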
But his real conclusion is that with this kind of question we have reached the limits of quantitative thinking.
This is not quite halfway through the book.