publication bias



Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie

Albert Einstein, anesthesia awareness, autism spectrum disorder, Bayesian statistics, Black Lives Matter, Carmen Reinhart, Cass Sunstein, Charles Babbage, citation needed, Climatic Research Unit, cognitive dissonance, complexity theory, coronavirus, correlation does not imply causation, COVID-19, crowdsourcing, data science, deindustrialization, Donald Trump, double helix, en.wikipedia.org, epigenetics, Estimating the Reproducibility of Psychological Science, fake news, Goodhart's law, Growth in a Time of Debt, Helicobacter pylori, Higgs boson, hype cycle, Kenneth Rogoff, l'esprit de l'escalier, Large Hadron Collider, meta-analysis, microbiome, Milgram experiment, mouse model, New Journalism, ocean acidification, p-value, phenotype, placebo effect, profit motive, publication bias, publish or perish, quantum entanglement, race to the bottom, randomized controlled trial, recommendation engine, rent-seeking, replication crisis, Richard Thaler, risk tolerance, Ronald Reagan, Scientific racism, selection bias, Silicon Valley, Silicon Valley startup, social distancing, Stanford prison experiment, statistical model, stem cell, Steven Pinker, TED Talk, Thomas Bayes, twin studies, Tyler Cowen, University of East Anglia, Wayback Machine

If the medical literature gives doctors an inflated view of how much benefit a drug provides (as indeed appears to have been the case for antidepressants, which do seem to work, but not with as strong an effect as initially believed), their clinical reasoning will be knocked off track.32 If you hadn’t heard of publication bias before now, it would be perfectly understandable: it is one of science’s more embarrassing secrets. But a 2014 survey of reviews in top medical journals found that 31 per cent of meta-analyses didn’t even check for it. (Once it was properly checked for, 19 per cent of those meta-analyses indicated that publication bias was indeed present.)33 A later review of cancer-research reviews was even worse: 72 per cent didn’t include publication bias checks.34 It’s often hard to know exactly what to do when you find hints of publication bias in your meta-analytic dataset – should you revise the estimate of the average effect downwards?

35 – but it’s doubtful that the proper answer is to ignore the issue entirely. The trouble with the archaeological approach to publication bias is that it relies on conjecture to fill in the gaps in the funnel plot – those places where we would expect the small studies with small effects to appear. Funnel plots can have weird shapes for reasons other than publication bias, especially if there are a lot of differences between the assorted studies that go into the meta-analysis.36 There are many cases where publication bias is more subtle, and thus harder to discern, than in those described above. Are there better ways to check for this kind of bias?
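As a rough illustration of what a more formal check can look like, here is a minimal sketch of Egger's regression test, one commonly used test for funnel-plot asymmetry. It is not the specific method discussed in the book, and the study effects and standard errors below are invented; note that the smaller (high-standard-error) studies report the larger effects, which is the asymmetric pattern publication bias tends to leave behind.

# A minimal sketch of Egger's regression test for funnel-plot asymmetry.
# The log odds ratios and standard errors below are invented for illustration.
import numpy as np
from scipy import stats

log_or = np.array([0.25, 0.28, 0.35, 0.42, 0.55, 0.60, 0.72, 0.80])
se = np.array([0.10, 0.12, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])

z = log_or / se          # standardized effect for each study
precision = 1.0 / se     # 1 / standard error

# Regress standardized effect on precision: the slope estimates the underlying
# effect, and an intercept well away from zero signals small-study asymmetry.
slope, intercept, r, p_slope, stderr = stats.linregress(precision, z)
print(f"estimated effect (slope): {slope:.2f}")
print(f"Egger intercept: {intercept:.2f}  (near 0 = symmetric funnel)")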

There’s a whole set of techniques to adjust the effect size in your meta-analysis when you discover that there’s publication bias. Since these are guesswork (about how much you should reduce the effect size) stacked on guesswork (about how much publication bias there is), I always feel a bit nervous about using them. For details see e.g. Evan C. Carter et al., ‘Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods’, Advances in Methods and Practices in Psychological Science 2, no. 2 (June 2019): pp. 115–44; https://doi.org/10.1177/2515245919847196 36. Daniel Cressey, ‘Tool for Detecting Publication Bias Goes under Spotlight’, Nature, 31 March 2017; https://doi.org/10.1038/nature.2017.21728; Richard Morey, ‘Asymmetric Funnel Plots without Publication Bias’, BayesFactor, 9 Jan. 2016; https://bayesfactor.blogspot.com/2016/01/asymmetric-funnel-plots-without.html 37.


pages: 428 words: 103,544

The Data Detective: Ten Easy Rules to Make Sense of Statistics by Tim Harford

Abraham Wald, access to a mobile phone, Ada Lovelace, affirmative action, algorithmic bias, Automated Insights, banking crisis, basic income, behavioural economics, Black Lives Matter, Black Swan, Bretton Woods, British Empire, business cycle, Cambridge Analytica, Capital in the Twenty-First Century by Thomas Piketty, Cass Sunstein, Charles Babbage, clean water, collapse of Lehman Brothers, contact tracing, coronavirus, correlation does not imply causation, COVID-19, cuban missile crisis, Daniel Kahneman / Amos Tversky, data science, David Attenborough, Diane Coyle, disinformation, Donald Trump, Estimating the Reproducibility of Psychological Science, experimental subject, fake news, financial innovation, Florence Nightingale: pie chart, Gini coefficient, Great Leap Forward, Hans Rosling, high-speed rail, income inequality, Isaac Newton, Jeremy Corbyn, job automation, Kickstarter, life extension, meta-analysis, microcredit, Milgram experiment, moral panic, Netflix Prize, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, opioid epidemic / opioid crisis, Paul Samuelson, Phillips curve, publication bias, publish or perish, random walk, randomized controlled trial, recommendation engine, replication crisis, Richard Feynman, Richard Thaler, rolodex, Ronald Reagan, selection bias, sentiment analysis, Silicon Valley, sorting algorithm, sparse data, statistical model, stem cell, Stephen Hawking, Steve Bannon, Steven Pinker, survivorship bias, systematic bias, TED Talk, universal basic income, W. E. B. Du Bois, When a measure becomes a target

., “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” New England Journal of Medicine, January 17, 2008, https://www.nejm.org/doi/full/10.1056/NEJMsa065779. 32. Ben Goldacre, “Transparency, Beyond Publication Bias,” talk given at the International Journal of Epidemiology Conference, 2016; available at https://www.badscience.net/2016/10/transparency-beyond-publication-bias-a-video-of-my-super-speedy-talk-at-ije/. 33. Ben Goldacre et al., “COMPare: A Prospective Cohort Study Correcting and Monitoring 58 Misreported Trials in Real Time,” Trials 20, no. 118 (2019), https://doi.org/10.1186/s13063-019-3173-2. 34. Ben Goldacre, “Transparency, Beyond Publication Bias.” 35. Amy Sippett, “Does the Backfire Effect Exist?,” Full Fact (blog), March 20, 2019, https://fullfact.org/blog/2019/mar/does-backfire-effect-exist/; Brendan Nyhan, “Read this!

And the majority who did not might unwittingly commit subtler versions of the same statistical sins. The standard statistical methods are designed to exclude most chance results.19 But a combination of publication bias and loose research practices means we can expect that mixed in with the real discoveries will be a large number of statistical accidents. * * * — Darrell Huff’s How to Lie with Statistics describes how publication bias can be used as a weapon by an amoral corporation more interested in money than truth. With his trademark cynicism, he mentions that a toothpaste maker can truthfully advertise that the toothpaste is wonderfully effective simply by running experiments, putting all unwelcome results “well out of sight somewhere” and waiting until a positive result shows up.20 That is certainly a risk—not only in advertising but also in the clinical trials that underpin potentially lucrative pharmaceutical treatments.
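For a sense of how easy Huff's trick is, here is a back-of-the-envelope calculation. It assumes the toothpaste does nothing, so each honest trial has only the conventional 5 per cent chance of a spuriously "significant" win; the 0.05 cutoff is an assumption of this sketch, not a figure from Huff's book.

# Odds of getting at least one publishable "win" from repeated trials of a
# product that does nothing, keeping only the lucky runs.
for k in (1, 5, 10, 20, 30):
    p_win = 1 - 0.95 ** k          # chance that at least one of k null trials "succeeds"
    print(f"{k:2d} trials -> {p_win:.0%} chance of a publishable result")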

But might accidental publication bias be an even bigger risk than weaponized publication bias? In 2005, John Ioannidis caused a minor sensation with an article titled “Why Most Published Research Findings Are False.” Ioannidis is a “meta-researcher”—someone who researches the nature of research itself.* He reckoned that the cumulative effect of various apparently minor biases might mean that false results could easily outnumber the genuine ones.


pages: 402 words: 129,876

Bad Pharma: How Medicine Is Broken, and How We Can Fix It by Ben Goldacre

behavioural economics, classic study, data acquisition, framing effect, if you build it, they will come, illegal immigration, income per capita, meta-analysis, placebo effect, publication bias, randomized controlled trial, Ronald Reagan, selective serotonin reuptake inhibitor (SSRI), Simon Singh, sugar pill, systematic bias, WikiLeaks

It’s not ideal to lump every study of this type together in one giant spreadsheet, to produce a summary figure on publication bias, because they are all very different, in different fields, with different methods. This is a concern in many meta-analyses (though it shouldn’t be overstated: if there are lots of trials comparing one treatment against placebo, say, and they’re all using the same outcome measurement, then you might be fine just lumping them all in together). But you can reasonably put some of these studies together in groups. The most current systematic review on publication bias, from 2010, from which the examples above are taken, draws together the evidence from various fields.29 Twelve comparable studies follow up conference presentations, and taken together they find that a study with a significant finding is 1.62 times more likely to be published.
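To make the mechanics of such a summary figure concrete, here is a generic sketch of fixed-effect, inverse-variance pooling of ratio estimates on the log scale. The six ratios and standard errors are invented for illustration; they are not the twelve studies summarized in the 2010 review, and real meta-analyses would also consider random-effects models.

# Generic fixed-effect, inverse-variance pooling of ratio estimates.
import numpy as np

ratios = np.array([1.40, 1.90, 1.50, 1.80, 1.55, 1.70])     # per-study ratios (invented)
se_log = np.array([0.20, 0.25, 0.15, 0.30, 0.20, 0.25])     # SEs of log(ratio) (invented)

log_ratios = np.log(ratios)
weights = 1.0 / se_log ** 2                     # more precise studies count for more
pooled_log = np.sum(weights * log_ratios) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

lo, hi = np.exp(pooled_log - 1.96 * pooled_se), np.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled ratio: {np.exp(pooled_log):.2f} (95% CI {lo:.2f} to {hi:.2f})")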

In a moment we will see more clear cases of drug companies withholding data – in stories where we can identify individuals – sometimes with the assistance of regulators. When we get to these, I hope your rage might swell. But first, it’s worth taking a moment to recognise that publication bias occurs outside commercial drug development, and in completely unrelated fields of academia, where people are motivated only by reputation, and their own personal interests. In many respects, after all, publication bias is a very human process. If you’ve done a study and it didn’t have an exciting, positive result, then you might wrongly conclude that your experiment isn’t very interesting to other researchers.

Health Technol Assess. 2010 Feb;14(8):iii, ix–xi, 1–193. 23 Dickersin K. How important is publication bias? A synthesis of available data. Aids Educ Prev 1997;9(1 SA):15–21. 24 Ioannidis J. Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 1998;279:281–6. 25 Bardy AH. Bias in reporting clinical trials. Brit J Clin Pharmaco 1998;46:147–50. 26 Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 2008;3(8):e3081. 27 Decullier E, Lhéritier V, Chapuis F.


pages: 322 words: 107,576

Bad Science by Ben Goldacre

Asperger Syndrome, classic study, confounding variable, correlation does not imply causation, disinformation, Edward Jenner, experimental subject, food desert, hygiene hypothesis, Ignaz Semmelweis: hand washing, John Snow's cholera map, Louis Pasteur, meta-analysis, Nelson Mandela, nocebo, offshore financial centre, p-value, placebo effect, public intellectual, publication bias, Richard Feynman, risk tolerance, Ronald Reagan, selection bias, selective serotonin reuptake inhibitor (SSRI), sugar pill, systematic bias, the scientific method, urban planning

The smaller, more rubbish negative trials seem to be missing, because they were ignored—nobody had anything to lose by letting these tiny, unimpressive trials sit in their bottom drawer—and so only the positive ones were published. Not only has publication bias been demonstrated in many fields of medicine, but a paper has even found evidence of publication bias in studies of publication bias. Here is the funnel plot for that paper. This is what passes for humour in the world of evidence-based medicine. The most heinous recent case of publication bias has been in the area of SSRI antidepressant drugs, as has been shown in various papers. A group of academics published a paper in the New England Journal of Medicine at the beginning of 2008 which listed all the trials on SSRIs which had ever been formally registered with the FDA, and examined the same trials in the academic literature.

They’re not where you get your news from. How can we explain, then, the apparent fact that industry funded trials are so often so glowing? How can all the drugs simultaneously be better than all of the others? The crucial kludge may happen after the trial is finished. Publication bias and suppressing negative results ‘Publication bias’ is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones. It’s easy enough to understand, if you put yourself in the shoes of the researcher. Firstly, when you get a negative result, it feels as if it’s all been a bit of a waste of time.

If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent: that’s years of people not knowing about your study. Publication bias is common, and in some fields it is more rife than in others. In 1995, only 1 per cent of all articles published in alternative medicine journals gave a negative result. The most recent figure is 5 per cent negative. This is very, very low, although to be fair, it could be worse. A review in 1998 looked at the entire canon of Chinese medical research, and found that not one single negative trial had ever been published. Not one. You can see why I use CAM as a simple teaching tool for evidence-based medicine. Generally the influence of publication bias is more subtle, and you can get a hint that publication bias exists in a field by doing something very clever called a funnel plot.


pages: 467 words: 116,094

I Think You'll Find It's a Bit More Complicated Than That by Ben Goldacre

Aaron Swartz, call centre, conceptual framework, confounding variable, correlation does not imply causation, crowdsourcing, death of newspapers, Desert Island Discs, Dr. Strangelove, drug harm reduction, en.wikipedia.org, experimental subject, Firefox, Flynn Effect, Helicobacter pylori, jimmy wales, John Snow's cholera map, Loebner Prize, meta-analysis, moral panic, nocebo, placebo effect, publication bias, selection bias, selective serotonin reuptake inhibitor (SSRI), seminal paper, Simon Singh, social distancing, statistical model, stem cell, Stephen Fry, sugar pill, the scientific method, Turing test, two and twenty, WikiLeaks

: First, Magnetise Your Wine What Is Science: http://www.badscience.net/2005/12/what-is-science-first-magnetise-your-wine/ BAD ACADEMIA What If Academics Were as Dumb as Quacks with Statistics What if Academics: http://www.badscience.net/2011/10/what-if-academics-were-as-dumb-as-quacks-with-statistics/ publish a mighty torpedo: http://www.nature.com/neuro/journal/v14/n9/full/nn.2886.html Brain-Imaging Studies Report More Positive Findings Than Their Numbers Can Support. This Is Fishy Brain-Imaging Studies: http://www.badscience.net/2011/08/brain-imaging-studies-report-more-positive-findings-than-their-numbers-can-support-this-is-fishy/ publication bias: http://www.badscience.net/category/publication-bias/ took a different approach: http://archpsyc.ama-assn.org/cgi/content/abstract/archgenpsychiatry.2011.28 ‘None of Your Damn Business’ None of Your: http://www.badscience.net/2011/01/none-of-your-damn-business/ 2004 published a study: http://ats.ctsnetjournals.org/cgi/content/abstract/annts;78/4/1433 it was retracted: http://retractionwatch.wordpress.com/2011/01/04/thoracic-surgery-journal-retracts-hypertension-study-marred-by-troubled-data/ Dr L.

But how reliable are the studies? One way of critiquing a piece of research is to read the academic paper itself, in detail, looking for flaws. But that might not be enough, if some sources of bias might exist outside the paper, in the wider system of science. By now you’ll be familiar with publication bias: the phenomenon whereby studies with boring negative results are less likely to get written up, and less likely to get published. Normally you can estimate this using a tool such as, say, a funnel plot. The principle behind these is simple: big, expensive landmark studies are harder to brush under the carpet, but small studies can disappear more easily.

The answer was stark: even being generous, there were twice as many positive findings as you could realistically have expected from the amount of data reported on. What could explain this? Inadequate blinding is an issue: a fair amount of judgement goes into measuring the size of a brain area on a scan, so wishful nudges can creep in. And boring old publication bias is another: maybe whole negative papers aren’t getting published. But a final, more interesting explanation is also possible. In these kinds of studies, it’s possible that many brain areas are measured to see if they’re bigger or smaller, and maybe then only the positive findings get reported within each study.


Statistics in a Nutshell by Sarah Boslaugh

Antoine Gombaud: Chevalier de Méré, Bayesian statistics, business climate, computer age, confounding variable, correlation coefficient, experimental subject, Florence Nightingale: pie chart, income per capita, iterative process, job satisfaction, labor-force participation, linear programming, longitudinal study, meta-analysis, p-value, pattern recognition, placebo effect, probability theory / Blaise Pascal / Pierre de Fermat, publication bias, purchasing power parity, randomized controlled trial, selection bias, six sigma, sparse data, statistical model, systematic bias, The Design of Experiments, the scientific method, Thomas Bayes, Two Sigma, Vilfredo Pareto

A funnel plot with the general shape shown in Figure 20-1 suggests that publication bias is not a large concern in this particular area of research. A funnel plot that looks more like Figure 20-2 does suggest publication bias; about half of the funnel is missing because few studies have been published with a neutral or negative result. The plot alone does not prove publication bias (several other possibilities are discussed in the Cochrane Collaboration document listed in Appendix C), but it does suggest it as a possibility. Figure 20-1. A funnel plot suggesting little to no publication bias Figure 20-2. A funnel plot suggesting publication bias Issues in Research Design Generally, the design of an investigation of a question of interest needs to follow the guidelines presented in Chapter 18 if meaningful inferences are eventually to be made.

Tests should be selected based on known or expected characteristics of the data. Ideally, every result should be reported, even if the study did not find statistical significance. Failure to do so leads to publication bias, in which only significant results are published, creating a misleading picture of our state of knowledge. Don’t be afraid to report deviations, nonsignificant test results, and failure to reject null hypotheses—not every experiment can or should result in a major scientific result! Publication Bias and the Funnel Plot It’s easy to fall into the naïve belief that the published research literature presents a fair picture of our collective knowledge in any research field.

For instance, research published in English might be more readily available than equally good or better research published in other languages and thus more likely to be cited repeatedly by other articles. (The number of citations is sometimes used as a measure of an article’s importance or influence.) One way to evaluate publication bias on a topic is to create a funnel plot, a graph in which each data point represents a published study, with the log odds ratio of the study on the horizontal axis and the standard error of the study on the vertical axis. If there is no publication bias, we expect to see a pattern similar to an inverted funnel, as in Figure 20-1. Note that in studies with a larger standard error (less precise studies), there is a greater variability of results (a wider range of values for the log odds ratio), whereas for more precise studies, the log odds ratio clusters more closely around a single value.
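A minimal sketch of that construction, using simulated studies rather than the data behind Figures 20-1 and 20-2, might look like the following; the true effect and the range of study precisions are arbitrary choices.

# Simulated funnel plot: each point is one study, log odds ratio on the
# horizontal axis, standard error on the (inverted) vertical axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_log_or = 0.4
se = rng.uniform(0.05, 0.50, size=200)        # a mix of precise and imprecise studies
log_or = rng.normal(true_log_or, se)          # each estimate scatters according to its SE

plt.scatter(log_or, se, s=12)
plt.axvline(true_log_or, linestyle="--")
plt.gca().invert_yaxis()                      # most precise studies sit at the top
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.title("Simulated funnel plot (no publication bias)")
plt.show()

Deleting the points with large standard errors and effects near zero before plotting would hollow out one side of the funnel, reproducing the asymmetric pattern described for Figure 20-2.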


Calling Bullshit: The Art of Scepticism in a Data-Driven World by Jevin D. West, Carl T. Bergstrom

airport security, algorithmic bias, AlphaGo, Amazon Mechanical Turk, Andrew Wiles, Anthropocene, autism spectrum disorder, bitcoin, Charles Babbage, cloud computing, computer vision, content marketing, correlation coefficient, correlation does not imply causation, crowdsourcing, cryptocurrency, data science, deep learning, deepfake, delayed gratification, disinformation, Dmitri Mendeleev, Donald Trump, Elon Musk, epigenetics, Estimating the Reproducibility of Psychological Science, experimental economics, fake news, Ford Model T, Goodhart's law, Helicobacter pylori, Higgs boson, invention of the printing press, John Markoff, Large Hadron Collider, longitudinal study, Lyft, machine translation, meta-analysis, new economy, nowcasting, opioid epidemic / opioid crisis, p-value, Pluto: dwarf planet, publication bias, RAND corporation, randomized controlled trial, replication crisis, ride hailing / ride sharing, Ronald Reagan, selection bias, self-driving car, Silicon Valley, Silicon Valley startup, social graph, Socratic dialogue, Stanford marshmallow experiment, statistical model, stem cell, superintelligent machines, systematic bias, tech bro, TED Talk, the long tail, the scientific method, theory of mind, Tim Cook: Apple, twin studies, Uber and Lyft, Uber for X, uber lyft, When a measure becomes a target

In the case of the Higgs boson, there were already good reasons to expect that the Higgs boson would exist, and its existence was subsequently confirmed. But this is not always the case.*6 The important thing to remember is that a very unlikely hypothesis remains unlikely even after someone obtains experimental results with a very low p-value. P-HACKING AND PUBLICATION BIAS Purely as a matter of convention, we often use a p-value of 0.05 as a cutoff for saying that a result is statistically significant.*7 In other words, a result is statistically significant when p < 0.05, i.e., when it would have less than 5 percent probability of arising due to chance alone.
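A small simulation makes the consequence of that convention concrete; the group size and number of experiments below are arbitrary choices. When the null hypothesis is true, roughly 1 in 20 experiments still crosses the p < 0.05 line by chance, and if only those get published the literature fills up with flukes.

# Simulate many two-group experiments in which nothing is really going on and
# count how often the conventional p < 0.05 threshold is crossed anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_per_group = 10_000, 30

false_positives = 0
for _ in range(n_experiments):
    treatment = rng.normal(0, 1, n_per_group)   # no true effect in either group
    control = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        false_positives += 1

print(f"'significant' despite no real effect: {false_positives / n_experiments:.1%}")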

Thus among US Caucasians, roughly 5 in 6 of those who test positive for Helicobacter are actually carrying it. With that out of the way, let’s come back to Ioannidis. In his paper “Why Most Published Research Findings Are False,” Ioannidis draws the analogy between scientific studies and the interpretation of medical tests. He assumes that because of publication bias, most negative findings go unpublished and the literature comprises mostly positive results. If scientists are testing improbable hypotheses, the majority of positive results will be false positives, just as the majority of tests for Lyme disease, absent other risk factors, will be false positives.

This moves us toward the domain of the Helicobacter pylori example, where the majority of positive results are true positives. Ioannidis is overly pessimistic because he makes unrealistic assumptions about the kinds of hypotheses that researchers decide to test. Of course, this is all theoretical speculation. If we want to actually measure how big of a problem publication bias is, we need to know (1) what fraction of tested hypotheses are actually correct, and (2) what fraction of negative results get published. If both fractions are high, we’ve got little to worry about. If both are very low, we’ve got problems. We’ve argued that scientists will tend to test hypotheses with a decent chance of being correct.
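One way to see how those two fractions interact is a back-of-the-envelope calculation in the spirit of Ioannidis's argument. The statistical power (80 per cent), significance threshold (5 per cent) and publication rate for negative results (10 per cent) are assumptions of this sketch, not his figures.

# Illustrative arithmetic: vary the share of tested hypotheses that are true
# and ask how reliable the resulting published record looks.
def published_record(prior_true, power=0.80, alpha=0.05, negatives_published=0.10):
    tp = prior_true * power                  # real effects that are detected
    fp = (1 - prior_true) * alpha            # null effects that luck into significance
    tn = (1 - prior_true) * (1 - alpha)      # correct negative results
    fn = prior_true * (1 - power)            # real effects that are missed
    ppv = tp / (tp + fp)                     # share of positive findings that are true
    correct = tp + negatives_published * tn
    published = tp + fp + negatives_published * (tn + fn)
    return ppv, correct / published

for prior in (0.5, 0.1, 0.01):
    ppv, share_correct = published_record(prior)
    print(f"prior {prior:>4}: {ppv:.0%} of positive findings true, "
          f"{share_correct:.0%} of the published record correct")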


pages: 340 words: 94,464

Randomistas: How Radical Researchers Changed Our World by Andrew Leigh

Albert Einstein, Amazon Mechanical Turk, Anton Chekhov, Atul Gawande, basic income, behavioural economics, Black Swan, correlation does not imply causation, crowdsourcing, data science, David Brooks, Donald Trump, ending welfare as we know it, Estimating the Reproducibility of Psychological Science, experimental economics, Flynn Effect, germ theory of disease, Ignaz Semmelweis: hand washing, Indoor air pollution, Isaac Newton, It's morning again in America, Kickstarter, longitudinal study, loss aversion, Lyft, Marshall McLuhan, meta-analysis, microcredit, Netflix Prize, nudge unit, offshore financial centre, p-value, Paradox of Choice, placebo effect, price mechanism, publication bias, RAND corporation, randomized controlled trial, recommendation engine, Richard Feynman, ride hailing / ride sharing, Robert Metcalfe, Ronald Reagan, Sheryl Sandberg, statistical model, Steven Pinker, sugar pill, TED Talk, uber lyft, universal basic income, War on Poverty

I confess that I’m one of those who is guilty of popularising it without reviewing the follow-up studies: Andrew Leigh, The Economics of Just About Everything, Sydney: Allen & Unwin, 2014, p. 10. 44Benjamin Scheibehenne, Rainer Greifeneder & Peter M. Todd, ‘Can there ever be too many options? A meta-analytic review of choice overload’, Journal of Consumer Research, vol. 37, no. 3, 2010, pp. 409–25. 45Alan Gerber & Neil Malhotra, ‘Publication bias in empirical sociological research’, Sociological Methods & Research, vol. 37, no. 1, 2008, pp. 3–30; Alan Gerber & Neil Malhotra, ‘Do statistical reporting standards affect what is published? Publication bias in two leading political science journals’, Quarterly Journal of Political Science, vol. 3, no. 3, 2008, pp. 313–26; E.J. Masicampo & Daniel R. Lalande, ‘A peculiar prevalence of p values just below .05’, Quarterly Journal of Experimental Psychology, vol. 65, no. 11, 2012, pp. 2271–9; Kewei Hou, Chen Xue & Lu Zhang, ‘Replicating anomalies’, NBER Working Paper 23394, Cambridge, MA: National Bureau of Economic Research, 2017. 46Alexander A.

If researchers conceal findings that run counter to conventional wisdom, then the rest of us may form a mistaken impression of the results of available randomised trials. Like a golfer who takes a mulligan on every hole, discarded trials can leave us in a situation where the scorecard doesn’t reflect reality. One way of countering ‘publication bias’ is to require that studies be registered before they start – by lodging a statement in advance in which the researchers specify the questions they are seeking to answer. This makes it more likely that studies are reported after they finish. In medicine, there are fifteen major clinical trial registers around the world, including ones operated by Australia and New Zealand, China, the European Union, India, Japan, the Netherlands and Thailand.

Olds, David 211 ‘once and done’ campaign, and Smile Train aid charity 158 O’Neill, John, and Black Saturday 2009 13–14 O’Neill, Maura 210 Oportunidades Mexico 117 see also President Vincent Fox Oregon research on health insurance 42 parachute study, and randomised evaluation of 12 Pare, Ambroise, and soldiers’ gunpowder burns 22–3 parenting programs 68–9 and Chicago ‘Parent Academy’ 9 and Incredible Years Basic Parenting Programme 69 and randomised evaluations 70 ‘Triple P’ positive parenting program 68–9 ‘partial equilibrium’ effect 191 Peirce, Charles Sanders 49–51 Perry, Rick 150–1 Perry Preschool 66–8, 71, 169, 191–2 see also David Weikart; Evelyn Moore ‘P-hacking’ 195–6 Piaget, Jean 66 Pinker, Stephen 177 placebo effect 10, 29–31, 34, 138, 192 and John Haygarth 23–4 placebo surgery 18–21 see also sham surgery Planet Money 103 policing programs 91–4, 209 ‘broken windows policing’ 209 and ‘hot spots’ policing 93 and ‘problem oriented policing’ 94 and randomised evaluations 94 see also criminal justice experiments; Lawrence Sherman; Patrick Murphy; Rudi Lammers political campaign strategies and Benin political campaign 160 and control groups 148, 155 and ‘deep canvassing’ 163–4 and Harold Gosnell 148–50 and lobbying in US 162 and online campaigning 154–5 and political speeches 160–1 and ‘robocalls’ 152 and Sierra Leone election debates 161 and use of ‘social pressure’ 151–2 see also Get Out the Vote Pope Benedict XVI 119 ‘power of free’ theory 112 pragmatism 50 see also Charles Sanders Pierce ‘problem oriented policing’ 94 Programme for International Student Assessment 73 Progresa Mexico 117–18 see also President Ernesto Zedillo Project Independence 60–1 see also Ben Graber; Judith Gueron; Manpower Demonstration Research Corporation (MDRC) Project STAR experiment 81 Promise Academy 78–9 Prospera Mexico 118 psychology experiments 50–1, 143, 170, 177, 196 see also Charles Sanders Pierce; Joseph Jastrow ‘publication bias’ 199 Pyrotron 14–15 see also Andrew Sullivan Quintanar, Maricela 38–40 Quora 131 RAND Health Insurance Experiment 41, 169 randomised auditing 174–5 randomised trials see also A/B testing and ‘anchoring’ effect 133 and the book of Daniel 22 and Community Led Sanitation 116 and control groups 13, 67–8, 74, 78, 82 and data collection 171–2 and the driving licence experiment 109 and the ‘experimental idea’ 194 fairness of 37, 100, 177, 185 and ‘fixed mindset’ 6 and ‘general equilibrium’ effect 191 and the ‘gold standard’ 194 and ‘growth mindset’ 6 and ‘healthy cohort’ effect 12 and Highest Paid Person’s Opinion (HiPPO) 6 and Kenyan mini-bus driver experiments 115–16 and ‘natural experiments’ 193 and N-of-1 168–9 and the No Child Left Behind Act 210 and ‘the paradox of choice’ 195 and ‘partial equilibrium’ effect 191 and ‘publication bias’ 199 and replication of 90, 124, 195, 197–8 and sex education 119–20 and single-centre trials 197 and ‘virginity pledges’ in the US 46–7 randomistas, Angus Deaton Nobel laureate on 12 Read India 188 see also Rukmini Banerji Reagan, President Ronald 59, 151 Registry for International Development Impact Evaluations 199 replication 90, 195, 197–8 ‘restorative justice conferencing’ 84 restorative justice experiments 85–6, 182 Results for America 211 Rhinehart, Luke, and The Dice Man 180 Roach, William 52 ‘robocalls’ 152 Romney, Mitt 147 Rossi, Peter 190 ‘Rossi’s Law’ 190, 206 Rothamsted Experimental Centre 53 Rudder, Christian 130 see also OkCupid Sachs, Jeffrey 121 Sackett, David 27, 206 Sacred Heart Mission 36 Salk, Jonas 168 Salvation Army’s 
‘Red Kettle Christmas drive 157 Sandburg, Sheryl 144 Saut, Fabiola Vasquez 110 see also Acayucan road experiment ‘scaling proven success,’ and ‘Development Innovation Ventures’ 210 Scared Straight 7–8, 94, 98–9, 189 see also Danny Glover; James Finckenauer Schmidt, Eric, and Google 143 Schwarzenegger, Arnold 75, 173 Science 163 ‘Science of Philanthropy Initiative’ 159 scurvy treatment trials 3–5, 16 see also Gilbert Blane; James Cook; James Lind; William Stark Second Chance Act 210 Seeger, Pete, and ‘The Draft Dodger Rag’ 42 Semelweiss, Ignaz 25 Sesame Street 63–5, 83 see also Joan Cooney sex education 119–20 sham surgery trials 19–20, 182 and ‘clinical equipoise’ 21 Sherman, Lawrence 91–4, 101 ‘Shoes for Better Tomorrows’ (TOMS) 113–15 see also Blake Mycoskie; Bruce Wydick Sierra Leone election debates 161 see also Saa Badabla SimCalc, and online learning tools 77 ‘single subject’ trials 168–9 see also N-of-1 Siroker, Dan 148 Sliding Doors 9 Smile Train aid charity, and ‘once and done’ campaign 158 social experiments large-scale 41 social field experiments and control groups 37, 39–41, 139 and credit card upgrades 132–3 and pay rates 136–7 and retail discounts 133 and ‘split cable’ techniques 139–40 and Western Union money transfers 130 social program trials and Kenyan electricity trial 110 and smoking deterrents 47–8 see also Acayucan road experiment; neighbourhood project social service agencies 36, 69 ‘soft targeting’ 36 ‘split cable’ technique 139–40 St.


pages: 442 words: 94,734

The Art of Statistics: Learning From Data by David Spiegelhalter

Abraham Wald, algorithmic bias, Anthropocene, Antoine Gombaud: Chevalier de Méré, Bayesian statistics, Brexit referendum, Carmen Reinhart, Charles Babbage, complexity theory, computer vision, confounding variable, correlation coefficient, correlation does not imply causation, dark matter, data science, deep learning, DeepMind, Edmond Halley, Estimating the Reproducibility of Psychological Science, government statistician, Gregor Mendel, Hans Rosling, Higgs boson, Kenneth Rogoff, meta-analysis, Nate Silver, Netflix Prize, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, p-value, placebo effect, probability theory / Blaise Pascal / Pierre de Fermat, publication bias, randomized controlled trial, recommendation engine, replication crisis, self-driving car, seminal paper, sparse data, speech recognition, statistical model, sugar pill, systematic bias, TED Talk, The Design of Experiments, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, Thomas Bayes, Thomas Malthus, Two Sigma

There is nothing in the paper that will reveal the total implausibility of this result – external knowledge is required.7 Publication Bias Scientists examine huge numbers of published articles when they are conducting systematic reviews – trying to bring together the literature and synthesize the current state of knowledge. Such an enterprise becomes hopelessly flawed if what is published is a biased subset of the work that has been carried out, say because negative results have not been submitted for publication, or questionable research practices have led to an unjustified excess of significant results. Statistical techniques have been developed for identifying such publication bias. Suppose we have a set of studies that all set out to test the same null hypothesis that an intervention has no effect.

Then this is just the pattern that would occur were the null hypothesis true, and the only results being reported as significant were those 1 in 20 that tipped over P < 0.05 by luck. Simonsohn and others looked at the published psychological literature which supported the popular idea that giving people an excessive amount of choice led to negative consequences; an analysis of the P-curve suggested there was substantial publication bias and that there was no good evidence for this effect.8 Assessing a Statistical Claim or Story Whether we are journalists, fact-checkers, academics, professionals in government or business or NGOs, or simply members of the public, we are regularly told claims that are based on statistical evidence.
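The logic of the P-curve described above can be sketched with a small simulation; the effect size, sample size and number of trials are arbitrary choices, and this is not Simonsohn's actual procedure. Keep only the significant results and ask whether their p-values pile up near zero, as real effects produce, or spread evenly between 0 and 0.05, as flukes do.

# Simulate two-group experiments, keep the "significant" ones, and compare how
# their p-values are distributed when the null is true versus a real effect.
import numpy as np
from scipy import stats

def significant_pvalues(effect, n=30, trials=20_000, seed=2):
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(trials):
        a = rng.normal(effect, 1, n)
        b = rng.normal(0, 1, n)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            kept.append(p)
    return np.array(kept)

for label, effect in (("null is true", 0.0), ("real effect", 0.5)):
    ps = significant_pvalues(effect)
    print(f"{label}: {np.mean(ps < 0.01):.0%} of significant p-values fall below 0.01")

Under the null the significant p-values spread evenly, so only about a fifth fall below 0.01; with a genuine effect they cluster at the small end, which is the signature the P-curve looks for.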

Simonsohn, ‘False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant’, Psychological Science 22:11 (November 2011), 1359–66. 7. A. Gelman and D. Weakliem, ‘Of Beauty, Sex and Power’, American Scientist 97:4 (2009), 310–16. 8. U. Simonsohn, L. D. Nelson and J. P. Simmons, ‘P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results’, Perspectives on Psychological Science 9:6 (November 2014), 666–81. 9. For more on intelligent openness, see Royal Society, Science as an Open Enterprise (2012). Onora O’Neill’s perspectives on trustworthiness are brilliantly explained in her TedX talk ‘What We Don’t Understand About Trust’ (June 2013). 10.


pages: 741 words: 199,502

Human Diversity: The Biology of Gender, Race, and Class by Charles Murray

23andMe, affirmative action, Albert Einstein, Alfred Russel Wallace, Asperger Syndrome, assortative mating, autism spectrum disorder, basic income, behavioural economics, bioinformatics, Cass Sunstein, correlation coefficient, CRISPR, Daniel Kahneman / Amos Tversky, dark triade / dark tetrad, domesticated silver fox, double helix, Drosophila, emotional labour, epigenetics, equal pay for equal work, European colonialism, feminist movement, glass ceiling, Gregor Mendel, Gunnar Myrdal, income inequality, Kenneth Arrow, labor-force participation, longitudinal study, meritocracy, meta-analysis, nudge theory, out of africa, p-value, phenotype, public intellectual, publication bias, quantitative hedge fund, randomized controlled trial, Recombinant DNA, replication crisis, Richard Thaler, risk tolerance, school vouchers, Scientific racism, selective serotonin reuptake inhibitor (SSRI), Silicon Valley, Skinner box, social intelligence, Social Justice Warrior, statistical model, Steven Pinker, The Bell Curve by Richard Herrnstein and Charles Murray, the scientific method, The Wealth of Nations by Adam Smith, theory of mind, Thomas Kuhn: the structure of scientific revolutions, twin studies, universal basic income, working-age population

There are several indications that such decisions have been a problem with stereotype threat research: Replications often fail to confirm the earlier results.[36] The evidence for stereotype threat has dissipated over time.37 Publication bias (failure to report negative results) appears to have been a reality.[38] In 2019, scholars at the University of Minnesota dealt with these and other issues in the most comprehensive meta-analysis of stereotype threat to date, focusing on the high-stakes test settings in which stereotype threat should theoretically cause the most problems. For the studies relevant to high-stakes settings, the effect size of stereotype threat was –.14 (lowering test scores), a small effect that was further reduced to –.09 after correcting for publication bias. The authors summarized their findings as follows: Based on the results of the focal analysis, operational and motivational subsets, and publication bias analyses, we conclude that the burden of proof shifts back to those that claim that stereotype threat exerts a substantial effect on standardized test takers.

Our best estimate of stereotype threat effects within groups in settings with conditions most similar to operational testing is small and inflated by publication bias.39 Given this assessment from the largest and most rigorous meta-analysis of a quarter century of attempts to demonstrate stereotype threat, it seems unlikely that a significant role for stereotype threat exists.

Because so much of the controversy involves abstruse psychometric issues, I take his conclusion seriously: To conclude, we estimated a small average effect of stereotype threat on the MSSS [math, science, and spatial skills] test-performance of school-aged girls [d = –0.22]; however, the studies show large variation in outcomes, and it is likely that the effect is inflated due to publication bias. This finding leads us to conclude that we should be cautious when interpreting the effects of stereotype threat on children and adolescents in the STEM realm. To be more explicit, based on the small average effect size in our meta-analysis, which is most likely inflated due to publication bias, we would not feel confident to proclaim that stereotype threat manipulations will harm mathematical performance of girls in a systematic way or lead women to stay clear from occupations in the STEM domain.


pages: 147 words: 42,682

Facing Reality: Two Truths About Race in America by Charles Murray

2021 United States Capitol attack, 23andMe, affirmative action, Black Lives Matter, centre right, correlation coefficient, critical race theory, Donald Trump, feminist movement, gentrification, George Floyd, Gunnar Myrdal, income inequality, invention of agriculture, longitudinal study, low skilled workers, medical malpractice, meta-analysis, publication bias, school vouchers, Silicon Valley, The Bell Curve by Richard Herrnstein and Charles Murray, War on Poverty

The former, coauthored by one of the world’s most highly regarded quantitative social science methodologists (Jelte Wicherts), concluded that “based on the small average effect size in our meta-analysis, which is most likely inflated due to publication bias, we would not feel confident to proclaim that stereotype manipulations will harm mathematic performance of girls in a systematic way.” (p. 41). The latter article, written by a team of psychologists at the University of Minnesota, concluded, “Based on the result of the focal analysis, operational and motivational subsets, and publication bias analyses, we conclude that the burden of proof shifts back to those that claim that stereotype threat exerts a substantial effect on standardized test takers.”

It was seized upon so uncritically that by 2003, just eight years after its debut, it was already covered in two-thirds of introductory psychology textbooks. Since 2015, its reputation has been battered by a series of failures to replicate the effects seen in early studies and by evidence of “publication bias” – the tendency of scholars to fail to publish negative results. Two of the most rigorous critiques leave little room for the advocates of stereotype threat to make their case: Paulette C. Flore and Jelte M. Wicherts, “Does Stereotype Threat Influence Performance of Girls in Stereotyped Domains?


pages: 284 words: 79,265

The Half-Life of Facts: Why Everything We Know Has an Expiration Date by Samuel Arbesman

Albert Einstein, Alfred Russel Wallace, Amazon Mechanical Turk, Andrew Wiles, Apollo 11, bioinformatics, British Empire, Cesare Marchetti: Marchetti’s constant, Charles Babbage, Chelsea Manning, Clayton Christensen, cognitive bias, cognitive dissonance, conceptual framework, data science, David Brooks, demographic transition, double entry bookkeeping, double helix, Galaxy Zoo, Gregor Mendel, guest worker program, Gödel, Escher, Bach, Ignaz Semmelweis: hand washing, index fund, invention of movable type, Isaac Newton, John Harrison: Longitude, Kevin Kelly, language acquisition, Large Hadron Collider, life extension, Marc Andreessen, meta-analysis, Milgram experiment, National Debt Clock, Nicholas Carr, P = NP, p-value, Paul Erdős, Pluto: dwarf planet, power law, publication bias, randomized controlled trial, Richard Feynman, Rodney Brooks, scientific worldview, SimCity, social contagion, social graph, social web, systematic bias, text mining, the long tail, the scientific method, the strength of weak ties, Thomas Kuhn: the structure of scientific revolutions, Thomas Malthus, Tyler Cowen, Tyler Cowen: Great Stagnation

Increasingly precise measurement allows us to often be more accurate in what we are looking for. And these improvements frequently dial the effects downward. But the decline effect is not only due to measurement. One other factor involves the dissemination of measurements, and it is known as publication bias. Publication bias is the idea that the collective scientific community and the community at large only know what has been published. If there is any sort of systematic bias in what is being published (and therefore publicly measured), then we might only be seeing some of the picture. The clearest example of this is in the world of negative results.

., 174 Godwin’s law, 105 Goldbach’s Conjecture, 112–13 Goodman, Steven, 107–8 Gould, Stephen Jay, 82 grammar: descriptive, 188–89 prescriptive, 188–89, 194 Granovetter, Mark, 76–78 Graves’ disease, 111 Great Vowel Shift, 191–93 Green, George, 105–6 growth: exponential, 10–14, 44–45, 46–47, 54–55, 57, 59, 130, 204 hyperbolic, 59 linear, 10, 11 Gumbel, Bryant, 41 Gutenberg, Johannes, 71–73, 78, 95 Hamblin, Terry, 83 Harrison, John, 102 Hawthorne effect, 55–56 helium, 104 Helmann, John, 162 Henrich, Joseph, 58 hepatitis, 28–30 hidden knowledge, 96–120 h-index, 17 Hirsch, Jorge, 17 History of the Modern Fact, A (Poovey), 200 Holmes, Sherlock, 206 homeoteleuton, 89 Hooke, Robert, 21, 94 Hull, David, 187–88 human anatomy, 23 human computation, 20 hydrogen, 151 hyperbolic growth rate, 59 idiolect, 190 impact factors, 16–17 inattentional blindness (change blindness), 177–79 India, 140–41 informational index funds, 197 information transformation, 43–44, 46 InnoCentive, 96–98, 101, 102 innovation, 204 population size and, 135–37, 202 prizes for, 102–3 simultaneous, 104–5 integrated circuits, 42, 43, 55, 203 Intel Corporation, 42 interdisciplinary research, 68–69 International Bureau of Weights and Measures, 47 Internet, 2, 40–41, 53, 198, 208, 211 Ioannidis, John, 156–61, 162 iPhone, 123 iron: magnetic properties of, 49–50 in spinach, 83–84 Ising, Ernst, 124, 125–26, 138 isotopes, 151 Jackson, John Hughlings, 30 Johnson, Steven, 119 Journal of Physical and Chemical Reference Data, 33–35 journals, 9, 12, 16–17, 32 Kahneman, Daniel, 177 Kay, Alan, 173 Kelly, Kevin, 38, 46 Kelly, Stuart, 115 Kelvin, Lord, 142–43 Kennaway, Kristian, 86 Keynes, John Maynard, 172 kidney stones, 52 kilogram, 147–48 Kiribati, 203 Kissinger, Henry, 190 Kleinberg, Jon, 92–93 knowledge and facts, 5, 54 cumulative, 56–57 erroneous, 78–95, 211–14 half-lives of, 1–8, 202 hidden, 96–120 phase transitions in, 121–39, 185 spread of, 66–95 Koh, Heebyung, 43, 45–46, 56 Kremer, Michael, 58–61 Kuhn, Thomas, 163, 186 Lambton, William, 140 land bridges, 57, 59–60 language, 188–94 French Canadians and, 193–94 grammar and, 188–89, 194 Great Vowel Shift and, 191–93 idiolect and, 190 situation-based dialect and, 190 verbs in, 189 voice onset time and, 190 Large Hadron Collider, 159 Laughlin, Gregory, 129–31 “Laws Underlying the Physics of Everyday Life Really Are Completely Understood, The” (Carroll), 36–37 Lazarus taxa, 27–28 Le Fanu, James, 23 LEGO, 184–85, 194 Lehman, Harvey, 13–14, 15 Leibniz, Gottfried, 67 Lenat, Doug, 112 Levan, Albert, 1–2 Liben-Nowell, David, 92–93 libraries, 31–32 life span, 53–54 Lincoln, Abraham, 70 linear growth, 10, 11 Linnaeus, Carl, 22, 204 Lippincott, Sara, 86 Lipson, Hod, 113 Little Science, Big Science (Price), 13 logistic curves, 44–46, 50, 116, 130, 203–4 longitude, 102 Long Now Foundation, 195 long tails: of discovery, 38 of expertise, 96, 102 of life, 38 of popularity, 103 Lou Gehrig’s disease (ALS), 98, 100–101 machine intelligence, 207 Magee, Chris, 43, 45–46, 56, 207–8 magicians, 178–79 magnetic properties of iron, 49–50 Maldives, 203 Malthus, Thomas, 59 mammal species, 22, 23, 128 extinct, 28 manuscripts, 87–91, 114–16 Marchetti, Cesare, 64 Marsh, Othniel, 80–81, 169 mathematics, 19, 51, 112–14, 124–25, 132–35 Matthew effect, 103 Mauboussin, Michael, 84 Mayor, Michel, 122 McGovern, George, 66 McIntosh, J. 
S., 81–82 McWhorter, John, 191 measurement, 142–70 decline effect and, 155–56, 157 kilogram in, 147–48 meter in, 143–47 of Mount Everest, 140–41 precision and accuracy in, 149–50 prefixes in, 47–48, 142, 147 publication bias and, 156 of trees, 142 Mechanical Turk, 180–82 medical knowledge, 23, 32, 51–52, 53, 122, 197, 198, 208 about cirrhosis and hepatitis, 28–30 MEDLINE, 99–100 memorization, 198 Mendel, Gregor, 106 Mendeley, 117, 118 Merton, Robert, 61, 103, 104 mesofacts, 6–7, 195, 203 meta-analysis, 107–8 cumulative, 109–10 meter, 143–47 Milgram, Stanley, 24, 167 mobile phone calls, 69, 77 Moon, 2, 126–28, 129, 138, 174, 203 Moore, Gordon, 42, 55, 56 Moore’s Law, 41–43, 46, 48, 51, 55, 56, 64, 203 Moriarty, James, 85–86 Mount Everest, 140–41 Mueller, John, 165 Munroe, Randall, 84, 153–54 Murphy, Tom, 55 mutation, 87–94 Napier’s constant, 12 National Institutes of Health, 17 natural selection, 104–5, 187 Nature, 122, 154, 156, 162, 166 negative results, 162 Neptune, 154–55, 183 network science, 74–78 neuroscience, 48 New Scientist, 85 Newton, Isaac, 21, 36, 67, 94, 174, 186 New Yorker, 86 New York Times, 20, 75, 174 Nobel laureates, 18 nosebleeds, 180–82 Noyce, Robert, 42 null hypothesis, 152 Obama, Barack, 179 Oliver, John, 159 Onnela, Jukka-Pekka, 69, 77 On the Origin of Species (Darwin), 79, 187 opera, 14–15 orders, 60 Original Theory or New Hypothesis of the Universe, An (Wright), 121–22 Pacioli, Luca, 200 paleography, 87–90 paradigm, 186 paradigm shift, 186, 187 Parmentier, Antoine, 102 particle accelerator, 51 Patent Office, 54 Pauly, Daniel, 172–73 Pepys, Samuel, 52 periodic table, 50, 150–52, 182 Petroski, Henry, 49 phase transitions, 207 in acceptance and assimilation of knowledge, 185, 186 in facts, 121–39, 185 Ising model and, 124, 125–26, 138 in physics, 123–24, 126 Philosophical Transactions of the Royal Society of London, 9, 12 physics, 32 Planck, Max, 186–88 planets, 6, 121–23, 128, 129–31, 132, 183–84 Planet X, 154–56, 160 Pluto, 122–23, 128, 138, 148–49, 155, 183–84 polio, 52 Pony Express, 70 Poovey, Mary, 200 Popeye the Sailor, 83, 213 population: innovation and, 135–37, 202 makeup of, 61 size of, 2, 6, 57–61, 122, 135–37, 204 Portugal, 207 posterior probability, 159 potatoes, 102 preferential attachment, 103 prefixes, 47–48, 142, 147 Price, Derek J. de Solla, 9, 12–13, 15, 17, 32, 47, 50, 103, 166–67 prices, 196–97 printing press, 70–74, 78, 115 prior probability, 159 Pritchett, Lant, 186 Prize4Life Foundation, 97–98 productivity, 55–56 programmed cell death, 111, 194 proteomics, 48 Proteus phenomenon, 161 publication bias, 156 p-values, 152–54, 156, 158 P versus NP, 133–35 “Quantitative Measures of the Development of Science” (Price), 12 Quebec, 193–94 Queloz, Didier, 122 radioactivity, 2–3, 29, 33 Raynaud’s syndrome, 99, 110 reading, 197–98 Real Time Statistics Project, 195 reinventions, 104–5 Rendezvous with Rama (Clarke), 19 Rényi, Alfréd, 104 replication, 161–62 Riggs, Elmer, 81 Robinson, Karen, 107–8 robots, 46 Royal Society, 94–95 Roychowdhury, Vwani, 91, 103–4 Russell, C.



pages: 404 words: 92,713

The Art of Statistics: How to Learn From Data by David Spiegelhalter

Abraham Wald, algorithmic bias, Antoine Gombaud: Chevalier de Méré, Bayesian statistics, Brexit referendum, Carmen Reinhart, Charles Babbage, complexity theory, computer vision, confounding variable, correlation coefficient, correlation does not imply causation, dark matter, data science, deep learning, DeepMind, Edmond Halley, Estimating the Reproducibility of Psychological Science, government statistician, Gregor Mendel, Hans Rosling, Higgs boson, Kenneth Rogoff, meta-analysis, Nate Silver, Netflix Prize, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, p-value, placebo effect, probability theory / Blaise Pascal / Pierre de Fermat, publication bias, randomized controlled trial, recommendation engine, replication crisis, self-driving car, seminal paper, sparse data, speech recognition, statistical model, sugar pill, systematic bias, TED Talk, The Design of Experiments, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, Thomas Bayes, Thomas Malthus, Two Sigma

There is nothing in the paper that will reveal the total implausibility of this result—external knowledge is required.7 Publication Bias Scientists examine huge numbers of published articles when they are conducting systematic reviews—trying to bring together the literature and synthesize the current state of knowledge. Such an enterprise becomes hopelessly flawed if what is published is a biased subset of the work that has been carried out, say because negative results have not been submitted for publication, or questionable research practices have led to an unjustified excess of significant results. Statistical techniques have been developed for identifying such publication bias. Suppose we have a set of studies that all set out to test the same null hypothesis that an intervention has no effect.

Then this is just the pattern that would occur were the null hypothesis true, and the only results being reported as significant were those 1 in 20 that tipped over P < 0.05 by luck. Simonsohn and others looked at the published psychological literature which supported the popular idea that giving people an excessive amount of choice led to negative consequences; an analysis of the P-curve suggested there was substantial publication bias and that there was no good evidence for this effect.8 Assessing a Statistical Claim or Story Whether we are journalists, fact-checkers, academics, professionals in government or business or NGOs, or simply members of the public, we are regularly told claims that are based on statistical evidence.

Simonsohn, ‘False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant’, Psychological Science 22:11 (November 2011), 1359–66. 7. A. Gelman and D. Weakliem, ‘Of Beauty, Sex and Power’, American Scientist 97:4 (2009), 310–16. 8. U. Simonsohn, L. D. Nelson and J. P. Simmons, ‘P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results’, Perspectives on Psychological Science 9:6 (November 2014), 666–81. 9. For more on intelligent openness, see Royal Society, Science as an Open Enterprise (2012). Onora O’Neill’s perspectives on trustworthiness are brilliantly explained in her TedX talk ‘What We Don’t Understand About Trust’ (June 2013). 10.


pages: 428 words: 126,013

Lost Connections: Uncovering the Real Causes of Depression – and the Unexpected Solutions by Johann Hari

Adam Curtis, autism spectrum disorder, basic income, Berlin Wall, call centre, capitalist realism, correlation does not imply causation, Donald Trump, gig economy, income inequality, Jeff Bezos, John Snow's cholera map, Joi Ito, longitudinal study, meta-analysis, Naomi Klein, Occupy movement, open borders, placebo effect, precariat, publication bias, randomized controlled trial, Rat Park, risk tolerance, Ronald Reagan, Rutger Bregman, selective serotonin reuptake inhibitor (SSRI), Stephen Fry, sugar pill, TED Talk, the scientific method, The Spirit Level, Tipper Gore, twin studies, universal basic income, urban planning, zero-sum game

., “Calculations are correct: reconsidering Fountoulakis & Möller’s re-analysis of the Kirsch data,” International Journal of Neuropsychopharmacology 15, no. 8 (August 2012): 1193–1198, doi: https://doi.org/10.1017/S1461145711001878; Erik Turner et al., “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” N Engl J Med 358 (2008): 252–260, doi: 10.1056/NEJMsa065779. This is called “publication bias.” Evans, Emperor’s New Drugs, 25. My friend Dr. Ben Goldacre has done outstanding work on publication bias. See http://www.badscience.net/category/publication-bias/ for some background. Intrigued, Irving joined Evans, Emperor’s New Drugs, 26–7. Those twenty-seven patients Ibid., 41. “dirty little secret” Ibid., 38. In the end, in court, Ibid., 40; http://web.law.columbia.edu/sites/default/files/microsites/career-services/Driven%20to%20Settle.pdf; http://www.independent.co.uk/news/business/news/drug-firm-settles-seroxat-research-claim-557943.html; http://news.bbc.co.uk/1/hi/business/3631448.stm; http://www.pharmatimes.com/news/gsk_to_pay_$14m_to_settle_paxil_fraud_claims_995307; http://www.nbcnews.com/id/5120989/ns/business-us_business/t/spitzer-sues-glaxosmithkline-over-paxil/; http://study329.org/; http://science.sciencemag.org/content/304/5677/1576.full?

That’s why the drug companies conduct their scientific studies in secret, and afterward, they only publish the results that make their drugs look good, or that make their rivals’ drugs look worse. They do this for exactly the same reasons that (say) KFC would never release information telling you that fried chicken isn’t good for you. This is called “publication bias.”7 Of all the studies drug companies carry out, 40 percent are never released to the public, and lots more are only released selectively, with any negative findings left on the cutting room floor. So, this e-mail explained to Irving, you have, up to now, been looking only at the parts of the scientific studies that the drug companies want us to see.

See psychedelic drugs psychedelic drugs effect of, here percentage of unpleasant experiences, here psychedelic drugs, spiritual experiences caused by 1950s-60s research on, here as escape from ego, here, here, here life-changing effects, here meditation as means of preserving effects of, here Roland’s experiments on, here sense of connection to others following, here, here, here similarity to meditation experience, here, here as treatment for depression, here, here, here psychiatrists and confusion of grief with depression, here focus on biological component of depression, here psychological causes of depression broad range of, here as too often ignored, here See also bio-psycho-social model of depression psychological changes as treatment for depression meditation and, here types of, here See also reconnecting strategies psychotherapy, as treatment for depression, here publication bias, in drug testing for antidepressant, here public engagement as treatment for depression, Kotti neighborhood protest and, here, here, here, here Putnam, Robert, here reactive model of depression vs. endogenous theory, here, here, here impact of research on, here, here research supporting, here, here reconnecting strategies, here author’s successful use of, here large changes required for, here, here as social/psychological antidepressant, here time and confidence needed to implement, here, here See also childhood trauma, overcoming; future, restoring; natural world, reconnection to; people, reconnection to; self/ego, overcoming addiction to; social prescribing; status and respect, reconnection to; values, meaningful, reconnecting to; work, reconnecting to relationships, extrinsic motivations and, here reSTART Life Internet addiction center, here, here Richards, Bill, here, here, here Rumspringa, here Ryan, Richard, here São Paulo, Brazil, banning of outdoor advertising, here Sapirstein, Guy analysis of antidepressant drug testing, here responses to drug testing analysis, here Sapolsky, Robert on genetic factors in depression, here recurring dream of, here research on baboon status hierarchies, here on stress of low or insecure status, here Schwenke, Regina, here Selective Serotonin Reuptake Inhibitors (SSRIs) and chemical imbalance model of depression, here, here effect of, as short-lived, here side effects of, here tests on effectiveness of, here self/ego effect of intrinsic vs. extrinsic motivation on, here experience of nature as escape from, here, here individual as prisoner of, in depression, here as protective barrier, here psychedelic drug experience as escape from, here, here, here resistance to diminishment of in some people, here Western vs.


pages: 321 words: 97,661

How to Read a Paper: The Basics of Evidence-Based Medicine by Trisha Greenhalgh

call centre, complexity theory, conceptual framework, confounding variable, correlation coefficient, correlation does not imply causation, deskilling, knowledge worker, longitudinal study, meta-analysis, microbiome, New Journalism, p-value, personalized medicine, placebo effect, publication bias, randomized controlled trial, selection bias, systematic bias, systems thinking, the scientific method

Remember, too, that the results of an RCT may have limited applicability as a result of exclusion criteria (rules about who may not be entered into the study), inclusion bias (selection of trial participants from a group that is unrepresentative of everyone with the condition (see section ‘Whom is the study about?’)), refusal (or inability) of certain patient groups to give consent to be included in the trial, analysis of only pre-defined ‘objective’ endpoints which may exclude important qualitative aspects of the intervention (see Chapter 12) and publication bias (i.e. the selective publication of positive results, often but not always because the organisation that funded the research stands to gain or lose depending on the findings [9] [10]). Furthermore, RCTs can be well or badly managed [2], and, once published, their results are open to distortion by an over-enthusiastic scientific community or by a public eager for a new wonder drug [13].

The authors report a series of artificial dice-rolling experiments in which red, white and green dice, respectively, represented different therapies for acute stroke. Overall, the ‘trials’ showed no significant benefit from the three therapies. However, the simulation of a number of perfectly plausible events in the process of meta-analysis—such as the exclusion of several of the ‘negative’ trials through publication bias (see section ‘Randomised controlled trials’), a subgroup analysis that excluded data on red dice therapy (because, on looking back at the results, red dice appeared to be harmful), and other, essentially arbitrary, exclusions on the grounds of ‘methodological quality’—led to an apparently highly significant benefit of ‘dice therapy’ in acute stroke.
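
A rough way to reproduce the spirit of that dice-rolling demonstration is to simulate many small trials of a treatment that truly does nothing and pool them with and without the kind of exclusions the authors describe. The sketch below is my own illustration under arbitrary assumptions (the number of trials, their size, and the exclusion rule), not the published simulation; it uses a simple inverse-variance pooled estimate and shows how quietly discarding the most negative trials pushes the pooled result toward apparent benefit.

```python
# Illustration of how excluding 'negative' trials biases a pooled estimate.
# Assumed setup, not the authors' actual dice simulation.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_patients = 40, 30

# Each trial estimates a treatment effect whose true value is zero.
estimates = rng.normal(loc=0.0, scale=1.0 / np.sqrt(n_patients), size=n_trials)
variances = np.full(n_trials, 1.0 / n_patients)

def pooled(est, var):
    """Fixed-effect (inverse-variance weighted) pooled estimate and its SE."""
    w = 1.0 / var
    mean = np.sum(w * est) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mean, se

all_mean, all_se = pooled(estimates, variances)

# 'Publication bias': quietly drop the ten most unfavourable trials.
keep = np.argsort(estimates)[10:]
biased_mean, biased_se = pooled(estimates[keep], variances[keep])

for label, (m, se) in [("all trials", (all_mean, all_se)),
                       ("negative trials excluded", (biased_mean, biased_se))]:
    print(f"{label}: pooled effect = {m:.3f}, z = {m / se:.2f}")
```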

Eysenck's reservations about meta-analysis are borne out in the infamously discredited meta-analysis that demonstrated (wrongly) that there was significant benefit to be had from giving intravenous magnesium to heart attack victims. A subsequent megatrial involving 58 000 patients (ISIS-4) failed to find any benefit whatsoever, and the meta-analysts' misleading conclusions were subsequently explained in terms of publication bias, methodological weaknesses in the smaller trials and clinical heterogeneity [22] [23]. (Incidentally, for more debate on the pros and cons of meta-analysis versus megatrials, see this recent paper [24].) Eysenck's mathematical naiveté is embarrassing (‘if a medical treatment has an effect so recondite and obscure as to require a meta-analysis to establish it, I would not be happy to have it used on me’), which is perhaps why the editors of the second edition of the ‘Systematic reviews’ book dropped his chapter from their collection.


pages: 250 words: 64,011

Everydata: The Misinformation Hidden in the Little Data You Consume Every Day by John H. Johnson

Affordable Care Act / Obamacare, autism spectrum disorder, Black Swan, business intelligence, Carmen Reinhart, cognitive bias, correlation does not imply causation, Daniel Kahneman / Amos Tversky, data science, Donald Trump, en.wikipedia.org, Kenneth Rogoff, labor-force participation, lake wobegon effect, Long Term Capital Management, Mercator projection, Mercator projection distort size, especially Greenland and Africa, meta-analysis, Nate Silver, obamacare, p-value, PageRank, pattern recognition, publication bias, QR code, randomized controlled trial, risk-adjusted returns, Ronald Reagan, selection bias, statistical model, The Signal and the Noise by Nate Silver, Thomas Bayes, Tim Cook: Apple, wikimedia commons, Yogi Berra

“P-hacking” (named after p-values) is a term used when researchers “collect or select data or statistical analyses until nonsignificant results become significant,” according to a PLoS Biology article.36 This is similar to cherry picking, as p-hacking researchers simply throw things at the wall until something sticks, metaphorically speaking (although there probably are some scientists who actually throw things at the wall until something sticks…). A fascinating New Yorker article (is there any other kind?) examines publication bias as a possible cause of the “decline effect,” in which the size of a statistically significant effect declines over time. Why? One statistician found that “ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for,” making it perhaps less likely that future studies would be able to replicate these results.37 The Journal of Epidemiology and Community Health published a paper finding no evidence that reduced street lighting at night increased traffic collisions or crime in England and Wales.
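
The recipe of collecting data or analyses "until nonsignificant results become significant" is easy to simulate. The sketch below is a hypothetical illustration (the batch size, maximum sample, and number of simulations are my own assumptions): with no true effect, a fixed-sample test produces false positives about 5 percent of the time, while repeatedly peeking at the data and stopping as soon as P < 0.05 produces them far more often.

```python
# How optional stopping ("p-hacking") inflates the false-positive rate.
# Hypothetical parameters; purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def one_experiment(peek_every=10, max_n=200):
    """Add participants in batches, testing after each batch; stop at P < 0.05."""
    data = np.array([])
    while data.size < max_n:
        data = np.concatenate([data, rng.normal(0.0, 1.0, peek_every)])
        p = stats.ttest_1samp(data, 0.0).pvalue
        if p < 0.05:
            return True          # "significant" despite a true effect of zero
    return False

n_sims = 2000
hacked_rate = np.mean([one_experiment() for _ in range(n_sims)])
fixed_rate = np.mean([stats.ttest_1samp(rng.normal(0.0, 1.0, 200), 0.0).pvalue < 0.05
                      for _ in range(n_sims)])
print(f"fixed sample of 200:        {fixed_rate:.1%} false positives")
print(f"peek every 10, stop early:  {hacked_rate:.1%} false positives")
```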

See also misrepresentation and misinterpretation brain’s hardwiring for, 60–61 challenges in, 54–55 Ioannidis, John, 75 iPhones, 46–48, 58 “Ipse dixit” bias, 94 J Japan earthquake of 2011, 123–125 Jordan, Michael, 53 Journal of Epidemiology and Community Health, 80 Journal of Finance, 139–140 Journal of Safety Research, 20 Journal of the American Medical Informatics Association, 148 Journal of the National Cancer Institute, 69–70 K Katz, David, 22 Keillor, Garrison, 43 L Lake Wobegon effect, 42–43 Landon, Alfred, 132 Law360, 146–148 Lawyer Satisfaction Survey, 146–148 Literary Digest, 132 longevity, 4, 87–92 Los Angeles Times, 17–18 Lotto Stats, 133 Lund, Bob, 10 M magnitude, 77–78, 81 in birth month and health study, 149 map projections, 83–85 margins of error, 38, 68–69 Marie Claire, 34–35 math mistakes, 101–102, 103 mayors/deputy mayors salaries, 35–36 McCarthy, Jenny, 61 McGwire, Mark, 39 meaning, difficulty of extracting from too much data, 4. See also misrepresentation and misinterpretation means, 32–34 definition of, 32 mean trimming, 40 media cherry-picking by, 116 data interpretation by, 75, 81 publication bias and, 80 medians, 32–34 definition of, 32 medical coding errors, 97 Medical News Today, 75 memory of printed vs. online material, 2 Mercator, Gerardus, 83–85 misrepresentation and misinterpretation, 83–103. See also cherry-picking in charts, 87–92 correlation/causation based on, 58–60 data sources and, 99 errors and, 97–99 of food expiration dates, 99–100 in gas tank gauges, 96–97 guessing and, 86 helpful, 96–97 how to be a smart consumer and, 102–103 math mistakes and, 101–102 in the media, 75, 81 “only” and, 95–96 from treating all data equally, 95 trust in expertise and, 93–94 with visuals, 92–94 models, forecasts based on, 125–127 modes, 32–34 definition of, 32 Moore, Michael, 116 Morton Thiokol, 10 Moz.com, 55 multiple comparison problem, 75–76 N National Bureau of Economic Research, 59, 69 National Cancer Institute, 69–70 National Electronic Injury Surveillance System (NEISS), 18 National Foundation for Celiac Awareness, 21 National Weight Control Registry (NWCR), 17 Natural Resources Defense Council, 100 Nest, 100–101 Newman, Mark, 28–29 New York State Office of the Attorney General, 97 New York Times, 66–67 New York Times Magazine, 101 Nielsen, Arthur, Sr., 25 Nike, 53 NPD Group, 21 NWEA Measures of Academic Progress (MAP), 22–23 O Obama, Barack, 23, 27–30 observations, definition of, 13.

., 58, 76, 135 presidential campaigns/elections averages/aggregates and, 27–30, 44 cherry-picking in, 115–116 forecasting, 132, 137 polls and, 37–38, 68–69, 73 sampling and, 20 terms of office and, 41 Princeton Review of schools, 19 printed material vs. online differences in consumption/interpretation of, 7 willingness to question, 93–94 printed vs. online material memory of, 2 probability, 70–71, 81 coincidence and, 138–139 forecasting and, 131 proxies, 49–50 psychology research, 15–16 publication bias, 80 p-values, 71, 72, 79 Q questions/questioning, 7–8 cherry-picking and, 122 correlation vs. causation, 60 of print vs. online information, 93–94 quote mining, 116 R Radio Television Digital News Association, 36 random chance, multiple comparison problem and, 75–76, 80–81 random samples, 65–68 Rate My Professor, 51–52 Reagan, Ronald, 9 recall of printed vs. online material, 2 Reinhart, Carmen, 97–98 relationships, 5–6.


pages: 128 words: 35,958

Getting Back to Full Employment: A Better Bargain for Working People by Dean Baker, Jared Bernstein

2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, Affordable Care Act / Obamacare, Alan Greenspan, American Society of Civil Engineers: Report Card, Asian financial crisis, business cycle, collective bargaining, declining real wages, full employment, George Akerlof, high-speed rail, income inequality, inflation targeting, low interest rates, mass immigration, minimum wage unemployment, new economy, Phillips curve, price stability, publication bias, quantitative easing, Report Card for America’s Infrastructure, rising living standards, selection bias, War on Poverty

[23] In fairness to advocates of inflation targeting, there is a wide range of views as to how strictly we should hold to the target as the primary or only goal of monetary policy. [24] There is also the possibility of publication bias. Given the strong belief by many economists that inflation reduces growth, there may be a reluctance to publish articles that find either insignificant results or even a positive relationship. This sort of publication bias was noted in the case of the minimum wage, where the distribution of published results has an otherwise inexplicable break at zero. If we assume that study results are normally distributed, there should be some number of studies that find a significant positive relationship between higher minimum wages and employment even if the true coefficient for an employment variable is zero (Doucouliagos and Stanley 2009).
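
That otherwise inexplicable break at zero is easy to picture with a toy simulation. The sketch below is purely illustrative (the standard errors and the suppression rule are my assumptions, not Doucouliagos and Stanley's method): study estimates are drawn around a true coefficient of zero, most of the estimates with the "wrong" sign are then suppressed, and the surviving literature ends up clustered on one side of zero.

```python
# Toy illustration of a publication-biased literature with a break at zero.
# Assumed true effect of zero and an arbitrary suppression rule.
import numpy as np

rng = np.random.default_rng(3)
true_effect = 0.0
n_studies = 500

se = rng.uniform(0.02, 0.15, n_studies)          # studies of varying precision
estimates = rng.normal(true_effect, se)           # unbiased estimates around zero

# Suppose estimates suggesting minimum wages *raise* employment are rarely
# written up or accepted: 80% of positive estimates go unpublished.
published = (estimates < 0) | (rng.random(n_studies) < 0.2)

print(f"all studies:        {np.mean(estimates < 0):.0%} negative, "
      f"mean estimate {estimates.mean():+.3f}")
print(f"published studies:  {np.mean(estimates[published] < 0):.0%} negative, "
      f"mean estimate {estimates[published].mean():+.3f}")
```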


Super Thinking: The Big Book of Mental Models by Gabriel Weinberg, Lauren McCann

Abraham Maslow, Abraham Wald, affirmative action, Affordable Care Act / Obamacare, Airbnb, Albert Einstein, anti-pattern, Anton Chekhov, Apollo 13, Apple Newton, autonomous vehicles, bank run, barriers to entry, Bayesian statistics, Bernie Madoff, Bernie Sanders, Black Swan, Broken windows theory, business process, butterfly effect, Cal Newport, Clayton Christensen, cognitive dissonance, commoditize, correlation does not imply causation, crowdsourcing, Daniel Kahneman / Amos Tversky, dark pattern, David Attenborough, delayed gratification, deliberate practice, discounted cash flows, disruptive innovation, Donald Trump, Douglas Hofstadter, Dunning–Kruger effect, Edward Lorenz: Chaos theory, Edward Snowden, effective altruism, Elon Musk, en.wikipedia.org, experimental subject, fake news, fear of failure, feminist movement, Filter Bubble, framing effect, friendly fire, fundamental attribution error, Goodhart's law, Gödel, Escher, Bach, heat death of the universe, hindsight bias, housing crisis, if you see hoof prints, think horses—not zebras, Ignaz Semmelweis: hand washing, illegal immigration, imposter syndrome, incognito mode, income inequality, information asymmetry, Isaac Newton, Jeff Bezos, John Nash: game theory, karōshi / gwarosa / guolaosi, lateral thinking, loss aversion, Louis Pasteur, LuLaRoe, Lyft, mail merge, Mark Zuckerberg, meta-analysis, Metcalfe’s law, Milgram experiment, minimum viable product, moral hazard, mutually assured destruction, Nash equilibrium, Network effects, nocebo, nuclear winter, offshore financial centre, p-value, Paradox of Choice, Parkinson's law, Paul Graham, peak oil, Peter Thiel, phenotype, Pierre-Simon Laplace, placebo effect, Potemkin village, power law, precautionary principle, prediction markets, premature optimization, price anchoring, principal–agent problem, publication bias, recommendation engine, remote working, replication crisis, Richard Feynman, Richard Feynman: Challenger O-ring, Richard Thaler, ride hailing / ride sharing, Robert Metcalfe, Ronald Coase, Ronald Reagan, Salesforce, school choice, Schrödinger's Cat, selection bias, Shai Danziger, side project, Silicon Valley, Silicon Valley startup, speech recognition, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, Streisand effect, sunk-cost fallacy, survivorship bias, systems thinking, The future is already here, The last Blockbuster video rental store is in Bend, Oregon, The Present Situation in Quantum Mechanics, the scientific method, The Wisdom of Crowds, Thomas Kuhn: the structure of scientific revolutions, Tragedy of the Commons, transaction costs, uber lyft, ultimatum game, uranium enrichment, urban planning, vertical integration, Vilfredo Pareto, warehouse robotics, WarGames: Global Thermonuclear War, When a measure becomes a target, wikimedia commons

In other words, in this set of one hundred studies, the base rate of false positives is likely much larger than 5 percent, and so another large part of the replication crisis can likely be explained as a base rate fallacy. Unfortunately, studies are much, much more likely to be published if they show statistically significant results, which causes publication bias. Studies that fail to find statistically significant results are still scientifically meaningful, but both researchers and publications have a bias against them for a variety of reasons. For example, there are only so many pages in a publication, and given the choice, publications would rather publish studies with significant findings over ones with none.
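
Working the base-rate arithmetic through makes the point concrete. The numbers below are illustrative assumptions rather than figures from the book: if only a minority of tested hypotheses are true, then even with a 5 percent significance threshold, a literature made up of significant results will contain a much larger share of false positives.

```python
# Base-rate arithmetic for significant findings (illustrative numbers).
n_hypotheses = 1000
base_rate_true = 0.10   # assumed share of tested hypotheses that are really true
alpha = 0.05            # false-positive rate per test
power = 0.80            # chance a real effect reaches significance

true_hits = n_hypotheses * base_rate_true * power          # 80 true positives
false_hits = n_hypotheses * (1 - base_rate_true) * alpha   # 45 false positives
share_false = false_hits / (true_hits + false_hits)

print(f"significant results: {true_hits + false_hits:.0f}")
print(f"share that are false positives: {share_false:.0%}")   # roughly a third
```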

There are advantages to meta-analyses, as combining data from multiple studies can increase the precision and accuracy of estimates, but they also have their drawbacks. For example, it is problematic to combine data across studies where the designs or sample populations vary too much. They also cannot eliminate biases from the original studies themselves. Further, both systematic reviews and meta-analyses can be compromised by publication bias because they can include only results that are publicly available. Whenever we are looking at the validity of a claim, we first look to see whether a thorough systematic review has been conducted, and if so, we start there. After all, systematic reviews and meta-analyses are commonly used by policy makers in decision making, e.g., in developing medical guidelines.
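
The precision gain from pooling is mechanical: in a simple fixed-effect meta-analysis each study is weighted by the inverse of its variance, and the pooled standard error is the square root of one over the sum of the weights, so it shrinks as studies are added. A minimal sketch under assumed study effects and variances:

```python
# Fixed-effect pooling: the pooled standard error shrinks as studies accumulate.
# Study effect sizes and variances below are assumed for illustration.
import numpy as np

effects = np.array([0.30, 0.12, 0.25, 0.05, 0.18, 0.22])
variances = np.array([0.04, 0.02, 0.05, 0.03, 0.02, 0.06])

weights = 1.0 / variances
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect:  {pooled_effect:.3f}")
print(f"pooled SE:      {pooled_se:.3f}  "
      f"(smallest single-study SE: {np.sqrt(variances.min()):.3f})")
```

The same weighting is why the drawbacks mentioned above matter so much: the pooled estimate can only be as unbiased as the set of studies that made it into the analysis.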

., 38 oil, 105–6 Olympics, 209, 246–48, 285 O’Neal, Shaquille, 246 one-hundred-year floods, 192 Onion, 211–12 On the Origin of Species by Means of Natural Selection (Darwin), 100 OODA loop, 294–95 openness to experience, 250 Operation Ceasefire, 232 opinion, diversity of, 205, 206 opioids, 36 opportunity cost, 76–77, 80, 83, 179, 182, 188, 305 of capital, 77, 179, 182 optimistic probability bias, 33 optimization, premature, 7 optimums, local and global, 195–96 optionality, preserving, 58–59 Oracle, 231, 291, 299 order, 124 balance between chaos and, 128 organizations: culture in, 107–8, 113, 273–80, 293 size and growth of, 278–79 teams in, see teams ostrich with its head in the sand, 55 out-group bias, 127 outliers, 148 Outliers (Gladwell), 261 overfitting, 10–11 overwork, 82 Paine, Thomas, 221–22 pain relievers, 36, 137 Pampered Chef, 217 Pangea, 24–25 paradigm shift, 24, 289 paradox of choice, 62–63 parallel processing, 96 paranoia, 308, 309, 311 Pareto, Vilfredo, 80 Pareto principle, 80–81 Pariser, Eli, 17 Parkinson, Cyril, 74–75, 89 Parkinson’s law, 89 Parkinson’s Law (Parkinson), 74–75 Parkinson’s law of triviality, 74, 89 passwords, 94, 97 past, 201, 271–72, 309–10 Pasteur, Louis, 26 path dependence, 57–59, 194 path of least resistance, 88 Patton, Bruce, 19 Pauling, Linus, 220 payoff matrix, 212–15, 238 PayPal, 72, 291, 296 peak, 105, 106, 112 peak oil, 105 Penny, Jonathon, 52 pent-up energy, 112 perfect, 89–90 as enemy of the good, 61, 89–90 personality traits, 249–50 person-month, 279 perspective, 11 persuasion, see influence models perverse incentives, 50–51, 54 Peter, Laurence, 256 Peter principle, 256, 257 Peterson, Tom, 108–9 Petrified Forest National Park, 217–18 Pew Research, 53 p-hacking, 169, 172 phishing, 97 phones, 116–17, 290 photography, 302–3, 308–10 physics, x, 114, 194, 293 quantum, 200–201 pick your battles, 238 Pinker, Steven, 144 Pirahã, x Pitbull, 36 pivoting, 295–96, 298–301, 308, 311, 312 placebo, 137 placebo effect, 137 Planck, Max, 24 Playskool, 111 Podesta, John, 97 point of no return, 244 Polaris, 67–68 polarity, 125–26 police, in organizations and projects, 253–54 politics, 70, 104 ads and statements in, 225–26 elections, 206, 218, 233, 241, 271, 293, 299 failure and, 47 influence in, 216 predictions in, 206 polls and surveys, 142–43, 152–54, 160 approval ratings, 152–54, 158 employee engagement, 140, 142 postmortems, 32, 92 Potemkin village, 228–29 potential energy, 112 power, 162 power drills, 296 power law distribution, 80–81 power vacuum, 259–60 practice, deliberate, 260–62, 264, 266 precautionary principle, 59–60 Predictably Irrational (Ariely), 14, 222–23 predictions and forecasts, 132, 173 market for, 205–7 superforecasters and, 206–7 PredictIt, 206 premature optimization, 7 premises, see principles pre-mortems, 92 present bias, 85, 87, 93, 113 preserving optionality, 58–59 pressure point, 112 prices, 188, 231, 299 arbitrage and, 282–83 bait and switch and, 228, 229 inflation in, 179–80, 182–83 loss leader strategy and, 236–37 manufacturer’s suggested retail, 15 monopolies and, 283 principal, 44–45 principal-agent problem, 44–45 principles (premises), 207 first, 4–7, 31, 207 prior, 159 prioritizing, 68 prisoners, 63, 232 prisoner’s dilemma, 212–14, 226, 234–35, 244 privacy, 55 probability, 132, 173, 194 bias, optimistic, 33 conditional, 156 probability distributions, 150, 151 bell curve (normal), 150–52, 153, 163–66, 191 Bernoulli, 152 central limit theorem and, 152–53, 163 fat-tailed, 191 power law, 80–81 sample, 152–53 pro-con lists, 175–78, 185, 
189 procrastination, 83–85, 87, 89 product development, 294 product/market fit, 292–96, 302 promotions, 256, 275 proximate cause, 31, 117 proxy endpoint, 137 proxy metric, 139 psychology, 168 Psychology of Science, The (Maslow), 177 Ptolemy, Claudius, 8 publication bias, 170, 173 public goods, 39 punching above your weight, 242 p-values, 164, 165, 167–69, 172 Pygmalion effect, 267–68 Pyrrhus, King, 239 Qualcomm, 231 quantum physics, 200–201 quarantine, 234 questions: now what, 291 what if, 122, 201 why, 32, 33 why now, 291 quick and dirty, 234 quid pro quo, 215 Rabois, Keith, 72, 265 Rachleff, Andy, 285–86, 292–93 radical candor, 263–64 Radical Candor (Scott), 263 radiology, 291 randomized controlled experiment, 136 randomness, 201 rats, 51 Rawls, John, 21 Regan, Ronald, 183 real estate agents, 44–45 recessions, 121–22 reciprocity, 215–16, 220, 222, 229, 289 recommendations, 217 red line, 238 referrals, 217 reframe the problem, 96–97 refugee asylum cases, 144 regression to the mean, 146, 286 regret, 87 regulations, 183–84, 231–32 regulatory capture, 305–7 reinventing the wheel, 92 relationships, 53, 55, 63, 91, 111, 124, 159, 271, 296, 298 being locked into, 305 dating, 8–10, 95 replication crisis, 168–72 Republican Party, 104 reputation, 215 research: meta-analysis of, 172–73 publication bias and, 170, 173 systematic reviews of, 172, 173 see also experiments resonance, 293–94 response bias, 142, 143 responsibility, diffusion of, 259 restaurants, 297 menus at, 14, 62 RetailMeNot, 281 retaliation, 238 returns: diminishing, 81–83 negative, 82–83, 93 reversible decisions, 61–62 revolving door, 306 rewards, 275 Riccio, Jim, 306 rise to the occasion, 268 risk, 43, 46, 90, 288 cost-benefit analysis and, 180 de-risking, 6–7, 10, 294 moral hazard and, 43–45, 47 Road Ahead, The (Gates), 69 Roberts, Jason, 122 Roberts, John, 27 Rogers, Everett, 116 Rogers, William, 31 Rogers Commission Report, 31–33 roles, 256–58, 260, 271, 293 roly-poly toy, 111–12 root cause, 31–33, 234 roulette, 144 Rubicon River, 244 ruinous empathy, 264 Rumsfeld, Donald, 196–97, 247 Rumsfeld’s Rule, 247 Russia, 218, 241 Germany and, 70, 238–39 see also Soviet Union Sacred Heart University (SHU), 217, 218 sacrifice play, 239 Sagan, Carl, 220 sales, 81, 216–17 Salesforce, 299 same-sex marriage, 117, 118 Sample, Steven, 28 sample distribution, 152–53 sample size, 143, 160, 162, 163, 165–68, 172 Sánchez, Ricardo, 234 sanctions and fines, 232 Sanders, Bernie, 70, 182, 293 Sayre, Wallace, 74 Sayre’s law, 74 scarcity, 219, 220 scatter plot, 126 scenario analysis (scenario planning), 198–99, 201–3, 207 schools, see education and schools Schrödinger, Erwin, 200 Schrödinger’s cat, 200 Schultz, Howard, 296 Schwartz, Barry, 62–63 science, 133, 220 cargo cult, 315–16 Scientific Autobiography and other Papers (Planck), 24 scientific evidence, 139 scientific experiments, see experiments scientific method, 101–2, 294 scorched-earth tactics, 243 Scott, Kim, 263 S curves, 117, 120 secondary markets, 281–82 second law of thermodynamics, 124 secrets, 288–90, 292 Securities and Exchange Commission, U.S., 228 security, false sense of, 44 security services, 229 selection, adverse, 46–47 selection bias, 139–40, 143, 170 self-control, 87 self-fulfilling prophecies, 267 self-serving bias, 21, 272 Seligman, Martin, 22 Semmelweis, Ignaz, 25–26 Semmelweis reflex, 26 Seneca, Marcus, 60 sensitivity analysis, 181–82, 185, 188 dynamic, 195 Sequoia Capital, 291 Sessions, Roger, 8 sexual predators, 113 Shakespeare, William, 105 Sheets Energy Strips, 36 Shermer, 
Michael, 133 Shirky, Clay, 104 Shirky principle, 104, 112 Short History of Nearly Everything, A (Bryson), 50 short-termism, 55–56, 58, 60, 68, 85 side effects, 137 signal and noise, 311 significance, 167 statistical, 164–67, 170 Silicon Valley, 288, 289 simulations, 193–95 simultaneous invention, 291–92 Singapore math, 23–24 Sir David Attenborough, RSS, 35 Skeptics Society, 133 sleep meditation app, 162–68 slippery slope argument, 235 slow (high-concentration) thinking, 30, 33, 70–71 small numbers, law of, 143, 144 smartphones, 117, 290, 309, 310 smoking, 41, 42, 133–34, 139, 173 Snap, 299 Snowden, Edward, 52, 53 social engineering, 97 social equality, 117 social media, 81, 94, 113, 217–19, 241 Facebook, 18, 36, 94, 119, 219, 233, 247, 305, 308 Instagram, 220, 247, 291, 310 YouTube, 220, 291 social networks, 117 Dunbar’s number and, 278 social norms versus market norms, 222–24 social proof, 217–20, 229 societal change, 100–101 software, 56, 57 simulations, 192–94 solitaire, 195 solution space, 97 Somalia, 243 sophomore slump, 145–46 South Korea, 229, 231, 238 Soviet Union: Germany and, 70, 238–39 Gosplan in, 49 in Cold War, 209, 235 space exploration, 209 spacing effect, 262 Spain, 243–44 spam, 37, 161, 192–93, 234 specialists, 252–53 species, 120 spending, 38, 74–75 federal, 75–76 spillover effects, 41, 43 sports, 82–83 baseball, 83, 145–46, 289 football, 226, 243 Olympics, 209, 246–48, 285 Spotify, 299 spreadsheets, 179, 180, 182, 299 Srinivasan, Balaji, 301 standard deviation, 149, 150–51, 154 standard error, 154 standards, 93 Stanford Law School, x Starbucks, 296 startup business idea, 6–7 statistics, 130–32, 146, 173, 289, 297 base rate in, 157, 159, 160 base rate fallacy in, 157, 158, 170 Bayesian, 157–60 confidence intervals in, 154–56, 159 confidence level in, 154, 155, 161 frequentist, 158–60 p-hacking in, 169, 172 p-values in, 164, 165, 167–69, 172 standard deviation in, 149, 150–51, 154 standard error in, 154 statistical significance, 164–67, 170 summary, 146, 147 see also data; experiments; probability distributions Staubach, Roger, 243 Sternberg, Robert, 290 stock and flow diagrams, 192 Stone, Douglas, 19 stop the bleeding, 234 strategy, 107–8 exit, 242–43 loss leader, 236–37 pivoting and, 295–96, 298–301, 308, 311, 312 tactics versus, 256–57 strategy tax, 103–4, 112 Stiglitz, Joseph, 306 straw man, 225–26 Streisand, Barbra, 51 Streisand effect, 51, 52 Stroll, Cliff, 290 Structure of Scientific Revolutions, The (Kuhn), 24 subjective versus objective, in organizational culture, 274 suicide, 218 summary statistics, 146, 147 sunk-cost fallacy, 91 superforecasters, 206–7 Superforecasting (Tetlock), 206–7 super models, viii–xii super thinking, viii–ix, 3, 316, 318 surface area, 122 luck, 122, 124, 128 surgery, 136–37 Surowiecki, James, 203–5 surrogate endpoint, 137 surveys, see polls and surveys survivorship bias, 140–43, 170, 272 sustainable competitive advantage, 283, 285 switching costs, 305 systematic review, 172, 173 systems thinking, 192, 195, 198 tactics, 256–57 Tajfel, Henri, 127 take a step back, 298 Taleb, Nassim Nicholas, 2, 105 talk past each other, 225 Target, 236, 252 target, measurable, 49–50 taxes, 39, 40, 56, 104, 193–94 T cells, 194 teams, 246–48, 275 roles in, 256–58, 260 size of, 278 10x, 248, 249, 255, 260, 273, 280, 294 Tech, 83 technical debt, 56, 57 technologies, 289–90, 295 adoption curves of, 115 adoption life cycles of, 116–17, 129, 289, 290, 311–12 disruptive, 308, 310–11 telephone, 118–19 temperature: body, 146–50 thermostats and, 194 tennis, 2 
10,000-Hour Rule, 261 10x individuals, 247–48 10x teams, 248, 249, 255, 260, 273, 280, 294 terrorism, 52, 234 Tesla, Inc., 300–301 testing culture, 50 Tetlock, Philip E., 206–7 Texas sharpshooter fallacy, 136 textbooks, 262 Thaler, Richard, 87 Theranos, 228 thermodynamics, 124 thermostats, 194 Thiel, Peter, 72, 288, 289 thinking: black-and-white, 126–28, 168, 272 convergent, 203 counterfactual, 201, 272, 309–10 critical, 201 divergent, 203 fast (low-concentration), 30, 70–71 gray, 28 inverse, 1–2, 291 lateral, 201 outside the box, 201 slow (high-concentration), 30, 33, 70–71 super, viii–ix, 3, 316, 318 systems, 192, 195, 198 writing and, 316 Thinking, Fast and Slow (Kahneman), 30 third story, 19, 92 thought experiment, 199–201 throwing good money after bad, 91 throwing more money at the problem, 94 tight versus loose, in organizational culture, 274 timeboxing, 75 time: management of, 38 as money, 77 work and, 89 tipping point, 115, 117, 119, 120 tit-for-tat, 214–15 Tōgō Heihachirō, 241 tolerance, 117 tools, 95 too much of a good thing, 60 top idea in your mind, 71, 72 toxic culture, 275 Toys “R” Us, 281 trade-offs, 77–78 traditions, 275 tragedy of the commons, 37–40, 43, 47, 49 transparency, 307 tribalism, 28 Trojan horse, 228 Truman Show, The, 229 Trump, Donald, 15, 206, 293 Trump: The Art of the Deal (Trump and Schwartz), 15 trust, 20, 124, 215, 217 trying too hard, 82 Tsushima, Battle of, 241 Tupperware, 217 TurboTax, 104 Turner, John, 127 turn lemons into lemonade, 121 Tversky, Amos, 9, 90 Twain, Mark, 106 Twitter, 233, 234, 296 two-front wars, 70 type I error, 161 type II error, 161 tyranny of small decisions, 38, 55 Tyson, Mike, 7 Uber, 231, 275, 288, 290 Ulam, Stanislaw, 195 ultimatum game, 224, 244 uncertainty, 2, 132, 173, 180, 182, 185 unforced error, 2, 10, 33 unicorn candidate, 257–58 unintended consequences, 35–36, 53–55, 57, 64–65, 192, 232 Union of Concerned Scientists (UCS), 306 unique value proposition, 211 University of Chicago, 144 unknown knowns, 198, 203 unknowns: known, 197–98 unknown, 196–98, 203 urgency, false, 74 used car market, 46–47 U.S.


pages: 281 words: 79,464

Against Empathy: The Case for Rational Compassion by Paul Bloom

affirmative action, Albert Einstein, An Inconvenient Truth, Asperger Syndrome, Atul Gawande, autism spectrum disorder, classic study, Columbine, David Brooks, Donald Trump, effective altruism, Ferguson, Missouri, Great Leap Forward, impulse control, meta-analysis, mirror neurons, Paul Erdős, period drama, Peter Singer: altruism, public intellectual, publication bias, Ralph Waldo Emerson, replication crisis, Ronald Reagan, social intelligence, Stanford marshmallow experiment, Steven Pinker, theory of mind, Timothy McVeigh, Walter Mischel, Yogi Berra

It turns out, then, that all the empathy measures that are commonly used are actually measures of a cluster of things—including empathy, but also concern and compassion, as well as some traits, such as being cool-headed in an emergency, that might have little to do with empathy in any sense of the term. Finally, when it comes to looking at research concerning the relationship between empathy and good behavior, there is the issue of publication bias. Researchers who study the effects of empathy are typically hoping and expecting that empathy does have effects—nobody does an experiment hoping to find nothing. Studies that fail to find an effect are therefore less likely to be submitted for publication (the so-called file drawer problem), and if such work is submitted, it’s more difficult to get published, because null effects are notoriously uninteresting to reviewers and editors.

(documentary), 50 food aid, 99 football, and violence, 187 foreign aid, 99 forgiveness, 25 Fourth Amendment, 37 Freddie Kruger (character), 180 free speech, 123–26 free trade, 112, 117 free will, 218–19, 221 Freud, Sigmund, 5, 145, 216 friendship, 149–54, 158–59 Fritz, Heidi, 133–35 Gandhi, Mahatma, 159–60 Garner, Eric, 118 Gawande, Atul, 145 gay marriage, 53, 55, 116, 122 Gaza War, 186, 188–89, 190 Gazzaniga, Michael, 220 gender differences, 81, 129, 133–36 objectification, 203–4, 206 genes, 8, 94–95, 154, 169, 195 Ghiselin, Michael, 166 Gladwell, Malcolm, 231–32 Glover, Jonathan, 74, 188 Godwin, Morgan, 202 Godwin’s Law, 63 Goebbels, Joseph, 196 Goodman, Charles, 138 goodness (good actions/behaviors), 41–42, 85–86, 101–6 effective altruism, 102–6, 238–39 empathy-altruism hypothesis, 25, 85–86, 168 high intelligence and, 233 measuring empathy and, 41–42, 77–82 publication bias and measuring empathy, 82–83 Gore, Al, 49–50, 121 Göring, Hermann, 196 Gourevitch, Philip, 93 greed, 188 Greene, Joshua, 10 guilt, 44, 87, 182, 198 gun control, 115, 116, 119, 122–23 gut feelings, 7, 213–14 Habitat for Humanity, 88 Haidt, Jonathan, 6, 120, 223 Haldane, J. B. S., 169 Hamas, 189–90 Hannibal Lecter (character), 180–81 Hare, Robert, 197, 198, 199 Harris, Lasana, 69 Harris, Paul, 174, 175 Harris, Sam, 10, 218 Harris, Thomas, 180–81 Helgeson, Vicki, 133–35 helping others.

., 178 Paul, Laurie, 147–48 Paul, Ron, 118 Personal Concern scale, 80–81 personal distress, 25 Personal Distress scale, 79–81 Perspective Taking scale, 78–81 physicalism, 148 physician-patient relationship, 143–45, 146–47 Pinker, Steven, 10, 18–19, 74–75, 239–40 moralization gap and, 181, 184 self-control and, 234 threshold effect and, 231 Pitkin, Aaron, 46 pity, 40, 100 Plato, 214 poker, 28 Poland, Hitler’s invasion of, 193 police shootings, 4, 19–20, 205 political orientation and language, 114–18 politics, 113–27 free speech and, 123–26 legal context, 125–26 liberal policies and empathy, 113–14, 118–25 rationality and irrationality in, 235–37 pornography, depiction of women in, 203–4 Poulin, Michael, 193–95 prefrontal cortex, 61, 71 presidential election of 2012, 117–18, 119 Prinz, Jesse, 10, 22, 200, 210–11 prison rape, 93 progressives (progressivism), 113–14, 118–27 political orientation and language, 114–18 projective empathy, 70–71, 155 “prosocial concern,” 62 psychoanalysis, 5, 144, 145, 216 psychological egoism, 72–74, 75–76 psychopaths (psychopathy), 42, 197–201 lack of self-control and malicious nature of, 42, 199–201 myth of pure evil, 181, 184 neuroscience of, 47, 71–73 Psychopathy Checklist, 197–201, 198 publication bias, 82–83 punishment, 161, 185, 186, 192, 195–96, 207, 209, 225 purity, 117–18, 224 qualia, and knowledge argument, 148 Rachels, James, 52 racial bias, 226 racism, 9, 48–49, 202–3 Rai, Tage, 184–85, 186 Raine, Adrian, 179 Rand, David, 7 rape, 23, 34, 35, 93, 182, 192, 206 rationality.


pages: 436 words: 123,488

Overdosed America: The Broken Promise of American Medicine by John Abramson

disinformation, germ theory of disease, Herbert Marcuse, Louis Pasteur, medical malpractice, medical residency, meta-analysis, p-value, placebo effect, profit maximization, profit motive, publication bias, RAND corporation, randomized controlled trial, selective serotonin reuptake inhibitor (SSRI), stem cell, tacit knowledge, Thomas Kuhn: the structure of scientific revolutions

Fletcher said that as punishment for publishing this article, the pharmaceutical industry “withdrew many adverts” and showed that it was “willing to flex its considerable muscles when it felt its interests were threatened.” This is a price that medical journal editors would prefer not to pay.

NOT TELLING THE WHOLE TRUTH: PUBLICATION BIAS

Even if a doctor could keep up with all the studies that were published, he or she would still have a limited and skewed view of the real evidence. Notwithstanding all the potential ways that research can be tipped in favor of a sponsor’s product, clinical trials still tend to reveal the truth about whether a new therapy is effective—or not.

The results of all of the “pivotal” studies (those deemed to be of high enough quality to be used in the FDA’s determinations) for these seven antidepressants were then put together to assess the overall effect of the new drugs. By looking at all the studies, the researchers avoided the distortion of “publication bias” and were able to determine whether or not the scientific evidence really showed that the new antidepressants are more effective and safer than the older ones. When all the evidence is considered, it turns out that the new antidepressant drugs are no more effective than the older tricyclic antidepressants (the classic being amitriptyline, brand name Elavil).

See also medical research absolute vs. relative risk and, 14–16, 165, 166, 229 advertising and research companies and, 109–10 Celebrex and Vioxx research (see Celebrex and Vioxx) cholesterol research (see cholesterol guidelines of 2001) commercial funding, 94–97 (see also drug companies; funding) commercial goals vs. health goals, 21–22, 50–51, 53, 241–44 conflicts of interest and (see conflicts of interest) damage control and, 107–9 data manipulation, 34–36 data omission, 29–31 data transparency and, 27–28, 94, 105–6, 251–52 dosage manipulation, 101–2 failure to compare existing therapies, 17, 102–3 FDA drug approval and Rezulin, 86–88 ghostwriters and, 106–7 hormone replacement therapy (see hormone replacement therapy) implantable defibrillators, 98–101 independent review for, 249–53 medical journals and, 25–27, 37–38, 93–94, 96–97 (see also medical journals) osteoporosis research, 211–20 Paxil research, 243 premature termination of research, 104–5 publication bias as, 113–17 research design changes as, 31 septic shock research, 161–63 stroke research, 13–22 unbiased information vs., 167 unrepresentative patients, 16–17, 33, 103–4, 206–8, 251 commercial speech, 37–38, 157–59 conflicts of interest academic experts, xxii, 18, 243 cholesterol guidelines, 135, 147–48 clinical guideline experts, xxi, 127–28, 133–35, 146–48, 227, 249–50 continuing medical education, 121–23 damage control, 109 FDA, 85–87, 89–90 ghostwriters, 106–7 hormone replacement therapy, 60–61 medical journal, 26 medical news stories, 166–67 NIH researchers, 86–90 independent review and, 258–59 surgeons, 177–78 confounding factors, 66–67 consciousness, 206–8 consulting contracts, 88–90, 109, 249.


pages: 367 words: 97,136

Beyond Diversification: What Every Investor Needs to Know About Asset Allocation by Sebastien Page

Andrei Shleifer, asset allocation, backtesting, Bernie Madoff, bitcoin, Black Swan, Bob Litterman, book value, business cycle, buy and hold, Cal Newport, capital asset pricing model, commodity super cycle, coronavirus, corporate governance, COVID-19, cryptocurrency, currency risk, discounted cash flows, diversification, diversified portfolio, en.wikipedia.org, equity risk premium, Eugene Fama: efficient market hypothesis, fixed income, future of work, Future Shock, G4S, global macro, implied volatility, index fund, information asymmetry, iterative process, loss aversion, low interest rates, market friction, mental accounting, merger arbitrage, oil shock, passive investing, prediction markets, publication bias, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Feynman, Richard Thaler, risk free rate, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, robo advisor, seminal paper, shareholder value, Sharpe ratio, sovereign wealth fund, stochastic process, stochastic volatility, stocks for the long run, systematic bias, systematic trading, tail risk, transaction costs, TSMC, value at risk, yield curve, zero-coupon bond, zero-sum game

Several other versions of the ARCH model have been proposed to incorporate fat tails, asymmetries in volatility (the fact that volatility spikes up more than down), exponential weights, dynamic correlations, etc. However, Marra shows that for US stocks, most sophisticated models, whether of the historical or ARCH classes, barely outperform the random walk model. The differences in model effectiveness don’t look statistically significant. Other issues with sophisticated models include publication bias (only the good results get published), as well as a related, important issue: the possibility that these models may overfit the in-sample data. It’s hard to argue that one specific model should perform consistently better than to simply extrapolate recent volatility. Aside from a slight advantage for volatility estimates derived from options prices, Poon and Granger find that across 93 academic studies, there’s no clear winner of the great risk forecasting horse race.
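
The random walk benchmark here just means forecasting tomorrow's volatility with the volatility observed most recently. The sketch below is a minimal illustration of how such a comparison is typically run; it uses simulated returns rather than the US stock data the excerpt refers to, and the window length, decay parameter, and data-generating process are my own assumptions. It pits a naive trailing-window forecast against an exponentially weighted (RiskMetrics-style) forecast, scoring both against realized squared returns.

```python
# Naive "random walk" volatility forecast vs. an EWMA (RiskMetrics-style) forecast.
# Simulated returns with time-varying volatility; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
T = 2000

# Simulate returns whose volatility itself wanders over time.
log_vol = np.cumsum(rng.normal(0, 0.03, T)) - 4.5
true_vol = np.exp(log_vol)
returns = rng.normal(0, true_vol)

window, lam = 21, 0.94
rw_forecast = np.zeros(T)
ewma_var = np.zeros(T)
ewma_var[0] = returns[:window].var()

for t in range(1, T):
    # Random walk: carry forward the recent realized variance.
    start = max(0, t - window)
    rw_forecast[t] = np.mean(returns[start:t] ** 2)
    # EWMA: exponentially weighted moving average of squared returns.
    ewma_var[t] = lam * ewma_var[t - 1] + (1 - lam) * returns[t - 1] ** 2

realized = returns ** 2
burn = window
mse_rw = np.mean((rw_forecast[burn:] - realized[burn:]) ** 2)
mse_ewma = np.mean((ewma_var[burn:] - realized[burn:]) ** 2)
print(f"MSE, trailing-window forecast: {mse_rw:.3e}")
print(f"MSE, EWMA forecast:            {mse_ewma:.3e}")
```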

Indeed, the strategy appears to work well across risk forecast methodologies, asset classes (stocks, bonds, currencies), factors/risk premiums, regions, and time periods. These results suggest that any asset allocation process can be improved if we incorporate volatility forecasts. But a few caveats apply. Cynics may argue that only backtests that generate interesting results get published (earlier I mentioned publication bias). Authors often make unrealistic assumptions about implementation. For example, they assume portfolio managers can rebalance everything at the closing price of the same day the signal is generated. Worse, some ignore transaction costs altogether. And a more subtle but key caveat is that some strategies do not use budget constraints, such that part of the alpha may come from a systematically long exposure to equity, duration, or other risk premiums versus the static benchmark.

., 13, 25–26 and global equity markets forecasts, 13–14 and inflation, 13 inverse of, and real return for stocks, 12–13 as relative valuation signal, 58–59 and sector weights, 159 as short term timing signal, 57 and valuation change, 30–31 Principles (Dalio), 85 Private assets, 217–229 biases related to, 220–221, 223 diversification with, 128–130 footnotes and fine-print disclaimers with, 219–224 hype associated with, 224–226 in portfolio construction, 217–229 public equities compared to, 218–224 and public equity fund returns, 226–229 “Private Equity Performance” (Kaplan and Schoar), 221 Probability distributions, 117–118, 147, 152–153 Probability-weighted utility, 201 Prout, William, 218–219 Public equities: fund returns for, 226–229 private equities compared to, 218–224 returns on private equities vs., 226–229 Public market equivalent (PME), 221–223, 229 Public pensions, 219 Publication bias, 91–92 Q Group conferences, 7 Qian, Edward, 213–214 Quantitative analysis, judgment and, 84–85 Quantitative data analysis, 2–3 Quantitative easing (QE), 17 Quantitative investing, momentum in models for, 70 Quantitative value-at-risk models, 165 Random walk model, 91 Real estate: CAPM expected returns for, 20 diversification with, 128–130 private, 129–130 Real estate investment trusts (REITS), 18, 240 Real returns, inflation and, 11, 13 “Regime Shifts” (Kritzman, Turkington, and Page), 156, 157 Regime-switching dynamic correlation (RDSC) model, 140 Relative returns: on dashboards, 64–66 and persistence of higher moments, 112, 117–119 on stocks vs. bonds, 10–11, 17–19, 112, 117–119 Relative valuation: and CAPE, 27–28 macro factors confirming signals, 67–68 shorter-term signals of, 56–59 Resampling, 208 Retirement planning, 187–194, 249–253 Return forecasting, 1–3, 83–87, 267, 272 equilibrium, 5–23 momentum, 69–82 paradox of, 73 rules of thumb for, 86–87 shorter-term macro signals, 61–68 shorter-term valuation signals, 45–59 valuation, 25–42 “Return of the Quants” (Dreyer et al.), 94 “The Revenge of the Stock Pickers” (Lynch et al.), 233 Rich, Don, 168–169 Richardson, Matthew, 115 Ringgenberg, Matthew C., 236 Risk aversion, 189, 204 Risk factor diversification, 130–131, 135, 176–177 Risk factors: asset classes vs., 173–184 crowding of, 184 in portfolio construction, 174 in scenario analysis, 162–165 Risk factors models, 178–179 Risk forecasting, 89–92, 267–268, 272–273 basic parameter choices for, 144–145 CAPM definition of, 10 correlation forecasts, 139–143 correlations, 121–136 exposure to loss in, 143–144 fat tails, 147–157 goal of, 178–179 longer-term, 111–119 models of, 89–92 risk-based investing, 93–109 rules of thumb for, 170–171 scenario analysis, 157–168 within-horizon risk in, 168–170 “Risk Management for Hedge Funds” (Lo), 150 Risk parity: and implicit return assumptions, 2 managed volatility vs., 105–106 in portfolio optimization, 212–215 Risk Parity Fundamentals (Qian), 213–214 Risk predictability tests, 112–119 Risk premiums, 179–184 backtest data for, 182–184 beta, 179–180 for bonds, 40 and currency carry trade, 131 diversification across, 182 low-risk anomaly, 180–181 and risk factors, 179–180 and Sharpe ratios, 150, 151 strategies for, 182–184 volatility, 102–104, 181–182 when rates are low, 12 Risk regimes, 131, 154–157, 168, 204 Risk tolerance, 149–150 Risk-based investing, 93–109 combination of strategies for, 104 covered call writing, 102–104 managed volatility backtests, 95–101 Q&A about, 105–109 (See also Managed volatility) Risk-free rate, 11 Roll, Richard, 62, 67 Roll down, 
40–41 Ross, Stephen A., 62, 67 Rossi, Marco, 131 Rules of thumb: for portfolio construction, 243–244 for return forecasting, 86–87 for risk forecasting, 170–171 Samonov, Mikhail, 71–75 Sample bias, 223 Samuelson, Paul A., 186–187, 197–198 Sapra, Steve, 132 Satchell, Stephen, 212 Scenario analysis: in asset allocation, 134 and asset class changes over time, 158–162 defensive use of, 157–167 defining scenarios in, 158 factor-based, 162–165 forward-looking scenarios in, 165–168 offensive use of, 167–168 Scherer, Bernd, 2, 117 Schoar, Antoinette, 221, 222 Seasholes, Mark, 31 Sentiment, 69, 131–132 Sharpe, Bill, 6, 7, 9, 13, 151 Sharpe ratios, 150 Sharps, Rob, 228 Shiller, Robert, 13, 14, 25–26 “The Shiller CAPE Ratio” (Siegel), 26 Shive, Sophie, 235 Shkreli, Martin, 238 Shorter-term investments, macro factors for, 63–66 Shorter-term valuation signals, 45–59 for relative valuation between stocks and bonds and across bond markets, 56–59 for tactical asset allocation, 45–59 Shriver, Charles, 57, 62, 94 Siegel, Jeremy, 12–14, 25, 26 Simonato, Jean-Guy, 143 Single-period portfolio optimization, 194–195, 197–215, 268 issues with concentrated and unstable solutions, 207–210 mean-variance optimization, 198, 203–207 and risk parity, 212–215 and usefulness of optimizers, 211–212 Size of measurement errors, 148 Skewness, 118 of call options, 118 mean reversion of, 118–119 persistence of, 117–119 positive vs. negative, 207 and risk forecasting, 144–145 (See also Negative skewness) “Skulls, Financial Turbulence, and Risk Management” (Kritzman and Li), 205 Smart betas, 179, 235 S.M.O.O.T.H. fund, 224–225 Smoothing bias, 128–130 Sovereign wealth funds, 37, 128–130, 194 S&P 500: in March 2018, 12 P/E ratio of, 30–31 realized one-month volatility on, 103–104 recent earnings on, 27 sector weights in, 159 and tech bubble, 163 Spread duration, 40–42 Stock market: used as market portfolio, 17–18 valuation changes in, 31–34 Stock picking, 233–243 “The Stock-Bond Correlation” (Johnson et al.), 132–133 Stocks: beta and relative returns of bonds and, 10–11 CAPM and returns on, 5–14, 20 correlation of bonds and, 132–134 of emerging markets, 159–160 and human capital, 189–190 international equity diversion, 125–126 in market portfolio, 17–19 P/E ratio and real return for, 12–13 P/E ratio vs.


pages: 357 words: 110,072

Trick or Treatment: The Undeniable Facts About Alternative Medicine by Edzard Ernst, Simon Singh

animal electricity, Barry Marshall: ulcers, Berlin Wall, correlation does not imply causation, disinformation, false memory syndrome, Florence Nightingale: pie chart, germ theory of disease, John Snow's cholera map, Louis Pasteur, meta-analysis, microdosing, placebo effect, profit motive, publication bias, randomized controlled trial, Ronald Reagan, Simon Singh, sugar pill, The Design of Experiments, the scientific method

The crude reason for blaming Chinese researchers for the discrepancy is that their results are simply too good to be true. This criticism has been confirmed by careful statistical analyses of all the Chinese results, which demonstrate beyond all reasonable doubt that Chinese researchers are guilty of so-called publication bias. Before explaining the meaning of publication bias, it is important to stress that this is not necessarily a form of deliberate fraud, because it is easy to conceive of situations when it can occur due to an unconscious pressure to get a particular result. Imagine a Chinese researcher who conducts an acupuncture trial and achieves a positive result.

The key point is that this second piece of research might never be published for a whole range of possible reasons: maybe the researcher does not see it as a priority, or he thinks that nobody will be interested in reading about a negative result, or he persuades himself that this second trial must have been badly conducted, or he feels that this latest result would offend his peers. Whatever the reason, the researcher ends up having published the positive results of the first trial, while leaving the negative results of the second trial buried in a drawer. This is publication bias. When this sort of phenomenon is multiplied across China, then we have dozens of published positive trials, and dozens of unpublished negative trials. Therefore, when the WHO conducted a review of the published literature that relied heavily on Chinese research its conclusion was bound to be skewed – such a review could never take into account the unpublished negative trials.


Humble Pi: A Comedy of Maths Errors by Matt Parker

8-hour work day, Affordable Care Act / Obamacare, bitcoin, British Empire, Brownian motion, Chuck Templeton: OpenTable:, collateralized debt obligation, computer age, correlation does not imply causation, crowdsourcing, Donald Trump, fake news, Flash crash, forensic accounting, game design, High speed trading, Julian Assange, millennium bug, Minecraft, Neil Armstrong, null island, obamacare, off-by-one error, orbital mechanics / astrodynamics, publication bias, Richard Feynman, Richard Feynman: Challenger O-ring, selection bias, SQL injection, subprime mortgage crisis, Tacoma Narrows Bridge, Therac-25, value at risk, WikiLeaks, Y2K

When a company runs a drug trial on some new medication or medical intervention they have been working on, they want to show that it performs better than either no intervention or other current options. At the end of a long and expensive trial, if the results show that a drug has no benefit (or a negative one), there is very little motivation for the company to publish that data. It’s a kind of ‘publication bias’. An estimated half of all drug-trial results never get published. A negative result from a drug trial is twice as likely to remain unpublished as a positive result. Withholding any drug-trial data can put people’s lives at risk, possibly more so than any other mistake I’ve mentioned in this book.

The air force tried to get an academic anthropological department from a university involved, but no one was interested. 2 The extra sets of data were made by slowly evolving the data via tiny changes which moved the data points towards a new picture but didn’t change the averages and standard deviations. The software to do this has been made freely available. 3 Their study was finally published thirteen years later, in 1993, as an example of publication bias. 4 In the interest of full disclosure, this is before I was writing for the Guardian myself, but the article was written by my friend Ben Goldacre, of AllTrials fame. Twelve: Tltloay Rodanm 1 At the time of writing, ERNIE is no longer on public display at the Science Museum. 2 It pleases me greatly that part of the required word count of my book has now officially been randomly generated. 3 This was still in the era when the US government controlled the export of software with strong encryption, as they considered such cryptography as munitions.


pages: 270 words: 79,992

The End of Big: How the Internet Makes David the New Goliath by Nicco Mele

4chan, A Declaration of the Independence of Cyberspace, Airbnb, Amazon Web Services, Andy Carvin, Any sufficiently advanced technology is indistinguishable from magic, Apple's 1984 Super Bowl advert, barriers to entry, Berlin Wall, big-box store, bitcoin, bread and circuses, business climate, call centre, Cass Sunstein, centralized clearinghouse, Chelsea Manning, citizen journalism, cloud computing, collaborative consumption, collaborative editing, commoditize, Computer Lib, creative destruction, crony capitalism, cross-subsidies, crowdsourcing, David Brooks, death of newspapers, disruptive innovation, Donald Trump, Douglas Engelbart, Douglas Engelbart, en.wikipedia.org, Evgeny Morozov, Exxon Valdez, Fall of the Berlin Wall, Filter Bubble, Firefox, global supply chain, Google Chrome, Gordon Gekko, Hacker Ethic, Ian Bogost, Jaron Lanier, Jeff Bezos, jimmy wales, John Markoff, John Perry Barlow, Julian Assange, Kevin Kelly, Khan Academy, Kickstarter, Lean Startup, lolcat, machine readable, Mark Zuckerberg, military-industrial complex, minimum viable product, Mitch Kapor, Mohammed Bouazizi, Mother of all demos, Narrative Science, new economy, Occupy movement, off-the-grid, old-boy network, One Laptop per Child (OLPC), peer-to-peer, period drama, Peter Thiel, pirate software, public intellectual, publication bias, Robert Metcalfe, Ronald Reagan, Ronald Reagan: Tear down this wall, satellite internet, Seymour Hersh, sharing economy, Silicon Valley, Skype, social web, Steve Jobs, Steve Wozniak, Stewart Brand, Stuxnet, Ted Nelson, Ted Sorensen, Telecommunications Act of 1996, telemarketer, the Cathedral and the Bazaar, the long tail, The Wisdom of Crowds, transaction costs, uranium enrichment, Whole Earth Catalog, WikiLeaks, Zipcar

Peer-reviewed publication takes on average about two years, and many scientific journals cost thousands of dollars a year for subscriptions. Not only that, but if scientific research fails, it usually does not get written up and published. Who wants to publish an article that says, “we tried this and it didn’t work”? “Publication bias” is a well-known challenge in academia. A major review of more than 4,600 peer-reviewed academic papers across a range of disciplines and a range of countries found that over the last twenty years, positive results increased by almost 25%.26 And yet failure is a crucial part of the scientific process.

wikilang=en&wikifam=.wikipedia.org&grouped=on&page=Abraham_Lincoln 24. http://storify.com/jcstearns/50-years-after-the-vast-wast 25. http://www.nytimes.com/2012/01/17/science/open-science-challenges-journal-tradition-with-web-collaboration.html?pagewanted=all 26. http://www.theatlantic.com/health/archive/2011/10/publication-bias-may-permanently-damage-medical-research/246616/ 27. http://usefulchem.wikispaces.com/ 28. David Weinberger, Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room (New York: Basic Books, 2011), 139. 29.


pages: 312 words: 83,998

Testosterone Rex: Myths of Sex, Science, and Society by Cordelia Fine

"World Economic Forum" Davos, assortative mating, behavioural economics, Cass Sunstein, classic study, confounding variable, credit crunch, Donald Trump, Downton Abbey, Drosophila, epigenetics, experimental economics, gender pay gap, George Akerlof, glass ceiling, helicopter parent, Jeremy Corbyn, longitudinal study, meta-analysis, phenotype, publication bias, risk tolerance, seminal paper

Smaller studies, by contrast, being subject to more random error because of their small, idiosyncratic samples, will be scattered over a wider range of effect sizes. Some small studies will greatly overestimate a difference; others will greatly underestimate it (or even “flip” it in the wrong direction). The next part is simple but brilliant. If there isn’t publication bias toward reports of greater male risk taking, these over- and underestimates of the sex difference should be symmetrical around the “true” value indicated by the very large studies. This, with quite a bit of imagination, will make the plot of the data look like an upside-down funnel. (Personally, my vote would have been to call it the candlestick plot, but I wasn’t consulted.)
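To see what such a funnel looks like in practice, here is a small, self-contained Python sketch; the effect size, the standard-error approximation and the crude "publication filter" are all illustrative assumptions, not anything taken from the book or from the risk-taking literature.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
true_d, n_studies = 0.3, 400                 # assumed true effect and number of studies
n = rng.integers(10, 400, n_studies)         # per-group sample sizes, small to large
se = np.sqrt(2.0 / n)                        # rough standard error of a mean difference
d_hat = rng.normal(true_d, se)               # each study's estimated effect size

# Crude publication filter: small studies mostly get published only when they
# happen to be "significant", which hollows out one side of the funnel.
published = (np.abs(d_hat) / se > 1.96) | (n > 150) | (rng.random(n_studies) < 0.2)

plt.scatter(d_hat[published], se[published], s=12)
plt.gca().invert_yaxis()                     # precise (large) studies at the top
plt.axvline(true_d, linestyle="--")
plt.xlabel("estimated effect size (d)")
plt.ylabel("standard error")
plt.title("Simulated funnel plot with a publication filter")
plt.show()
```

With the filter switched off (publish everything), the scatter forms a symmetric upside-down funnel around the dashed line; with it on, the region of small, unimpressive results goes missing and the funnel turns lopsided.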

Meta-analysis of the relationship between digit-ratio 2D:4D and aggression. Personality and Individual Differences, 51(4), 381–386. A small correlation was found for men only (r = –.08 for the left hand and r = –.07 for the right hand), but this reduced to a nonsignificant correlation of r = –.03 after correction for weak publication bias. 56. Voracek et al. (2010), ibid. The authors note the complexity of the biological system thought to underlie sensation seeking, as well as the many psychosocial factors known to influence it, and thus conclude that “Given these knowns, it appears unsurprising that rather simplistic approaches, such as studies only utilizing 2D:4D (a putative, not yet sufficiently validated marker of prenatal testosterone), are prone to be barren of results.”


pages: 338 words: 104,815

Nobody's Fool: Why We Get Taken in and What We Can Do About It by Daniel Simons, Christopher Chabris

Abraham Wald, Airbnb, artificial general intelligence, Bernie Madoff, bitcoin, Bitcoin "FTX", blockchain, Boston Dynamics, butterfly effect, call centre, Carmen Reinhart, Cass Sunstein, ChatGPT, Checklist Manifesto, choice architecture, computer vision, contact tracing, coronavirus, COVID-19, cryptocurrency, DALL-E, data science, disinformation, Donald Trump, Elon Musk, en.wikipedia.org, fake news, false flag, financial thriller, forensic accounting, framing effect, George Akerlof, global pandemic, index fund, information asymmetry, information security, Internet Archive, Jeffrey Epstein, Jim Simons, John von Neumann, Keith Raniere, Kenneth Rogoff, London Whale, lone genius, longitudinal study, loss aversion, Mark Zuckerberg, meta-analysis, moral panic, multilevel marketing, Nelson Mandela, pattern recognition, Pershing Square Capital Management, pets.com, placebo effect, Ponzi scheme, power law, publication bias, randomized controlled trial, replication crisis, risk tolerance, Robert Shiller, Ronald Reagan, Rubik’s Cube, Sam Bankman-Fried, Satoshi Nakamoto, Saturday Night Live, Sharpe ratio, short selling, side hustle, Silicon Valley, Silicon Valley startup, Skype, smart transportation, sovereign wealth fund, statistical model, stem cell, Steve Jobs, sunk-cost fallacy, survivorship bias, systematic bias, TED Talk, transcontinental railway, WikiLeaks, Y2K

A more subtle variant of reporting the same results for different studies is known as “salami slicing,” the act of reporting different outcomes from a single study across multiple papers. For an investigation of this form of potentially deceptive conduct in studies claiming that action video games increase cognitive abilities, see J. Hilgard, G. Sala, W. R. Boot, and D. J. Simons, “Overestimation of Action-Game Training Effects: Publication Bias and Salami Slicing,” Collabra: Psychology 5 (2019): 30 [https://doi.org/10.1525/collabra.231]. 31. Cornell has not released the full results of its investigations, but the provost issued a statement: “Statement of Cornell University Provost Michael I. Kotlikoff,” Cornell University [https://statements.cornell.edu/2018/20180920-statement-provost-michael-kotlikoff.cfm].

Gobet, “Video Game Training Does Not Enhance Cognitive Ability: A Comprehensive Meta-Analytic Investigation,” Psychological Bulletin 144 (2018): 111–139 [https://psycnet.apa.org/doi/10.1037/bul0000139]; J. Hilgard, G. Sala, W. R. Boot, and D. J. Simons, “Overestimation of Action-Game Training Effects: Publication Bias and Salami Slicing,” Collabra: Psychology 5 (2019) [https://doi.org/10.1525/collabra.231]. 28. Original study: D. R. Carney, A. J. Cuddy, and A. J. Yap, “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance,” Psychological Science 21 (2010): 1363–1368. TED talk: Amy Cuddy, “Your Body Language May Shape Who You Are,” YouTube, October 1, 2012 [https://www.ted.com/talks/amy_cuddy_your_body_language_may_shape_who_you_are].


pages: 586 words: 159,901

Wall Street: How It Works And for Whom by Doug Henwood

accounting loophole / creative accounting, activist fund / activist shareholder / activist investor, affirmative action, Alan Greenspan, Andrei Shleifer, asset allocation, asset-backed security, bank run, banking crisis, barriers to entry, bond market vigilante , book value, borderless world, Bretton Woods, British Empire, business cycle, buy the rumour, sell the news, capital asset pricing model, capital controls, Carl Icahn, central bank independence, computerized trading, corporate governance, corporate raider, correlation coefficient, correlation does not imply causation, credit crunch, currency manipulation / currency intervention, currency risk, David Ricardo: comparative advantage, debt deflation, declining real wages, deindustrialization, dematerialisation, disinformation, diversification, diversified portfolio, Donald Trump, equity premium, Eugene Fama: efficient market hypothesis, experimental subject, facts on the ground, financial deregulation, financial engineering, financial innovation, Financial Instability Hypothesis, floating exchange rates, full employment, George Akerlof, George Gilder, Glass-Steagall Act, hiring and firing, Hyman Minsky, implied volatility, index arbitrage, index fund, information asymmetry, interest rate swap, Internet Archive, invisible hand, Irwin Jacobs, Isaac Newton, joint-stock company, Joseph Schumpeter, junk bonds, kremlinology, labor-force participation, late capitalism, law of one price, liberal capitalism, liquidationism / Banker’s doctrine / the Treasury view, London Interbank Offered Rate, long and variable lags, Louis Bachelier, low interest rates, market bubble, Mexican peso crisis / tequila crisis, Michael Milken, microcredit, minimum wage unemployment, money market fund, moral hazard, mortgage debt, mortgage tax deduction, Myron Scholes, oil shock, Paul Samuelson, payday loans, pension reform, planned obsolescence, plutocrats, Post-Keynesian economics, price mechanism, price stability, prisoner's dilemma, profit maximization, proprietary trading, publication bias, Ralph Nader, random walk, reserve currency, Richard Thaler, risk tolerance, Robert Gordon, Robert Shiller, Savings and loan crisis, selection bias, shareholder value, short selling, Slavoj Žižek, South Sea Bubble, stock buybacks, The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, The Market for Lemons, The Nature of the Firm, The Predators' Ball, The Wealth of Nations by Adam Smith, transaction costs, transcontinental railway, women in the workforce, yield curve, zero-coupon bond

They paused for a few pages in the middle of their book, Myth and Measurement, to review some reasons why the academic literature has almost unanimously found the minimum wage guilty as charged. They surmised that earlier studies showing that higher wages reduced employment were the result of "publication bias" among journal editors. They also surmised, very diplomatically, that economists have been aware of this bias, and played those notorious scholarly games, "specification searching and data mining" — bending the numbers to obtain the desired result. They also noted that some of the early studies were based on seriously flawed data, but since the results were desirable from both the political and professional points of view, they went undiscovered for several years.

See money managers portfolio vs. direct investment, 109 Post Keynesian Thought (PKT) computer network, 243 post-Keynesianism, 217-224 defined, 241-242 see also money, endogenous postmodernism, 237, 245 present value, 119-120 priest, banker's advice more useful than, 225 primitive accumulation, 252 prisoners' dilemma, 171, 183 Pritzker family, 271 private placements, 75 privatization, 110 of economic statistics, 136 returning capital flight and, 295 Social Security, 303-307 production, 241 socialization of, 240 productivity, 299 failure to boom in 1980s, 183 profit(s) maximization Galbraith on, 259 Herman on, 260 and modern corporation, 254 transformation into interest, 73-74, 238 Progressive Era, 94 property relations and social investing, 314-315 prostitutes, Wall Streeters as customers, 79 protectionism, 295, 300 Proudhonism, 301 The Prudential, 262 investigations of, 304 psychoanalysis, 315 psychology and stock prices, 176-178; see also Keynes, John Maynard; money, psychology of public goods, 143 public relations, 116 publication bias, 141 Pujo Committee, 260 Pulitzer Prize, 298 puritans of finance, 196 puts, 30; see also derivatives q ratio and capital expenditures, 145-148 and LBOs, 283 and M&A, 148, 284, 299 as stock market predictors, 148 Quan, Tracy, 79 race financial workers, 78 and wealth distribution, 69-70 racism, 98 among goldbugs, 48 Keynes's, 212 railroads and modern corporation, 188 Rainforest Crunch, 313 Rand, Ayn, 47, 89 random walk, 164 Rathenau, Walther, 256 rational expectations, 161; see also efficient market theory rationality, assumption of, 175 Ravenscraft, David, 279, 283-284 Reagan, Ronald, 87 real estate, 80 real sector predicting the financial, 125-126; see also business cycles Reconstruction Finance Corp., 286 reform, financial, difficulty of, 302 Regan, Edward, 27 regulation, government, overview, 90-99 Reich, Robert, 131 Relational Investors, 289 relationship investing, 293 religion banking and, 225 and belief in markets, 150 monetarism as, 242 and money, 225 restrictions on usury, 42 rentiers apologists, 293 appropriation of worker savings, 239 capture of Clinton administration, 134 consciousness, 237, 238; see also money, psychology of corporate cash flow share, 73-74 dominance of political discourse, 294 early 1990s riot, 288-291 euthanasia of, 210 evolutionary role, 8 formation through financial markets, 238 growing assertiveness, 207 proliferate over time, 215, 236 who needs them?


pages: 173 words: 14,313

Peers, Pirates, and Persuasion: Rhetoric in the Peer-To-Peer Debates by John Logie

1960s counterculture, Berlin Wall, book scanning, cuban missile crisis, dual-use technology, Fall of the Berlin Wall, Free Software Foundation, Hacker Ethic, Isaac Newton, Marshall McLuhan, moral panic, mutually assured destruction, peer-to-peer, plutocrats, pre–internet, publication bias, Richard Stallman, Search for Extraterrestrial Intelligence, search inside the book, SETI@home, Silicon Valley, slashdot, Steve Jobs, Steven Levy, Stewart Brand, Whole Earth Catalog

., copyrights and patents) are offered by the people, via Congress, and for the people, as an incentive for further production from authors and inventors. This represents a subtle but significant break from a broader European tradition in which the so-called “natural rights” of the author or inventor function as the bases for intellectual property protections. The 1991 Supreme Court’s ringing endorsement of copyright’s inherent public bias in the Feist case (once again: “The primary objective of copyright is not to reward the labor of authors, but ‘[t]o promote the Progress of Science and useful Arts.’”) almost certainly emboldened Robertson as he set about developing the my.mp3.com service. Robertson even agreed with the RIAA that Napster was enabling piracy.


pages: 231 words: 69,673

How Cycling Can Save the World by Peter Walker

active transport: walking or cycling, bike sharing, Boris Johnson, car-free, correlation does not imply causation, Crossrail, Donald Shoup, driverless car, Enrique Peñalosa, fixed-gear, gentrification, Intergovernmental Panel on Climate Change (IPCC), Ken Thompson, Kickstarter, meta-analysis, New Journalism, New Urbanism, post-work, publication bias, safety bicycle, Sidewalk Labs, Stop de Kindermoord, TED Talk, the built environment, traffic fines, Traffic in Towns by Colin Buchanan, transit-oriented development, urban planning

CHAPTER 7 1 Michael Polhamus, “Bill Would Require Neon Clothes, Government ID for Cyclists,” Jackson Hole News and Guide, January 30, 2015, http://www.jhnewsandguide.com/jackson_hole_daily/local/bill-would-require-neon-clothes-government-id-for-cyclists/article_d53b9712-2e93-517d-9e33-8f13d693ba21.html. 2 Wes Johnson, “Missouri Bill Requires Bicyclists to Fly 15-Foot Flag on Country Roads,” Springfield News-Leader, January 14, 2016. 3 “School Pupils Encouraged to Wear Hi-Vis Vests in Road Safety Scheme,” Grimsby Telegraph, January 23, 2012, http://www.grimsbytelegraph.co.uk/school-pupils-encouraged-wear-hi-vis-vests-road/story-15010565-detail/story.html. 4 Chris Boardman, “Why I Didn’t Wear a Helmet on BBC Breakfast,” BritishCycling.org, November 3, 2014, https://www.britishcycling.org.uk/campaigning/article/20141103-campaigning-news-Boardman--Why-I-didn-t-wear-a-helmet-on-BBC-Breakfast-0. 5 Nick Hussey, “Why My Cycling Clothing Company Uses Models without Helmets,” The Guardian, February 4, 2016, https://www.theguardian.com/environment/bike-blog/2016/feb/04/vulpine-bike-clothing-company-models-without-helmets-dont-hate-us. 6 Peter Jacobsen and Harry Rutter, “Cycling Safety,” in Pucher and Buehler, City Cycling, ch. 7. 7 “Helmets for Pedal Cyclists and for Users of Skateboards and Roller Skates,” European Committee for Standardization, 1997, http://www.mrtn.ch/pdf/en_1078.pdf. 8 R.G. Attewell, K. Glase, and M. McFadden, “Bicycle Helmet Efficacy: A Meta-Analysis,” Accident Analysis and Prevention 33 (2001). 9 Rune Elvik, “Publication bias and time-trend bias in meta-analysis of bicycle helmet efficacy: A re-analysis of Attewell, Glase and McFadden,” Accident Analysis and Prevention 43 (2011):1245–51. 10 E-mail exchange with the author. 11 Davis, Death on the Streets. 12 1985 Durbin-Harvey report, commissioned by UK Department of Transport from two professors of statistics. 13 Ian Walker, “Drivers Overtaking Bicyclists: Objective Data on the Effects of Riding Position, Helmet Use, Vehicle Type and Apparent Gender,” Accident Analysis and Prevention 39 (2007):417–25. 14 “Wearing a Helmet Puts Cyclists at Risk, Suggests Research,” University of Bath, September 11, 2016, http://www.bath.ac.uk/news/articles/archive/overtaking110906.html. 15 Tim Gamble and Ian Walker, “Wearing a Bicycle Helmet Can Increase Risk Taking and Sensation Seeking in Adults,” Psychological Science, 2016. 16 “Helmet Wearing Increases Risk Taking and Sensation Seeking,” University of Bath, January 25, 2016, http://www.bath.ac.uk/news/2016/01/25/helmet-wearing-risk-taking. 17 Fishman et al., “Barriers and Facilitators to Public Bicycle Scheme Use: A Qualitative Approach,” Transportation Research Part F: Traffic Psychology and Behaviour 15, Vol. 6 (2012):686–98. 18 Interview with the author. 19 N.C.


pages: 266 words: 67,272

Fun Inc. by Tom Chatfield

Adrian Hon, Alexey Pajitnov wrote Tetris, An Inconvenient Truth, Any sufficiently advanced technology is indistinguishable from magic, behavioural economics, Boris Johnson, cloud computing, cognitive dissonance, computer age, credit crunch, game design, invention of writing, longitudinal study, moral panic, publication bias, Silicon Valley, Skype, stem cell, upwardly mobile

Its author, Dr Christopher John Ferguson, an assistant professor of psychology at Texas A&M International University, set out to compare every article published in a peer-reviewed journal between 1995 and April 2007 that in some way investigated the effect of playing violent video games on some measure of aggressive behaviour. A total of seventeen published studies matched these criteria – and Ferguson’s conclusions were unexpectedly unequivocal. ‘Once corrected for publication bias,’ he reported, ‘studies of video game violence provided no support for the hypothesis that violent video game-playing is associated with higher aggression.’ Moreover, he added, the question ‘do violent games cause violence?’ is itself flawed in that ‘it assumes that such games have only negative effects and ignores the possibility of positive effects’ such as the possibility that violent games allow ‘catharsis’ of a kind in their players.


pages: 218 words: 70,323

Critical: Science and Stories From the Brink of Human Life by Matt Morgan

agricultural Revolution, Atul Gawande, biofilm, Black Swan, Checklist Manifesto, cognitive dissonance, crew resource management, Daniel Kahneman / Amos Tversky, David Strachan, discovery of penicillin, en.wikipedia.org, hygiene hypothesis, job satisfaction, John Snow's cholera map, meta-analysis, personalized medicine, publication bias, randomized controlled trial, Silicon Valley, stem cell, Steve Jobs, sugar pill, traumatic brain injury

It is estimated that over half of all studies are never completed and that the data from one-third of trials are never published. Of those that are published, only half are read by more than just two people. Furthermore, journals are more likely to publish papers with positive results, conducted by well-known groups, written by men and coming from Western countries. This introduces yet more bias, known as publication bias. So, we now have bias squared. It is on this flimsy basis that we decide how to treat patients. This selective publishing should not be acceptable in medicine. The former editor of the British Medical Journal has argued that the entire medical journal industry should be disbanded. The powerful ‘all trials’ movement led by Dr Ben Goldacre aims to publicise these issues surrounding clinical-trial data loss, manipulation and concealment.


pages: 242 words: 67,233

McMindfulness: How Mindfulness Became the New Capitalist Spirituality by Ronald Purser

"World Economic Forum" Davos, Abraham Maslow, Affordable Care Act / Obamacare, Bernie Sanders, biodiversity loss, British Empire, capitalist realism, commoditize, corporate governance, corporate social responsibility, digital capitalism, Donald Trump, Edward Snowden, fake news, Frederick Winslow Taylor, friendly fire, Goldman Sachs: Vampire Squid, housing crisis, Howard Zinn, impulse control, job satisfaction, liberation theology, Lyft, Marc Benioff, mass incarceration, meta-analysis, military-industrial complex, moral panic, Nelson Mandela, neoliberal agenda, Nicholas Carr, obamacare, placebo effect, precariat, prosperity theology / prosperity gospel / gospel of success, publication bias, Ralph Waldo Emerson, randomized controlled trial, Ronald Reagan, Salesforce, science of happiness, scientific management, shareholder value, Sheryl Sandberg, Silicon Valley, Slavoj Žižek, source of truth, stealth mode startup, TED Talk, The Spirit Level, Tony Hsieh, too big to fail, Torches of Freedom, trickle-down economics, uber lyft, work culture

As Walach puts it: “What is not answered is whether the true contribution is the mindfulness practice itself.”42 Positive effects could simply be attributed to having some downtime during the school day, or to feeling heard in discussion. There is also the risk of “social desirability bias,” since children know they have been chosen as subjects in a study in which improvement is expected. Then there is the issue of publication bias, where only positive findings are published. A recent study by a group of psychologists at McGill University found that of the 124 randomized controlled studies they reviewed, 90% reported positive results.43 Such a number is quite high given the small sample sizes; an unbiased rate for samples of this size should be no more than 65%.
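The 90-per-cent-versus-65-per-cent comparison is, at bottom, a claim about statistical power: given realistic effect sizes and small samples, only around two thirds of studies should reach significance even when the effect is real. A minimal Python sketch of that logic follows; the effect size and sample size are assumptions chosen for illustration, not values from the McGill review.

```python
import numpy as np
from scipy import stats

def expected_positive_share(effect_d, n_per_group, alpha=0.05):
    """Analytic power of a two-sided, two-sample t-test: the share of studies of
    this size that should come out 'positive' (p < alpha) if the effect is real."""
    df = 2 * n_per_group - 2
    ncp = effect_d * np.sqrt(n_per_group / 2.0)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# With a moderate effect (d = 0.5) and 30 children per group, only about half
# of the studies should reach p < .05, nowhere near a 90% positive rate.
print(round(expected_positive_share(0.5, 30), 2))
```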


pages: 481 words: 72,071

Why Has Nobody Told Me This Before? by Dr. Julie Smith

Albert Einstein, COVID-19, fake news, fear of failure, meta-analysis, publication bias, randomized controlled trial, side hustle, TikTok

L. (2013), ‘Exercise-Induced Endocannabinoid Signaling Is Modulated by Intensity’, European Journal of Applied Physiology, 113 (4), 869–75. Sanchez-Villegas, A., et al. (2013), ‘Mediterranean dietary pattern and depression: the PREDIMED randomized trial’, BMC Medicine, 11, 208. Schuch, F. B., Vancampfort, D., Richards, J., et al. (2016), ‘Exercise as a treatment for depression: A Meta-Analysis Adjusting for Publication Bias’, Journal of Psychiatric Research, 77, 42–51. Singh, N. A., Clements, K. M., & Fiatarone, M. A. (1997), ‘A Randomized Controlled Trial of the Effect of Exercise on Sleep’, Sleep, 20 (2), 95–101. Tops, M., Riese, H., et al. (2008), ‘Rejection sensitivity relates to hypocortisolism and depressed mood state in young women’, Psychoneuroendocrinology, 33 (5), 551–9.


pages: 299 words: 81,377

The No Need to Diet Book: Become a Diet Rebel and Make Friends With Food by Plantbased Pixie

Albert Einstein, confounding variable, David Attenborough, employer provided health coverage, fake news, food desert, meta-analysis, microaggression, nocebo, placebo effect, publication bias, randomized controlled trial, sugar pill, ultra-processed food

Very few of these programmes publish their results, tending to stick to individual anecdotes instead, but the limited research we do have suggests that the largest weight loss was around 3.2 per cent of body weight after two years.9 Are you underwhelmed? ’Cause I sure am. On top of all that, you have to consider publication bias – scientific journals are far more likely to publish a study that shows a significant effect than something that didn’t work. Weight-loss programmes in the workplace and in schools have been equally unsuccessful. Despite appearing to be very concerned about the students’ growing waistlines, very few schools actually assess the impact of making nutritional changes on pupils’ weight.


pages: 442 words: 85,640

This Book Could Fix Your Life: The Science of Self Help by New Scientist, Helen Thomson

Abraham Wald, Black Lives Matter, caloric restriction, caloric restriction, classic study, coronavirus, correlation does not imply causation, COVID-19, David Attenborough, delayed gratification, Donald Trump, Elon Musk, fake it until you make it, Flynn Effect, George Floyd, global pandemic, hedonic treadmill, job satisfaction, Kickstarter, lock screen, lockdown, meta-analysis, microbiome, nocebo, placebo effect, publication bias, randomized controlled trial, risk tolerance, selective serotonin reuptake inhibitor (SSRI), Sheryl Sandberg, social distancing, Steve Jobs, sugar pill, sunk-cost fallacy, survivorship bias, TED Talk, TikTok, ultra-processed food, Walter Mischel

New standards of evidence were needed, more replication was essential and lots of previously accepted assumptions were now found lacking. Cuddy’s research took a particularly bad hit and was heavily criticised by peers and the media. One of the big problems with her research was that it didn’t pass the p-curve test – a statistical tool that detects ‘publication bias’.7 In simple terms, it tests whether researchers may have caused errors in their data by cherry-picking certain data points most likely to produce a publishable result, or perhaps just got lucky with their data. The power pose didn’t pass the p-curve test and was given the heave-ho. In 2018, it made something of a comeback.
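The intuition behind the p-curve can be shown with a short simulation, a sketch of the general idea rather than the actual p-curve software; the effect sizes and sample sizes below are illustrative assumptions. When the studied effect is real, the significant p-values pile up near zero; when a null effect is reported selectively, the curve comes out roughly flat.

```python
import numpy as np
from scipy import stats

def significant_p_values(effect_d, n_per_group=20, n_studies=5_000, seed=0):
    """Run many simulated two-group studies and keep the p-values of those that
    came out 'significant' (p < .05): the raw material a p-curve inspects."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_d, 1.0, n_per_group)
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            kept.append(p)
    return np.array(kept)

for label, d in [("real effect (d = 0.5)", 0.5), ("null effect, selectively reported", 0.0)]:
    p = significant_p_values(d)
    counts, _ = np.histogram(p, bins=[0, 0.01, 0.02, 0.03, 0.04, 0.05])
    print(label, np.round(counts / counts.sum(), 2))   # right-skewed vs. roughly flat
```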


Grain Brain: The Surprising Truth About Wheat, Carbs, and Sugar--Your Brain's Silent Killers by David Perlmutter, Kristin Loberg

autism spectrum disorder, caloric restriction, caloric restriction, epigenetics, Gary Taubes, Gregor Mendel, Kickstarter, longitudinal study, meta-analysis, microbiome, mouse model, phenotype, publication bias, Ralph Waldo Emerson, selective serotonin reuptake inhibitor (SSRI), stem cell

It concluded that “intake of saturated fat was not associated with an increased risk of coronary heart disease, stroke, or cardiovascular disease.” In comparing the lowest to the highest consumption of saturated fat, the actual risk for coronary heart disease was 19 percent lower in the group consuming the highest amount of saturated fat. The authors also stated: “Our results suggested a publication bias, such that studies with significant associations tended to be received more favorably for publication.” What the authors are implying is that when other studies presented conclusions that were more familiar to the mainstream (i.e., fat causes heart disease), not to mention more attractive to Big Pharma, they were more likely to get published.


pages: 274 words: 93,758

Phishing for Phools: The Economics of Manipulation and Deception by George A. Akerlof, Robert J. Shiller, Stanley B Resor Professor Of Economics Robert J Shiller

Andrei Shleifer, asset-backed security, Bear Stearns, behavioural economics, Bernie Madoff, business cycle, Capital in the Twenty-First Century by Thomas Piketty, Carl Icahn, collapse of Lehman Brothers, compensation consultant, corporate raider, Credit Default Swap, Daniel Kahneman / Amos Tversky, dark matter, David Brooks, desegregation, en.wikipedia.org, endowment effect, equity premium, financial intermediation, financial thriller, fixed income, full employment, George Akerlof, greed is good, income per capita, invisible hand, John Maynard Keynes: Economic Possibilities for our Grandchildren, junk bonds, Kenneth Arrow, Kenneth Rogoff, late fees, loss aversion, market bubble, Menlo Park, mental accounting, Michael Milken, Milgram experiment, money market fund, moral hazard, new economy, Pareto efficiency, Paul Samuelson, payday loans, Ponzi scheme, profit motive, publication bias, Ralph Nader, randomized controlled trial, Richard Thaler, Robert Shiller, Robert Solow, Ronald Reagan, Savings and loan crisis, short selling, Silicon Valley, stock buybacks, the new new thing, The Predators' Ball, the scientific method, The Theory of the Leisure Class by Thorstein Veblen, The Wealth of Nations by Adam Smith, theory of mind, Thorstein Veblen, too big to fail, transaction costs, Unsafe at Any Speed, Upton Sinclair, Vanguard fund, Vilfredo Pareto, wage slave

Bero, Benjamin Djulbegovic, and Otavio Clark, “Pharmaceutical Industry Sponsorship and Research Outcome and Quality: Systematic Review,” British Medical Journal 326, no. 7400 (May 31, 2003): 1167. Bekelman, Li, and Gross also refer to two studies of “multiple reporting of studies with positive outcomes, further compounding publication bias.” 17. Bob Grant, “Elsevier Published 6 Fake Journals,” The Scientist, May 7, 2009, accessed November 24, 2014, http://classic.the-scientist.com/blog/display/55679/. See also Ben Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (New York: Faber and Faber/Farrar, Straus and Giroux, 2012), pp. 309–10. 18.


pages: 371 words: 109,320

News and How to Use It: What to Believe in a Fake News World by Alan Rusbridger

airport security, basic income, Bellingcat, Big Tech, Black Lives Matter, Bletchley Park, Boris Johnson, Brexit referendum, call centre, Cambridge Analytica, Chelsea Manning, citizen journalism, Climategate, cognitive dissonance, coronavirus, correlation does not imply causation, COVID-19, Credit Default Swap, crisis actor, cross-subsidies, crowdsourcing, disinformation, Dominic Cummings, Donald Trump, Edward Snowden, end-to-end encryption, fake news, Filter Bubble, future of journalism, George Floyd, ghettoisation, global pandemic, Google Earth, green new deal, hive mind, housing crisis, Howard Rheingold, illegal immigration, Intergovernmental Panel on Climate Change (IPCC), Jeff Bezos, Jeffrey Epstein, Jeremy Corbyn, Johann Wolfgang von Goethe, Julian Assange, Kickstarter, lockdown, Mark Zuckerberg, Murray Gell-Mann, Narrative Science, Neil Kinnock, Nelson Mandela, New Journalism, Nicholas Carr, ocean acidification, offshore financial centre, post-truth, profit motive, public intellectual, publication bias, Seymour Hersh, Snapchat, social distancing, Social Justice Warrior, Steve Bannon, tech baron, the scientific method, TikTok, universal basic income, WikiLeaks, yellow journalism

Too often statistics are trusted because they imply a level of precision, without an investigation of their validity. An important part of a journalist’s craft is to establish the incentives to report particular results. Governments can dislike uncomfortable news, particularly close to elections. Academics can be rewarded for new or exciting results, leading to publication bias – particularly if negative results do not see the light of day. A striking feature of public services across many countries has been the rise of performance monitoring, which records, analyses and publishes data in order to give the public a better idea of how systems or policies are implemented and can be improved.


pages: 410 words: 114,005

Black Box Thinking: Why Most People Never Learn From Their Mistakes--But Some Do by Matthew Syed

Abraham Wald, Airbus A320, Alfred Russel Wallace, Arthur Eddington, Atul Gawande, Black Swan, Boeing 747, British Empire, call centre, Captain Sullenberger Hudson, Checklist Manifesto, cognitive bias, cognitive dissonance, conceptual framework, corporate governance, creative destruction, credit crunch, crew resource management, deliberate practice, double helix, epigenetics, fail fast, fear of failure, flying shuttle, fundamental attribution error, Great Leap Forward, Gregor Mendel, Henri Poincaré, hindsight bias, Isaac Newton, iterative process, James Dyson, James Hargreaves, James Watt: steam engine, Johannes Kepler, Joseph Schumpeter, Kickstarter, Lean Startup, luminiferous ether, mandatory minimum, meta-analysis, minimum viable product, publication bias, quantitative easing, randomized controlled trial, selection bias, seminal paper, Shai Danziger, Silicon Valley, six sigma, spinning jenny, Steve Jobs, the scientific method, Thomas Kuhn: the structure of scientific revolutions, too big to fail, Toyota Production System, US Airways Flight 1549, Wall-E, Yom Kippur War

*This has a rather obvious analog with what is sometimes called “defensive medicine,” in which clinicians use a host of unnecessary tests that protect their backs, but massively increase health-care costs. *Science is not without flaws, and an eye should always be kept on social and institutional obstacles to progress. Current concerns include publication bias (where only successful experiments are published in journals), the weakness of the peer review system, and the fact that many experiments do not appear to be replicable. For a good review of the issues, see: www.economist.com/news/briefing/21588057-scientists-think-self-correcting-alarming-degree-if-not-trouble.


The Economics Anti-Textbook: A Critical Thinker's Guide to Microeconomics by Rod Hill, Anthony Myatt

American ideology, Andrei Shleifer, Asian financial crisis, bank run, barriers to entry, behavioural economics, Bernie Madoff, biodiversity loss, business cycle, cognitive dissonance, collateralized debt obligation, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, David Ricardo: comparative advantage, different worldview, electricity market, endogenous growth, equal pay for equal work, Eugene Fama: efficient market hypothesis, experimental economics, failed state, financial innovation, full employment, gender pay gap, Gini coefficient, Glass-Steagall Act, Gunnar Myrdal, happiness index / gross national happiness, Home mortgage interest deduction, Howard Zinn, income inequality, indoor plumbing, information asymmetry, Intergovernmental Panel on Climate Change (IPCC), invisible hand, John Maynard Keynes: Economic Possibilities for our Grandchildren, Joseph Schumpeter, Kenneth Arrow, liberal capitalism, low interest rates, low skilled workers, market bubble, market clearing, market fundamentalism, Martin Wolf, medical malpractice, military-industrial complex, minimum wage unemployment, moral hazard, Paradox of Choice, Pareto efficiency, Paul Samuelson, Peter Singer: altruism, positional goods, prediction markets, price discrimination, price elasticity of demand, principal–agent problem, profit maximization, profit motive, publication bias, purchasing power parity, race to the bottom, Ralph Nader, random walk, rent control, rent-seeking, Richard Thaler, Ronald Reagan, search costs, shareholder value, sugar pill, The Myth of the Rational Market, the payments system, The Spirit Level, The Wealth of Nations by Adam Smith, Thorstein Veblen, ultimatum game, union organizing, working-age population, World Values Survey, Yogi Berra

These results have been the subject of a ‘lively’ debate, discussed in Card and Krueger’s 1995 book Myth and Measurement.4 Some idea of the tone of the debate can be had by noting that Valentine (1996) accused Card and Krueger (1994) of practising ‘politically correct’ economics, and of deliberately using suspect data in one of their studies. For their part, Card and Krueger present evidence of ‘publication bias’ against results contrary to textbook conventional wisdom (1995: 186). A feature of the debate, key for our discussion of methodology, is that one team of authors would consistently find results different from another team. David Levine, editor of the Berkeley journal Industrial Relations, attributed this phenomenon to ‘author biases’, which he diplomatically defined as ‘conscious or unconscious biases in searching for a robust equation’ (2001: 161).


Fix Your Gut: The Definitive Guide to Digestive Disorders by John Brisson

23andMe, big-box store, biofilm, butterfly effect, clean water, Helicobacter pylori, life extension, meta-analysis, microbiome, pattern recognition, publication bias, selective serotonin reuptake inhibitor (SSRI), Silicon Valley, Zimmermann PGP

Firstly, the papers identified in our study were limited to those openly published up to Jul 2012; it is possible that some related published or unpublished studies that might meet the inclusion criteria were missed, resulting in any inevitable bias, though the funnel plots and the Egger’s tests failed to show any significant publication bias. Secondly, the results may be interpreted with care because of the limited number and small sample sizes of each included studies. Thirdly, subgroup analyses regarding other confounding factors such as smoking status, age and gender have not been conducted in the present study because sufficient information could not be extracted from the primary literature.”


The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good by William Easterly

"World Economic Forum" Davos, airport security, anti-communist, Asian financial crisis, bank run, banking crisis, Bob Geldof, Bretton Woods, British Empire, call centre, clean water, colonial exploitation, colonial rule, Edward Glaeser, end world poverty, European colonialism, failed state, farmers can use mobile phones to check market prices, George Akerlof, Gunnar Myrdal, guns versus butter model, Hernando de Soto, income inequality, income per capita, Indoor air pollution, intentional community, invisible hand, Kenneth Rogoff, laissez-faire capitalism, land bank, land reform, land tenure, Live Aid, microcredit, moral hazard, Naomi Klein, Nelson Mandela, publication bias, purchasing power parity, randomized controlled trial, Ronald Reagan, Scramble for Africa, structural adjustment programs, The Fortune at the Bottom of the Pyramid, the scientific method, The Wealth of Nations by Adam Smith, Tragedy of the Commons, transaction costs, TSMC, War on Poverty, Xiaogang Anhui farmers

For the quoted passage on the motivation behind this new aid, see http://www.whitehouse.gov/infocus/developingnations/. 11. http://www.mca.gov/countries_overview.html. 12. Esther Duflo and Michael Kremer, “Use of Randomization in the Evaluation of Development Effectiveness,” mimeograph, Harvard and MIT (2003), discuss publication bias. A classic paper on this problem is J. Bradford DeLong and Kevin Lang, “Are All Economic Hypotheses False?” Journal of Political Economy 100, no. 6 (December 1992): 1257–72. 13. UN Millennium Project Report, “Investing in Development: A Practical Plan to Achieve the Millennium Development Goals,” overview, box 8, p. 41. 14. Commission for Africa, “Our Common Interest: Report of the Commission for Africa,” p. 348; www.commissionforafrica.org/english/report/introduction.html. 15. Raghuram G.


pages: 636 words: 140,406

The Case Against Education: Why the Education System Is a Waste of Time and Money by Bryan Caplan

affirmative action, Affordable Care Act / Obamacare, assortative mating, behavioural economics, conceptual framework, correlation does not imply causation, deliberate practice, deskilling, disruptive innovation, do what you love, driverless car, en.wikipedia.org, endogenous growth, experimental subject, fear of failure, Flynn Effect, future of work, George Akerlof, ghettoisation, hive mind, job satisfaction, Kenneth Arrow, Khan Academy, labor-force participation, longitudinal study, low interest rates, low skilled workers, market bubble, mass incarceration, meta-analysis, Peter Thiel, price discrimination, profit maximization, publication bias, risk tolerance, Robert Gordon, Ronald Coase, school choice, selection bias, Silicon Valley, statistical model, Steven Pinker, The Bell Curve by Richard Herrnstein and Charles Murray, the scientific method, The Wisdom of Crowds, trickle-down economics, twin studies, Tyler Cowen, unpaid internship, upwardly mobile, women in the workforce, yield curve, zero-sum game

Arum, Richard, and Yossi Shavit. 1995. “Secondary Vocational Education and the Transition from School to Work.” Sociology of Education 68 (3): 187–204. Ashenfelter, Orley, Colm Harmon, and Hessel Oosterbeek. 1999. “A Review of Estimates of the Schooling/Earnings Relationship, with Tests for Publication Bias.” Labour Economics 6 (4): 453–70. Assaad, Ragui. 1997. “The Effects of Public Sector Hiring and Compensation Policies on the Egyptian Labor Market.” World Bank Economic Review 11 (1): 85–118. Astin, Alexander. 2005–6. “Making Sense out of Degree Completion Rates.” Journal of College Student Retention 7 (1–2): 5–17.


pages: 742 words: 166,595

The Barbell Prescription: Strength Training for Life After 40 by Jonathon Sullivan, Andy Baker

An Inconvenient Truth, complexity theory, en.wikipedia.org, epigenetics, experimental subject, Gary Taubes, indoor plumbing, junk bonds, longitudinal study, meta-analysis, moral panic, phenotype, publication bias, randomized controlled trial, selective serotonin reuptake inhibitor (SSRI), the scientific method, Y Combinator

In this pivotal chapter, we’ll survey some of that evidence. This is as good a time as any to point out an inconvenient truth about published scientific research: Like all other human endeavors, it’s about 90% shit by weight. This has always been true, and if anything it’s even more true now, as research effort is heavily impacted by publication bias, the pressures of academic life, and the corruption of science by industry, which has a decidedly non-scientific axe to grind.2 This sad fact of life does not exempt the biomedical literature,3 whether we’re talking about exercise medicine,4 cancer chemotherapy, diagnostic imaging, or even basic cell biology. So I want to be perfectly up front with you: Just as you can easily find studies showing that generally accepted and widely used medical therapies do not actually produce the desired results, so are there contrary findings in the literature on strength training for various disease states and their markers.5 This overview of the literature focuses on the overwhelming preponderance of the evidence, draws heavily on physiological reasoning and experience, and would of necessity involve my own very human biases, whether I admitted it or not.


pages: 687 words: 165,457

Exercised: The Science of Physical Activity, Rest and Health by Daniel Lieberman

A. Roger Ekirch, active measures, caloric restriction, caloric restriction, classic study, clean water, clockwatching, Coronary heart disease and physical activity of work, correlation does not imply causation, COVID-19, death from overwork, Donald Trump, epigenetics, Exxon Valdez, George Santayana, hygiene hypothesis, impulse control, indoor plumbing, Kickstarter, libertarian paternalism, longitudinal study, meta-analysis, microbiome, mouse model, phenotype, placebo effect, publication bias, randomized controlled trial, Ronald Reagan, selective serotonin reuptake inhibitor (SSRI), social distancing, Steven Pinker, twin studies, two and twenty, working poor

., et al. (2019), Aerobic exercise for adult patients with major depressive disorder in mental health services: A systematic review and meta-analysis, Depression and Anxiety 36:39–53; Stubbs, B., et al. (2017), An examination of the anxiolytic effects of exercise for people with anxiety and stress-related disorders: A meta-analysis, Psychiatry Research 249:102–8; Schuch, F. B., et al. (2016), Exercise as a treatment for depression: A meta-analysis adjusting for publication bias, Journal of Psychiatric Research 77:42–51; Josefsson, T., Lindwall, M., and Archer, T. (2014), Physical exercise intervention in depressive disorders: Meta-analysis and systematic review, Scandinavian Journal of Medicine and Science in Sports 24:259–72; Wegner, M., et al. (2014), Effects of exercise on anxiety and depression disorders: Review of meta-analyses and neurobiological mechanisms, CNS and Neurological Disorders—Drug Targets 13:1002–14; Asmundson, G.


pages: 694 words: 197,804

The Pot Book: A Complete Guide to Cannabis by Julie Holland

benefit corporation, Berlin Wall, Burning Man, confounding variable, drug harm reduction, intentional community, longitudinal study, Mahatma Gandhi, mandatory minimum, Maui Hawaii, meta-analysis, pattern recognition, phenotype, placebo effect, profit motive, publication bias, RAND corporation, randomized controlled trial, Ronald Reagan, Rosa Parks, Stephen Hawking, traumatic brain injury, University of East Anglia, zero-sum game

Potential confounders were addressed in these studies, including other drug use and the question of early psychotic symptoms (Zammit et al. 2002; Arseneault et al. 2004). However, as Weiser and others have pointed out, a two- to threefold increase in risk is not so sizable and could be explained by unrecognized confounding variables (Weiser and Noy 2005b). Finally, there is also the issue of potential publication bias; negative studies that find no association between an exposure and an outcome may be less likely to be published. Biological Plausibility Biological plausibility lends support to the hypothesis of a causal association. If there is a medical basis for the phenomenon in question, it makes more sense.


pages: 1,261 words: 294,715

Behave: The Biology of Humans at Our Best and Worst by Robert M. Sapolsky

autism spectrum disorder, autonomous vehicles, behavioural economics, Bernie Madoff, biofilm, blood diamond, British Empire, Broken windows theory, Brownian motion, car-free, classic study, clean water, cognitive dissonance, cognitive load, corporate personhood, corporate social responsibility, Daniel Kahneman / Amos Tversky, delayed gratification, desegregation, different worldview, domesticated silver fox, double helix, Drosophila, Edward Snowden, en.wikipedia.org, epigenetics, Flynn Effect, framing effect, fudge factor, George Santayana, global pandemic, Golden arches theory, Great Leap Forward, hiring and firing, illegal immigration, impulse control, income inequality, intentional community, John von Neumann, Loma Prieta earthquake, long peace, longitudinal study, loss aversion, Mahatma Gandhi, meta-analysis, microaggression, mirror neurons, Mohammed Bouazizi, Monkeys Reject Unequal Pay, mouse model, mutually assured destruction, Nelson Mandela, Network effects, nocebo, out of africa, Peter Singer: altruism, phenotype, Philippa Foot, placebo effect, publication bias, RAND corporation, risk tolerance, Rosa Parks, selective serotonin reuptake inhibitor (SSRI), self-driving car, Silicon Valley, Skinner box, social contagion, social distancing, social intelligence, Stanford marshmallow experiment, Stanford prison experiment, stem cell, Steven Pinker, strikebreaker, theory of mind, Tragedy of the Commons, transatlantic slave trade, traveling salesman, trickle-down economics, trolley problem, twin studies, ultimatum game, Walter Mischel, wikimedia commons, zero-sum game, zoonotic diseases

Yancey, “The Effects of Media Violence Exposure on Criminal Aggression: A Meta-analysis,” Criminal Justice and Behav 35 (2008): 772; C. Anderson et al., “Violent Video Game Effects on Aggression, Empathy, and Prosocial Behavior in Eastern and Western Countries: A Meta-analytic Review,” Psych Bull 136, 151; C. J. Ferguson, “Evidence for Publication Bias in Video Game Violence Effects Literature: A Meta-analytic Review,” Aggression and Violent Behavior 12 (2007): 470; C. Ferguson, “The Good, the Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games,” Psychiatric Quarterly 78 (2007): 309. 42. W.


pages: 1,199 words: 332,563

Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition by Robert N. Proctor

"RICO laws" OR "Racketeer Influenced and Corrupt Organizations", bioinformatics, carbon footprint, clean water, corporate social responsibility, Deng Xiaoping, desegregation, disinformation, Dr. Strangelove, facts on the ground, friendly fire, germ theory of disease, global pandemic, index card, Indoor air pollution, information retrieval, invention of gunpowder, John Snow's cholera map, language of flowers, life extension, New Journalism, optical character recognition, pink-collar, Ponzi scheme, Potemkin village, precautionary principle, publication bias, Ralph Nader, Ronald Reagan, selection bias, speech recognition, stem cell, telemarketer, Thomas Kuhn: the structure of scientific revolutions, Triangle Shirtwaist Factory, Upton Sinclair, vertical integration, Yogi Berra

Switzer denounced the EPA’s report as highly flawed and “problematic,” peppering his critique with pejoratives like “astonishing,” “equivocal,” “deceptive and pointless,” and “serious difficulties.” The Stanford statistician accused the EPA of imprecision, inconsistency, faulty interpretations, improper extrapolations, use of “crude and disputable” estimates of exposure, bias from confounding and misclassification, improper treatment of publication bias, reliance on inconsistent or improperly recorded data, and several other flaws.39 Switzer was well paid for his services, receiving a total of $647,046 from CIAR and other grants in one two-year period. He was also paid handsomely for private consultations with cartel law firms. In one three-month period in the fall of 1991 he received $26,900 from Covington & Burling for consulting on “health effects of exposure to ETS in the workplace” and an analysis of “epidemiology of spousal smoke exposure and lung cancer.”


pages: 1,157 words: 379,558

Ashes to Ashes: America's Hundred-Year Cigarette War, the Public Health, and the Unabashed Triumph of Philip Morris by Richard Kluger

air freight, Albert Einstein, book value, California gold rush, cognitive dissonance, confounding variable, corporate raider, desegregation, disinformation, double entry bookkeeping, family office, feminist movement, full employment, ghettoisation, independent contractor, Indoor air pollution, junk bonds, medical malpractice, Mikhail Gorbachev, plutocrats, power law, publication bias, Ralph Nader, Ralph Waldo Emerson, RAND corporation, rent-seeking, risk tolerance, Ronald Reagan, selection bias, stock buybacks, The Chicago School, the scientific method, Torches of Freedom, trade route, transaction costs, traveling salesman, union organizing, upwardly mobile, urban planning, urban renewal, vertical integration, War on Poverty

Public impressions to the contrary, no investigator had produced evidence remotely approaching in strength and consistency findings like those incriminating direct smoking by Wynder, Hammond and Horn, Doll and Hill, and Auerbach. The industry could thus retain the hope that a large-scale study might fail to show a correlation between lung cancer occurrence and exposure to ETS among nonsmokers. Such results, however, might not find their way into scientific journals because of a phenomenon known as “publication bias”: studies that produced negative results or did not report a statistically significant relationship were generally assigned a low priority among submissions. But in the spring of 1990, a Philip Morris scientist, Thomas J. Borelli, who bore the suggestive title of “manager of scientific issues,” was scouring about for unpublished studies on ETS and, while consulting the University Microfilms International Dissertation Information Service, struck gold.