p-value



pages: 719 words: 104,316

R Cookbook by Paul Teetor

Debian, en.wikipedia.org, p-value, quantitative trading / quantitative finance, statistical model

However, it produces an annoying warning message, shown here at the bottom of the output, when the p-value is below 0.01:

> library(tseries)
> adf.test(x)

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -4.3188, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(x) : p-value smaller than printed p-value

Fortunately, I can muzzle the function by calling it inside suppressWarnings(...):

> suppressWarnings(adf.test(x))

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -4.3188, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Notice that the warning message disappeared. The message is not entirely lost because R retains it internally. I can retrieve the message at my leisure by using the warnings function:

> warnings()
Warning message:
In adf.test(x) : p-value smaller than printed p-value

Some functions also produce “messages” (in R terminology), which are even more benign than warnings.

Solution
Use the table function to produce a contingency table from the two factors. Then use the summary function to perform a chi-squared test of the contingency table:

> summary(table(fac1,fac2))

The output includes a p-value. Conventionally, a p-value of less than 0.05 indicates that the variables are likely not independent, whereas a p-value exceeding 0.05 fails to provide any such evidence.

Discussion
This example performs a chi-squared test on the contingency table of Recipe 9.3 and yields a p-value of 0.01255:

> summary(table(initial,outcome))
Number of cases in table: 100
Number of factors: 2
Test for independence of all factors:
        Chisq = 8.757, df = 2, p-value = 0.01255

The small p-value indicates that the two factors, initial and outcome, are probably not independent. Practically speaking, we conclude there is some connection between the variables.

Do you notice the extreme righthand column containing double asterisks (**), a single asterisk (*), and a period (.)? That column highlights the significant variables. The line labeled "Signif. codes" at the bottom gives a cryptic guide to the flags’ meanings:

***      p-value between 0 and 0.001
**       p-value between 0.001 and 0.01
*        p-value between 0.01 and 0.05
.        p-value between 0.05 and 0.1
(blank)  p-value between 0.1 and 1.0

The column labeled Std. Error is the standard error of the estimated coefficient. The column labeled t value is the t statistic from which the p-value was calculated.

Residual standard error
Residual standard error: 1.625 on 26 degrees of freedom
This reports the standard error of the residuals (σ)—that is, the sample standard deviation of ε.

R2 (coefficient of determination)
Multiple R-squared: 0.4981, Adjusted R-squared: 0.4402
R2 is a measure of the model’s quality.


pages: 442 words: 94,734

The Art of Statistics: Learning From Data by David Spiegelhalter

Antoine Gombaud: Chevalier de Méré, Bayesian statistics, Carmen Reinhart, complexity theory, computer vision, correlation coefficient, correlation does not imply causation, dark matter, Edmond Halley, Estimating the Reproducibility of Psychological Science, Hans Rosling, Kenneth Rogoff, meta analysis, meta-analysis, Nate Silver, Netflix Prize, p-value, placebo effect, probability theory / Blaise Pascal / Pierre de Fermat, publication bias, randomized controlled trial, recommendation engine, replication crisis, self-driving car, speech recognition, statistical model, The Design of Experiments, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, Thomas Bayes, Thomas Malthus

Overall, of 125 ‘discoveries’, 36% (45) are false discoveries. Since all these false discoveries were based on a P-value identifying a ‘significant’ result, P-values have been increasingly blamed for a flood of incorrect scientific conclusions. In 2015 a reputable psychology journal even announced that they would ban the use of NHST (Null Hypothesis Significance Testing). Finally in 2016 the American Statistical Association (ASA) managed to get a group of statisticians to agree on six principles about P-values.fn9 The first of these principles simply points out what P-values can do: P-values can indicate how incompatible the data are with a specified statistical model. As we have repeatedly seen, P-values do this by essentially measuring how surprising the data are, given a null hypothesis that something does not exist.

Regardless of the actual experiments conducted, if the intervention really has no effect, it can be proved theoretically that any P-value that tests the null hypothesis is equally likely to take on any value between 0 and 1, and so the P-values from many studies testing the effect should tend to scatter uniformly. Whereas if there really is an effect, the P-values will tend to be skewed towards small values. The idea of the ‘P-curve’ is to look at all the actual P-values reported for significant test results – that is, when P < 0.05. Two features create suspicion. First, if there is a cluster of P-values just below 0.05, it suggests some massaging has been done to tip some of them over this crucial boundary. Second, suppose these significant P-values are not skewed towards 0, but fairly uniformly scattered between 0 and 0.05. Then this is just the pattern that would occur were the null hypothesis true, and the only results being reported as significant were those 1 in 20 that tipped over P < 0.05 by luck.
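That uniform-versus-skewed behaviour is easy to check by simulation. The sketch below is an illustration only, not from the book; the function name simulate_p_values, the sample sizes, and the effect size are invented for the purpose.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_p_values(effect_size, n_per_group=30, n_studies=5000):
    """Run many two-group 'studies' and return their two-sided t-test p-values."""
    p_values = []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        p_values.append(stats.ttest_ind(control, treated).pvalue)
    return np.array(p_values)

p_null = simulate_p_values(effect_size=0.0)   # no real effect: p-values ~ uniform
p_real = simulate_p_values(effect_size=0.5)   # a genuine effect: p-values pile up near 0

print("null: share of p < 0.05 =", (p_null < 0.05).mean())   # roughly 0.05
print("real: share of p < 0.05 =", (p_real < 0.05).mean())   # considerably larger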

prospective cohort study: when a set of individuals are identified, background factors measured, and then they are followed up and relevant outcomes observed. Such studies are lengthy and expensive, and may not identify many rare events. P-value: a measure of discrepancy between data and a null hypothesis. For a null hypothesis H0, let T be a statistic for which large values indicate inconsistency with H0. Suppose we observe a value t. Then a (one-sided) P-value is the probability of observing such an extreme value, were H0 true, that is P(T ≥ t|H0). If both small and large values of T indicate inconsistency with H0, then the two-sided P-value is the probability of observing such a large value in either direction. Often the two-sided P-value is simply taken as double the one-sided P-value, while the R software uses the total probability of events which have a lower probability of occurring than that actually observed.
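A brief illustration of this definition, not from the book: for a standard normal test statistic, the one-sided P-value P(T ≥ t | H0) and the common doubled two-sided version can be computed directly; the observed value 1.96 is an arbitrary example and the use of SciPy is my own choice.

from scipy.stats import norm

t_observed = 1.96                       # hypothetical observed z-statistic
one_sided = norm.sf(t_observed)         # P(Z >= 1.96 | H0), about 0.025
two_sided = 2 * one_sided               # doubled one-sided value, about 0.05

print(f"one-sided P-value: {one_sided:.4f}")
print(f"two-sided P-value: {two_sided:.4f}")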


Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie

Albert Einstein, anesthesia awareness, Bayesian statistics, Carmen Reinhart, Cass Sunstein, citation needed, Climatic Research Unit, cognitive dissonance, complexity theory, coronavirus, correlation does not imply causation, COVID-19, Covid-19, crowdsourcing, deindustrialization, Donald Trump, double helix, en.wikipedia.org, epigenetics, Estimating the Reproducibility of Psychological Science, Growth in a Time of Debt, Kenneth Rogoff, l'esprit de l'escalier, meta analysis, meta-analysis, microbiome, Milgram experiment, mouse model, New Journalism, p-value, phenotype, placebo effect, profit motive, publication bias, publish or perish, race to the bottom, randomized controlled trial, recommendation engine, rent-seeking, replication crisis, Richard Thaler, risk tolerance, Ronald Reagan, Scientific racism, selection bias, Silicon Valley, Silicon Valley startup, Stanford prison experiment, statistical model, stem cell, Steven Pinker, Thomas Bayes, twin studies, University of East Anglia

In practice, and especially in many cases of p-hacking, where the same variables are being used over and over again, the increase in the false-positive rate as a function of the number of tests won’t be quite as severe – but it’ll still get higher and higher, so the same principle applies. 52.  I should also say that there are a whole host of ways to adjust your p-value threshold if you’ve calculated a lot of them – you might only accept p-values that fall below 0.01 as significant instead of 0.05, for example. The problem is that most researchers forget to do this – or when they’re p-hacking, they don’t feel like they’ve really done so many tests, even if they have. There’s also the interesting philosophical question of how many p-values a scientist should be correcting for. Every p-value they’ve calculated in that specific paper? Every p-value they’ve calculated while researching that topic? Every p-value they’ve calculated in their entire career? What about all the p-values they might calculate in future? As with all interesting philosophical questions, there’s no simple answer.
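A small sketch of why the threshold matters when many tests are run; this is my illustration, not from the book, and it assumes independent tests, which, as noted above, overstates the inflation when the same variables are reused.

alpha = 0.05
for m in (1, 5, 20, 100):
    family_wise_error = 1 - (1 - alpha) ** m   # chance of at least one false positive
    bonferroni_threshold = alpha / m           # adjusted per-test cutoff
    print(f"{m:>3} tests: P(any false positive) = {family_wise_error:.2f}, "
          f"Bonferroni cutoff = {bonferroni_threshold:.4f}")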

Essentially all of them are done these days by feeding your data into computer software. When you run one of these programs, its output will include, alongside many other useful numbers, the relevant p-value.15 Despite being one of the most commonly used statistics in science, the p-value has a notoriously tricky definition. A recent audit found that a stunning 89 per cent of a sample of introductory psychology textbooks got the definition wrong; I’ll try to avoid making the same mistake here.16 The p-value is the probability that your results would look the way they look, or would seem to show an even bigger effect, if the effect you’re interested in weren’t actually present.17 Notably, the p-value doesn’t tell you the probability that your result is true (whatever that might mean), nor how important it is. It just answers the question: ‘in a world where your hypothesis isn’t true, how likely is it that pure noise would give you results like the ones you have, or ones with an even larger effect?’
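As a hedged illustration of that point (my own example, not from the book; the two groups of numbers are invented), standard software such as SciPy reports the p-value alongside the test statistic:

from scipy import stats

group_a = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.3, 5.7]
group_b = [5.0, 4.8, 5.1, 4.7, 5.2, 4.9, 5.0, 5.3]

result = stats.ttest_ind(group_a, group_b)
print("t statistic:", result.statistic)
print("p-value:    ", result.pvalue)   # probability of a difference this large
                                       # (or larger) if there were no real effect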

Bayarri, ‘Confusion Over Measures of Evidence (p’s) Versus Errors (α’s) in Classical Statistical Testing’, American Statistician 57, no. 3 (Aug. 2003): pp. 171–78; https://doi.org/10.1198/0003130031856 17.  For the American Statistical Association’s consensus position on p-values, written surprisingly comprehensibly, see Ronald L. Wasserstein & Nicole A. Lazar, ‘The ASA Statement on p-Values: Context, Process, and Purpose’, The American Statistician 70, no. 2 (2 April 2016): pp. 129–33; https://doi.org/10.1080/00031305.2016.1154108. It defines the p-value like this: ‘the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value’: p. 131. 18.  Why does the definition of the p-value (‘how likely is it that pure noise would give you results like the ones you have, or ones with an even larger effect’) have that ‘or an even larger effect’ clause in it?


Beginning R: The Statistical Programming Language by Mark Gardener

correlation coefficient, distributed generation, natural language processing, New Urbanism, p-value, statistical model

You might prefer to display the values as whole numbers, and you can adjust the output “on the fly” by using the round() command to choose how many decimal places to display, like so:

> round(bird.cs$exp, 0)
              Garden Hedgerow Parkland Pasture Woodland
Blackbird         60       11       24       4        2
Chaffinch         17        3        7       1        1
Great Tit         40        7       16       3        1
House Sparrow     44        8       17       3        2
Robin              8        2        3       1        0
Song Thrush        6        1        2       0        0

In this instance you chose to use no decimals at all and so use 0 as an instruction in the round() command.

Monte Carlo Simulation
You can decide to determine the p-value by a slightly different method and can use a Monte Carlo simulation to do this. You add an extra instruction to the chisq.test() command, simulate.p.value = TRUE, like so:

> chisq.test(bird.df, simulate.p.value = TRUE, B = 2500)

        Pearson's Chi-squared test with simulated p-value (based on 2500 replicates)

data:  bird.df
X-squared = 78.2736, df = NA, p-value = 0.0003998

The default is that simulate.p.value = FALSE and that B = 2000. The latter is the number of replicates to use in the Monte Carlo test, which is set to 2500 for this example.

Yates’ Correction for 2 × 2 Tables
When you have a 2 × 2 contingency table it is common to apply the Yates’ correction.

Now run the chi-squared test again but this time use a Monte Carlo simulation with 3000 replicates to determine the p-value:

> (bees.cs = chisq.test(bees, simulate.p.value = TRUE, B = 3000))

        Pearson's Chi-squared test with simulated p-value (based on 3000 replicates)

data:  bees
X-squared = 120.6531, df = NA, p-value = 0.0003332

4. Look at a portion of the data as a 2 × 2 contingency table. Examine the effect of Yates’ correction on this subset:

> bees[1:2, 4:5]
               Honey.bee Carder.bee
Thistle               12          8
Vipers.bugloss        13         27

> chisq.test(bees[1:2, 4:5], correct = FALSE)

        Pearson's Chi-squared test

data:  bees[1:2, 4:5]
X-squared = 4.1486, df = 1, p-value = 0.04167

> chisq.test(bees[1:2, 4:5], correct = TRUE)

        Pearson's Chi-squared test with Yates' continuity correction

data:  bees[1:2, 4:5]
X-squared = 3.0943, df = 1, p-value = 0.07857

5. Look at the last two columns, representing two bee species. Carry out a goodness of fit test to determine if the proportions of visits are the same:

> with(bees, chisq.test(Honey.bee, p = Carder.bee, rescale = T))

        Chi-squared test for given probabilities

data:  Honey.bee
X-squared = 58.088, df = 4, p-value = 7.313e-12

Warning message:
In chisq.test(Honey.bee, p = Carder.bee, rescale = T) :
  Chi-squared approximation may be incorrect

6. Carry out the same goodness of fit test but use a simulation to determine the p-value (you can abbreviate the command):

> with(bees, chisq.test(Honey.bee, p = Carder.bee, rescale = T, sim = T))

        Chi-squared test for given probabilities with simulated p-value (based on 2000 replicates)

data:  Honey.bee
X-squared = 58.088, df = NA, p-value = 0.0004998

7. Now look at a single column and carry out a goodness of fit test. This time omit the p = instruction to test the fit to equal probabilities:

> chisq.test(bees$Honey.bee)

        Chi-squared test for given probabilities

data:  bees$Honey.bee
X-squared = 2.5, df = 4, p-value = 0.6446

How It Works
The basic form of the chisq.test() command will operate on a matrix or data frame.


Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals by David Aronson

Albert Einstein, Andrew Wiles, asset allocation, availability heuristic, backtesting, Black Swan, butter production in bangladesh, buy and hold, capital asset pricing model, cognitive dissonance, compound rate of return, computerized trading, Daniel Kahneman / Amos Tversky, distributed generation, Elliott wave, en.wikipedia.org, feminist movement, hindsight bias, index fund, invention of the telescope, invisible hand, Long Term Capital Management, mental accounting, meta analysis, meta-analysis, p-value, pattern recognition, Paul Samuelson, Ponzi scheme, price anchoring, price stability, quantitative trading / quantitative finance, Ralph Nelson Elliott, random walk, retrograde motion, revision control, risk tolerance, risk-adjusted returns, riskless arbitrage, Robert Shiller, Robert Shiller, Sharpe ratio, short selling, source of truth, statistical model, stocks for the long run, systematic trading, the scientific method, transfer pricing, unbiased observer, yield curve, Yogi Berra

The value 0.10 is the sample statistic’s p-value. This fact is equivalent to saying that if the rule’s true return were zero, there is a 0.10 probability that its return in a back test would attain a value as high as +3.5 percent or higher due to sampling variability (chance). This is illustrated in Figure 5.9. p-value, Statistical Significance, and Rejecting the Null Hypothesis A second name for the p-value of the test statistic is the statistical significance of the test. The smaller the p-value, the more statistically significant the test result. A statistically significant result is one for which the p-value is low enough to warrant a rejection of H0. The smaller the p-value of a test statistic, the more confident we can be that a rejection of the null hypothesis is a correct decision. The p-value can be looked upon as the degree to which the observed value of the test statistic conforms to the null hypothesis (H0).

Said differently, a conditional probability is a probability that is conditional upon some other fact being true. In a hypothesis test, this conditional probability is given the special name p-value. Specifically, it is the probability that the observed value of the test statistic could have occurred conditioned upon (given that) the hypothesis being tested (H0) is true. The smaller the p-value, the greater is our justification for calling into question the truth of H0. If the p-value is less than a threshold, which must be defined before the test is carried out, H0 is rejected and HA accepted. The p-value can also be interpreted as the probability H0 will be erroneously rejected when H0 is in fact true. P-value also has a graphical interpretation. It is equal to the fraction of the sampling distribution’s total area that lies at values equal to and greater than the observed value of the test statistic.

The p-value can be looked upon as the degree to which the observed value of the test statistic conforms to the null hypothesis (H0). Larger p-values mean greater conformity, and smaller values mean less conformity. This is simply another way of saying that the more surprising (improbable) an observation is in relation to a given view of the world (the hypothesis), the more likely it is that world view is false. How small does the p-value need to be to justify a rejection of the H0? This is problem specific and relates to the cost that would be incurred by an erroneous rejection. We will deal with the matter of errors and their costs in a moment. However, there are some standards that are commonly used.

[FIGURE 5.9 P-Value: fractional area of sampling distribution greater than +3.5%; the conditional probability of +3.5% or more given that H0 is true, here an area of 0.10 of the total sampling distribution.]
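A minimal Monte Carlo sketch of this graphical interpretation; it is my illustration, not the book's, and the sample size and per-trade volatility below are hypothetical numbers chosen only so that the tail area comes out near the 0.10 of Figure 5.9.

import numpy as np

rng = np.random.default_rng(0)

observed_mean_return = 3.5    # percent, the back-tested rule's mean return
n_trades = 100                # hypothetical sample size
noise_sd = 27.0               # hypothetical per-trade volatility (percent)

# Sampling distribution of the mean return under H0 (true mean return = 0)
null_means = rng.normal(0.0, noise_sd, size=(100_000, n_trades)).mean(axis=1)

# p-value = fraction of the null sampling distribution at or above the observed value
p_value = (null_means >= observed_mean_return).mean()
print(f"p-value ≈ {p_value:.3f}")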


pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu

algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, buy and hold, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

However, we could clearly see them in the figures obtained using the DFA method.

[FIGURE 6.5 Analysis results for EEM index using the entire period available (2003–2009): cumulative distributions of normalized returns for T = 1, 4, 8, 16, DFA analysis (α = 0.74338), and Hurst analysis (H = 0.57794).]

[FIGURE 6.6 Analysis results for S&P 500 index using the entire period available (2001–2009): cumulative distributions of normalized returns for T = 1, 4, 8, 16, DFA analysis (α = 0.67073), and Hurst analysis (H = 0.56657).]

[FIGURE 6.7 Several normality tests for 2003 using the three indices (EFA, S&P 500, EEM): Anderson–Darling, Kolmogorov–Smirnov, and Ryan–Joiner normal probability plots with the associated P-values.]

[TABLE 6.10 Dow Jones Index and its Components: p-Value of the ADF and PP Tests of Unit Root, H Exponent and α Exponent Calculated Using R/S and DFA Analysis for All Components and Index.]

It is worth mentioning that while the stationarity tests reject the presence of the unit root in the characteristic polynomial, that does not necessarily mean that the data is stationary, only that the particular type of nonstationarity indicated by a unit root is not present.

[FIGURE 6.1 Plot of the empirical CDF of the returns for Stock 1. (a) The image contains the original CDF. (b) The image is the same empirical CDF but rescaled so that the discontinuities are clearly seen.]

TABLE 6.1 DFA and Hurst Analysis: for each of the 26 stocks the table reports the p-values of the ADF, PP, and KPSS tests together with the estimated DFA and Hurst exponents. The ADF and PP p-values are below 0.01 for every stock; the KPSS p-values are above 0.1 for every stock except Stock 10 (0.07686), Stock 18 (0.076), and Stock 25 (0.02718). Abbreviations: ADF, augmented Dickey–Fuller test for unit-root stationarity; PP, Phillips–Perron unit-root test; KPSS, Kwiatkowski–Phillips–Schmidt–Shin test for unit-root stationarity; DFA, detrended fluctuation analysis; Hurst, rescaled range analysis.



pages: 589 words: 69,193

Mastering Pandas by Femi Anthony

Amazon Web Services, Bayesian statistics, correlation coefficient, correlation does not imply causation, Debian, en.wikipedia.org, Internet of things, natural language processing, p-value, random walk, side project, statistical model, Thomas Bayes

In more formal terms, we would normally define a threshold or alpha value and reject the null hypothesis if the p-value ≤ α, or fail to reject it otherwise. The typical values for α are 0.05 or 0.01. The following list explains how different p-values are commonly interpreted:

p-value < 0.01: there is VERY strong evidence against H0
0.01 < p-value < 0.05: there is strong evidence against H0
0.05 < p-value < 0.1: there is weak evidence against H0
p-value > 0.1: there is little or no evidence against H0

Therefore, in this case, we would reject the null hypothesis, give credence to Intelligenza's claim, and state that their claim is highly significant. The evidence against the null hypothesis in this case is significant. There are two methods that we use to determine whether to reject the null hypothesis: the p-value approach and the rejection region approach. The approach that we used in the preceding example was the latter one.

The alpha and p-values
In order to conduct an experiment to decide for or against our null hypothesis, we need to come up with an approach that will enable us to make the decision in a concrete and measurable way. To do this test of significance, we have to consider two numbers—the p-value of the test statistic and the threshold level of significance, which is also known as alpha. The p-value is the probability that the result we observe, assuming that the null hypothesis is true, occurred by chance alone. The p-value can also be thought of as the probability of obtaining a test statistic as extreme as or more extreme than the actual obtained test statistic, given that the null hypothesis is true. The alpha value is the threshold value against which we compare p-values. This gives us a cut-off point in order to accept or reject the null hypothesis.

In general, the rule is as follows: If the p-value is less than or equal to alpha (p ≤ .05), then we reject the null hypothesis and state that the result is statistically significant. If the p-value is greater than alpha (p > .05), then we have failed to reject the null hypothesis, and we say that the result is not statistically significant. The seemingly arbitrary values of alpha in common use are one of the shortcomings of the frequentist methodology, and there are many questions concerning this approach. The following article in the Nature journal highlights some of the problems: http://www.nature.com/news/scientific-method-statistical-errors-1.14700. For more details on this topic, refer to:

http://statistics.about.com/od/Inferential-Statistics/a/What-Is-The-Difference-Between-Alpha-And-P-Values.htm
http://bit.ly/1GzYX1P
http://en.wikipedia.org/wiki/P-value

Type I and Type II errors
There are two types of errors, as explained here: Type I Error: In this type of error, we reject H0 when in fact H0 is true.
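A short illustration of the decision rule described above; this is my own example, not the book's: the sample values are invented and the use of SciPy's one-sample t-test is an assumption made purely for demonstration.

from scipy import stats

alpha = 0.05
sample = [2.3, 2.9, 3.1, 2.7, 3.4, 2.8, 3.0, 3.2, 2.6, 3.3]

# H0: the population mean equals 2.5
t_stat, p_value = stats.ttest_1samp(sample, popmean=2.5)

print(f"p-value = {p_value:.4f}")
if p_value <= alpha:
    print("Reject H0: the result is statistically significant at alpha =", alpha)
else:
    print("Fail to reject H0: the result is not statistically significant")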


pages: 579 words: 76,657

Data Science from Scratch: First Principles with Python by Joel Grus

correlation does not imply causation, natural language processing, Netflix Prize, p-value, Paul Graham, recommendation engine, SpamAssassin, statistical model

One way to convince yourself that this is a sensible estimate is with a simulation:

extreme_value_count = 0
for _ in range(100000):
    num_heads = sum(1 if random.random() < 0.5 else 0    # count # of heads
                    for _ in range(1000))                # in 1000 flips
    if num_heads >= 530 or num_heads <= 470:             # and count how often
        extreme_value_count += 1                         # the # is 'extreme'

print extreme_value_count / 100000                       # 0.062

Since the p-value is greater than our 5% significance, we don’t reject the null. If we instead saw 532 heads, the p-value would be:

two_sided_p_value(531.5, mu_0, sigma_0)   # 0.0463

which is smaller than the 5% significance, which means we would reject the null. It’s the exact same test as before. It’s just a different way of approaching the statistics. Similarly, we would have:

upper_p_value = normal_probability_above
lower_p_value = normal_probability_below

For our one-sided test, if we saw 525 heads we would compute:

upper_p_value(524.5, mu_0, sigma_0)   # 0.061

which means we wouldn’t reject the null. If we saw 527 heads, the computation would be:

upper_p_value(526.5, mu_0, sigma_0)   # 0.047

and we would reject the null.

Warning
Make sure your data is roughly normally distributed before using normal_probability_above to compute p-values.

In a situation like this, where n is much larger than k, we can use normal_cdf and still feel good about ourselves:

def p_value(beta_hat_j, sigma_hat_j):
    if beta_hat_j > 0:
        # if the coefficient is positive, we need to compute twice the
        # probability of seeing an even *larger* value
        return 2 * (1 - normal_cdf(beta_hat_j / sigma_hat_j))
    else:
        # otherwise twice the probability of seeing a *smaller* value
        return 2 * normal_cdf(beta_hat_j / sigma_hat_j)

p_value(30.63, 1.174)    # ~0 (constant term)
p_value(0.972, 0.079)    # ~0 (num_friends)
p_value(-1.868, 0.131)   # ~0 (work_hours)
p_value(0.911, 0.990)    # 0.36 (phd)

(In a situation not like this, we would probably be using statistical software that knows how to compute the t-distribution, as well as how to compute the exact standard errors.) While most of the coefficients have very small p-values (suggesting that they are indeed nonzero), the coefficient for “PhD” is not “significantly” different from zero, which makes it likely that the coefficient for “PhD” is random rather than meaningful. In more elaborate regression scenarios, you sometimes want to test more elaborate hypotheses about the data, such as “at least one of the βj is non-zero” or “β1 equals β2 and β3 equals β4,” which you can do with an F-test, which, alas, falls outside the scope of this book.

So a 5%-significance test involves using normal_probability_below to find the cutoff below which 95% of the probability lies:

hi = normal_upper_bound(0.95, mu_0, sigma_0)
# is 526 (< 531, since we need more probability in the upper tail)

type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability     # 0.936

This is a more powerful test, since it no longer rejects when X is below 469 (which is very unlikely to happen if H1 is true) and instead rejects when X is between 526 and 531 (which is somewhat likely to happen if H1 is true).

p-values
An alternative way of thinking about the preceding test involves p-values. Instead of choosing bounds based on some probability cutoff, we compute the probability — assuming H0 is true — that we would see a value at least as extreme as the one we actually observed. For our two-sided test of whether the coin is fair, we compute:

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        # if x is greater than the mean, the tail is what's greater than x
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        # if x is less than the mean, the tail is what's less than x
        return 2 * normal_probability_below(x, mu, sigma)

If we were to see 530 heads, we would compute:

two_sided_p_value(529.5, mu_0, sigma_0)   # 0.062

Note
Why did we use 529.5 instead of 530?


pages: 250 words: 64,011

Everydata: The Misinformation Hidden in the Little Data You Consume Every Day by John H. Johnson

Affordable Care Act / Obamacare, Black Swan, business intelligence, Carmen Reinhart, cognitive bias, correlation does not imply causation, Daniel Kahneman / Amos Tversky, Donald Trump, en.wikipedia.org, Kenneth Rogoff, labor-force participation, lake wobegon effect, Long Term Capital Management, Mercator projection, Mercator projection distort size, especially Greenland and Africa, meta analysis, meta-analysis, Nate Silver, obamacare, p-value, PageRank, pattern recognition, publication bias, QR code, randomized controlled trial, risk-adjusted returns, Ronald Reagan, selection bias, statistical model, The Signal and the Noise by Nate Silver, Thomas Bayes, Tim Cook: Apple, wikimedia commons, Yogi Berra

It’s a measure of how probable it is that the effect we’re seeing is real (rather than due to chance occurrence), which is why it’s typically measured with a p-value. P, in this case, stands for probability. If you accept p-values as a measure of statistical significance, then the lower your p-value is, the less likely it is that the results you’re seeing are due to chance alone.17 One oft-accepted measure of statistical significance is a p-value of less than .05 (which equates to 5 percent probability). The widespread use of this threshold goes back to the 1920s, when it was popularized by Ronald Fisher, a mathematician who studied the effect of fertilizer on crops, among other things.18 Now, we’re not here to debate whether a p-value of .05 is an appropriate standard for statistical significance, or even whether p-values themselves are the right way to determine statistical significance.19 Instead, we’re here to tell you that p-values—including the .05 threshold—are the standard in many applications.

The widespread use of this threshold goes back to the 1920s, when it was popularized by Ronald Fisher, a mathematician who studied the effect of fertilizer on crops, among other things.18 Now, we’re not here to debate whether a p-value of .05 is an appropriate standard for statistical significance, or even whether p-values themselves are the right way to determine statistical significance.19 Instead, we’re here to tell you that p-values—including the .05 threshold—are the standard in many applications. And that’s why they matter to you. Because when you see an article about the latest scientific discovery, it’s quite likely that it has only been accepted by the scientific community—and reported by the media—because it has a p-value below .05. It may seem somewhat arbitrary, but, as Derek Daniels, PhD (an associate professor at the University at Buffalo) told us, “having a line allows us to stay objective. If there’s no line, then we make a big deal out of a p-value of 0.06 when it helps us, and we ignore a p-value of 0.04 when it hurts us.”20 TAKE A DEEP BREATH Now let’s go back to the secondhand smoke study, and see what the research actually said—that passive smoking “did not statistically significantly increase lung cancer risk.”

…a horse’s statistical odds of winning a race might be ⅓, which means it is probable that the horse will win one out of every three races; in betting jargon, the odds are typically the reverse, so this same horse would have 2–1 odds against, which means it has a ⅔ chance of losing)

Omitted variable—A variable that plays a role in a relationship, but may be overlooked or otherwise not included; omitted variables are one of the primary reasons why correlation doesn’t equal causation

Outlier—A particular observation that doesn’t fit; it may be much higher (or lower) than all the other data, or perhaps it just doesn’t fall into the pattern of everything else that you’re seeing

P-hacking—Named after p-values, p-hacking is a term for the practice of repeatedly analyzing data, trying to find ways to make nonsignificant results significant

P-value—A way to measure statistical significance; the lower your p-value is, the less likely it is that the results you’re seeing are due to chance

Population—The entire set of data or observations that you want to study and draw inferences about; statisticians rarely have the ability to look at the entire population in a study, although it could be possible with a small, well-defined group (e.g., the voting habits of all 100 U.S. senators)

Prediction—See forecast

Prediction error—A way to measure uncertainty in the future, essentially by comparing the predicted results to the actual outcomes, once they occur

Prediction interval—The range in which we expect to see the next data point

Probabilistic forecast—A forecast where you determine the probability of an outcome (e.g., there is a 30 percent chance of thunderstorms tomorrow)

Probability—The likelihood (typically expressed as a percentage, fraction, or decimal) that an outcome will occur

Proxy—A factor that you believe is closely related (but not identical) to another difficult-to-measure factor (e.g., IQ is a proxy for innate ability)

Random—When an observed pattern is due to chance, rather than some observable process or event

Risk—A term that can mean different things to different people; in general, risk takes into account not only the probability of an event, but also the consequences

Sample—Part of the full population (e.g., the set of Challenger launches with O-ring failures)

Sample selection—A potential statistical problem that arises when the way a sample has been chosen is directly related to the outcomes one is studying; also, sometimes used to describe the process of determining a sample from a population

Sampling error—The uncertainty of not knowing if a sample represents the true value in the population or not

Selection bias—A potential concern when a sample is comprised of those who chose to participate, a factor which may bias the results

Spurious correlation—A statistical relationship between two factors that has no practical or economic meaning, or one that is driven by an omitted variable (e.g., the relationship between murder rates and ice cream consumption)

Statistic—A numeric measure that describes an aspect of the data (e.g., a mean, a median, a mode)

Statistical impact—Having a statistically significant effect of some undetermined size

Statistical significance—A probability-based method to determine whether an observed effect is truly present in the data, or just due to random chance

Summary statistic—Metric that provides information about one or more aspects of the data; averages and aggregated data are two examples of summary statistics

Weighted average—An average calculated by assigning each value a weight (based on the value’s relative importance)


pages: 50 words: 13,399

The Elements of Data Analytic Style by Jeff Leek

correlation does not imply causation, Netflix Prize, p-value, pattern recognition, Ronald Coase, statistical model

This chapter builds on and expands the book author’s data sharing guide.

12.8 Common mistakes
12.8.1 Not using a script for your analysis
If you describe your analysis in written documentation, it is much easier to make mistakes of reproducibility.
12.8.2 Not recording version numbers or parameters used
It is important to record: (1) the type of computer used, (2) the version of all software used, and (3) all parameters you used when performing an analysis.
12.8.3 Not sharing data or code
For every analysis you perform you should include a link to the code and data you used to perform the analysis.
12.8.4 Using reproducibility as a weapon
If you reproduce someone else’s analysis and identify a problem, bug or mistake, you should contact them and try to help them resolve the problem rather than pointing the problem out publicly or humiliating them.

13. A few matters of form
Report estimates followed by parentheses, for example: The increase is 5.3 units (95% CI: 3.1, 4.3 units). When reporting P-values, do not report numbers below machine precision. P-values less than 2 × 10^-16 are generally below machine precision and inaccurate. Reporting a P-value of 1.35 × 10^-25 is effectively reporting a P-value of 0, and caution should be urged. A common approach is to report censored P-values such as P < 1 × 10^-8. When reporting permutation P-values, avoid reporting a value of zero. P-values should be calculated as (K + 1)/(B + 1), where B is the number of permutations and K is the number of times the null statistic is more extreme than the upper bound. Do not report estimates with over-precision.
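The (K + 1)/(B + 1) rule quoted above is straightforward to apply in code. The following is a hedged sketch, not from the book: the two-group data, the difference-in-means statistic, and the variable names are invented, and "more extreme" is interpreted here as a larger absolute difference than the one observed.

import numpy as np

rng = np.random.default_rng(1)

group_a = np.array([3.1, 2.8, 3.6, 3.3, 3.0, 3.4])
group_b = np.array([2.5, 2.7, 2.4, 2.9, 2.6, 2.3])
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
B = 10_000                       # number of permutations
K = 0                            # times the null statistic is at least as extreme
for _ in range(B):
    shuffled = rng.permutation(pooled)
    null_stat = shuffled[:6].mean() - shuffled[6:].mean()
    if abs(null_stat) >= abs(observed):
        K += 1

p_value = (K + 1) / (B + 1)      # never exactly zero, per the guidance above
print(f"permutation p-value = {p_value:.4f}")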

In general, the bigger the sample size the better, and sample size and data size aren’t always tightly correlated.

6.12 Common errors
6.12.1 Failing to account for dependencies
If data are measured across time or across space, they will likely be dependent. Before performing inference, each variable should be plotted versus time to detect dependencies, and similarly for space. Similarly, identifying potential confounders should occur before model fitting.
6.12.2 Focusing on p-values over confidence intervals
P-values can be a useful measure of statistical significance if used properly. However, a p-value alone is not sufficient for any convincing analysis. A measure of inference on a scientific scale (such as confidence intervals or credible intervals) should be reported and interpreted with every p-value.
6.12.3 Inference without exploration
A very common mistake is to move directly to model fitting and calculation of statistical significance. Before these steps, it is critical to tidy, check, and explore the data to identify dataset-specific conditions that may violate your model assumptions.
6.12.4 Assuming the statistical model fit is good
Once a statistical model is fit to data, it is critical to evaluate how well the model describes the data.
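As a concrete illustration of reporting a confidence interval alongside a p-value (point 6.12.2 above), here is a minimal sketch; the sample data are invented and the use of SciPy's one-sample t-test and t-interval is my own choice, not the book's code.

import numpy as np
from scipy import stats

sample = np.array([5.3, 4.9, 5.8, 6.1, 5.5, 5.0, 5.7, 6.0, 5.4, 5.9])

mean = sample.mean()
sem = stats.sem(sample)                          # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)   # H0: mean = 5.0

# Report the estimate and interval first, then the p-value.
print(f"mean = {mean:.2f} units (95% CI: {ci_low:.2f}, {ci_high:.2f}), p = {p_value:.3f}")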


Analysis of Financial Time Series by Ruey S. Tsay

Asian financial crisis, asset allocation, Bayesian statistics, Black-Scholes formula, Brownian motion, business cycle, capital asset pricing model, compound rate of return, correlation coefficient, data acquisition, discrete time, frictionless, frictionless market, implied volatility, index arbitrage, Long Term Capital Management, market microstructure, martingale, p-value, pattern recognition, random walk, risk tolerance, short selling, statistical model, stochastic process, stochastic volatility, telemarketer, transaction costs, value at risk, volatility smile, Wiener process, yield curve

The Ljung–Box statistics of the standardized shocks give Q(10) = 13.66 with p value 0.19, confirming that the mean equation is adequate. However, the Ljung–Box statistics for the squared standardized shocks show Q(10) = 23.83 with p value 0.008. The volatility equation is inadequate at the 5% level. We refine the model by considering an ARCH(2) model and obtain

r_t = 0.0225 + a_t,    σ_t^2 = 0.0113 + 0.226 a_{t−1}^2 + 0.108 a_{t−2}^2,    (3.12)

where the standard errors of the parameters are 0.006, 0.002, 0.135, and 0.094, respectively. The coefficient of a_{t−1}^2 is marginally significant at the 10% level, but that of a_{t−2}^2 is only slightly greater than its standard error. The Ljung–Box statistics for the squared standardized shocks give Q(10) = 8.82 with p value 0.55. Consequently, the fitted ARCH(2) model appears to be adequate.
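For readers who want to reproduce this kind of adequacy check, here is a hedged sketch, not the book's own code, of the Ljung–Box statistic Q(m) = n(n+2) Σ_{k=1..m} ρ̂_k²/(n−k) referred to a chi-squared distribution to obtain the p-value; the function name and the simulated series are mine.

import numpy as np
from scipy.stats import chi2

def ljung_box(x, m, fitted_params=0):
    """Ljung-Box Q(m) statistic and p-value for a series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, m + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k sample autocorrelation
        q += rho_k ** 2 / (n - k)
    q *= n * (n + 2)
    df = m - fitted_params                       # reduce df for fitted ARMA terms
    return q, chi2.sf(q, df)

# Example on simulated white noise; a large p-value indicates no evidence of
# remaining serial correlation, as in the adequacy checks quoted above.
rng = np.random.default_rng(3)
shocks = rng.standard_normal(500)
q10, p10 = ljung_box(shocks, m=10)
print(f"Q(10) = {q10:.2f}, p-value = {p10:.3f}")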

The two sample ACFs are very close to each other, and they suggest that the serial correlations of monthly IBM stock returns are very small, if any. The sample ACFs are all within their two standard-error limits, indicating that they are not significant at the 5% level. In addition, for the simple returns, the Ljung–Box statistics give Q(5) = 5.4 and Q(10) = 14.1, which correspond to p values of 0.37 and 0.17, respectively, based on chi-squared distributions with 5 and 10 degrees of freedom. For the log returns, we have Q(5) = 5.8 and Q(10) = 13.7 with p values of 0.33 and 0.19, respectively. The joint tests confirm that monthly IBM stock returns have no significant serial correlations. Figure 2.2 shows the same for the monthly returns of the value-weighted index from the Center for Research in Security Prices (CRSP), University of Chicago. There are some significant serial correlations at the 5% level for both return series.
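The quoted p values follow directly from referring the Q statistics to chi-squared distributions with 5 and 10 degrees of freedom; a quick check of this mapping (my sketch, using SciPy rather than the author's software):

from scipy.stats import chi2

for q, df in [(5.4, 5), (14.1, 10), (5.8, 5), (13.7, 10)]:
    print(f"Q = {q:>5} on {df:>2} df -> p value = {chi2.sf(q, df):.2f}")
# -> approximately 0.37, 0.17, 0.33, 0.19, matching the values in the text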

If a fitted model is found to be inadequate, it must be refined. Consider the residual series of the fitted AR(3) model for the monthly value-weighted simple returns. We have Q(10) = 15.8 with p value 0.027 based on its asymptotic chi-squared distribution with 7 degrees of freedom. Thus, the null hypothesis of no residual serial correlation in the first 10 lags is rejected at the 5% level, but not at the 1% level. If the model is refined to an AR(5) model, then we have

r_t = 0.0092 + 0.107 r_{t−1} − 0.001 r_{t−2} − 0.123 r_{t−3} + 0.028 r_{t−4} + 0.069 r_{t−5} + â_t,

with σ̂_a = 0.054. The AR coefficients at lags 1, 3, and 5 are significant at the 5% level. The Ljung–Box statistics give Q(10) = 11.2 with p value 0.048. This model shows some improvements and appears to be marginally adequate at the 5% significance level. The mean of r_t based on the refined model is also very close to 0.01, showing that the two models have similar long-term implications.

2.4.3 Forecasting
Forecasting is an important application of time series analysis.


Statistics in a Nutshell by Sarah Boslaugh

Antoine Gombaud: Chevalier de Méré, Bayesian statistics, business climate, computer age, correlation coefficient, experimental subject, Florence Nightingale: pie chart, income per capita, iterative process, job satisfaction, labor-force participation, linear programming, longitudinal study, meta analysis, meta-analysis, p-value, pattern recognition, placebo effect, probability theory / Blaise Pascal / Pierre de Fermat, publication bias, purchasing power parity, randomized controlled trial, selection bias, six sigma, statistical model, The Design of Experiments, the scientific method, Thomas Bayes, Vilfredo Pareto

Electric, Quality Improvement General Linear Model (GLM), The General Linear Model–The General Linear Model, Linear Regression–Assumptions, Analysis of Variance (ANOVA)–Post Hoc Tests, Factorial ANOVA–Three-Way ANOVA, ANCOVA–ANCOVA, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal, Logistic Regression–Converting Logits to Probabilities, Multinomial Logistic Regression–Multinomial Logistic Regression, Polynomial Regression–Polynomial Regression, Polynomial Regression–Polynomial Regression, Polynomial Regression–Polynomial Regression, Overfitting–Overfitting, Ingredients of a Good Design about, The General Linear Model–The General Linear Model Analysis of Covariance (ANCOVA), ANCOVA–ANCOVA Analysis of Variance (ANOVA), Analysis of Variance (ANOVA)–Post Hoc Tests arbitrary curve-fitting, Overfitting–Overfitting cubic regression model, Polynomial Regression–Polynomial Regression factorial ANOVA, Factorial ANOVA–Three-Way ANOVA linear regression, Linear Regression–Assumptions logistic regression, Logistic Regression–Converting Logits to Probabilities multinomial logistic regression, Multinomial Logistic Regression–Multinomial Logistic Regression multiple linear regression, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal about, Multiple Regression Models–Multiple Regression Models adding interaction term, Multiple Regression Models–Multiple Regression Models assumptions, Multiple Regression Models creating a correlation matrix, Multiple Regression Models–Multiple Regression Models dummy variables, Dummy Variables–Dummy Variables methods for building regression models, Methods for Building Regression Models–Backward removal modeling principles, Multiple Regression Models–Multiple Regression Models regression equation for data, Multiple Regression Models–Multiple Regression Models results for individual predictors, Multiple Regression Models–Multiple Regression Models standardized coefficients, Multiple Regression Models variables in model, Multiple Regression Models–Multiple Regression Models polynomial regression, Polynomial Regression–Polynomial Regression quadratic regression model, Polynomial Regression–Polynomial Regression research design structured toward, Ingredients of a Good Design general public, writing for, Writing for the General Public–Writing for the General Public, Common Problems–Common Problems glossary of statistical terms, Glossary of Statistical Terms–Glossary of Statistical Terms Goodman and Kruskal’s gamma, Ordinal Variables–Ordinal Variables Gosset, William Sealy, The t Distribution Graduate Record Examination (GRE), Factor 
Analysis graphical methods, Graphic Methods–Frequency Tables, Graphic Methods–Frequency Tables, Frequency Tables–Frequency Tables, Bar Charts–Pie Charts, Pie Charts, Pareto Charts–Pareto Charts, The Stem-and-Leaf Plot, The Boxplot–The Boxplot, The Histogram–Bivariate Charts, Bivariate Charts, Scatterplots–Scatterplots, Line Graphs–Line Graphs, Scatterplots–Relationships Between Continuous Variables about, Graphic Methods–Frequency Tables, Graphic Methods–Frequency Tables bar charts, Bar Charts–Pie Charts bivariate charts, Bivariate Charts, Scatterplots–Scatterplots, Line Graphs–Line Graphs, Scatterplots–Relationships Between Continuous Variables about, Bivariate Charts line graphs, Line Graphs–Line Graphs scatterplots, Scatterplots–Scatterplots, Scatterplots–Relationships Between Continuous Variables boxplot, The Boxplot–The Boxplot frequency tables, Frequency Tables–Frequency Tables histogram, The Histogram–Bivariate Charts Pareto charts, Pareto Charts–Pareto Charts pie charts, Pie Charts stem-and-leaf plot, The Stem-and-Leaf Plot graphical power calculator, Power Analysis graphical presentation of data, critiquing in articles, Graphical Presentation of Data–Graphical Presentation of Data graphing equations, Graphing Equations–Graphing Equations Greek alphabet table, Glossary of Statistical Terms grouped bar chart, Bar Charts grouped data, mean for, The Mean grouped mean, calculating, The Mean Guttman-Kaiser criterion, Factor Analysis, Factor Analysis H high-stakes tests, Measures of Internal Consistency histogram, The Histogram–Bivariate Charts Hotelling’s Canonical Correlation Analysis (CCA), Factor Analysis How to Lie with Statistics (Huff), Line Graphs, Power for the Test of the Difference between Two Sample Means (Independent Samples t-Test) hypothesis testing, Hypothesis Testing, Specifying Response Variables I ICC (Item Characteristic Curve), Item Response Theory identifying treatments and controls, in gathering experimental data, Identifying Treatments and Controls incidence, Prevalence and Incidence–Prevalence and Incidence, Glossary of Statistical Terms Incidence Density (ID), Prevalence and Incidence Incidence Rate (IR), Prevalence and Incidence independent samples (two-sample) t-test, The Independent Samples t-Test–Confidence Interval for the Independent Samples t-Test, Confidence Interval for a Proportion independent trials, Independence independent variables, Independent and Dependent Variables–Independent and Dependent Variables, Glossary of Statistical Terms definition of, Glossary of Statistical Terms dependent variables and, Independent and Dependent Variables–Independent and Dependent Variables index numbers, Index Numbers–Index Numbers, Glossary of Statistical Terms index of discrimination, Item Analysis index of temporal stability, Reliability indirect standardization, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates inferential statistics, Inferential Statistics–Inferential Statistics, Inferential Statistics–Inferential Statistics, Inferential Statistics, Probability Distributions–The Binomial Distribution, Independent and Dependent Variables–Independent and Dependent Variables, Populations and Samples–Probability Sampling, The Central Limit Theorem–The Central Limit Theorem, Hypothesis Testing, Confidence Intervals–Confidence Intervals, p-valuesp-values, The Z-Statistic–The Z-Statistic, Data Transformations–Data Transformations, Populations and Samples, Extrapolation and Trends–Linear regression about, Inferential 
Statistics–Inferential Statistics central limit theorem, The Central Limit Theorem–The Central Limit Theorem confidence intervals, Confidence Intervals–Confidence Intervals data transformations, Data Transformations–Data Transformations hypothesis testing, Hypothesis Testing incorrect use of tests in inferential statistics, Extrapolation and Trends–Linear regression independent variables and dependent variables, Independent and Dependent Variables–Independent and Dependent Variables mean in, Inferential Statistics p-values, p-valuesp-values populations and samples, Populations and Samples–Probability Sampling probability distributions in, Probability Distributions–The Binomial Distribution vs. descriptive statistics, Inferential Statistics–Inferential Statistics, Populations and Samples Z-statistic, The Z-Statistic–The Z-Statistic information bias, Information Bias–Information Bias, Glossary of Statistical Terms information, converting data into, Basic Concepts of Measurement informative censoring, Bias in Sample Selection and Retention interaction effects, Basic Vocabulary interaction variable, Glossary of Statistical Terms intercept, Graphing Equations intermediate response variable, Specifying Response Variables internal consistency, Glossary of Statistical Terms internal consistency reliability, Reliability internal consistency, measures of, Measures of Internal Consistency–Coefficient Alpha interquartile range, The Range and Interquartile Range–The Range and Interquartile Range, Glossary of Statistical Terms interrupted time series, Quasi-Experimental Studies intersection, Intersection, Intersection of independent events, Intersection of nonindependent events of independent events, Intersection of independent events of nonindependent events, Intersection of nonindependent events of simple events, Intersection interval data, Interval Data, Glossary of Statistical Terms about, Interval Data definition of, Glossary of Statistical Terms interval estimates, Confidence Intervals interviewer bias, Information Bias–Information Bias invariant difficulty, Item Response Theory investigations, checklist for statistics based, Quick Checklist–Quick Checklist item analysis, Item Analysis–Item Analysis Item Characteristic Curve (ICC), Item Response Theory item difficulty (signified as p), Test Construction–Test Construction, Item Analysis item discrimination, Item Analysis Item Response Theory (IRT), Item Response Theory–Item Response Theory J joint frequencies, The Chi-Square Test, The Risk Ratio journal clubs, presenting at, Linear regression journals, Writing for a Professional Journal–Writing the Article, Writing for a Professional Journal–The Peer Review Process, The Peer Review Process–The Peer Review Process, Common Problems–Common Problems, Quick Checklist–Quick Checklist, Issues in Research Design–The Power of Coincidence, Descriptive Statistics–Extrapolation and Trends, Extrapolation and Trends–Linear regression checklist for statistics based investigations, Quick Checklist–Quick Checklist common problems in articles, Common Problems–Common Problems critiquing descriptive statistics, Descriptive Statistics–Extrapolation and Trends incorrect use of tests in inferential statistics, Extrapolation and Trends–Linear regression issues in research design, Issues in Research Design–The Power of Coincidence peer review process, The Peer Review Process–The Peer Review Process writing for, Writing for a Professional Journal–Writing the Article, Writing for a Professional Journal–The Peer Review 
Process K Kaiser normalization, Factor Analysis kappa (kappa coefficient), Measures of Agreement–Measures of Agreement Kendall, Maurice, Ordinal Variables–Ordinal Variables Kendall’s tau-a, Ordinal Variables, Ordinal Variables Kendall’s tau-b, Ordinal Variables–Ordinal Variables Kendall’s tau-c, Ordinal Variables Knight, William, binomial distribution probability tables, The Binomial Distribution–The Binomial Distribution Kolmogorov–Smirnov test, Data Transformations–Data Transformations Kruskal-Wallis H test, Kruskal-Wallis H Test–Kruskal-Wallis H Test Kuder-Richardson formulas, Coefficient Alpha, Coefficient Alpha L lag, Time Series large-sample Z test for proportions, Proportions: The Large Sample Case–Proportions: The Large Sample Case Laspeyres index, Index Numbers–Index Numbers Latin square, in experimental design, Blocking and the Latin Square LDFs (Linear Discriminant Functions), Discriminant Function Analysis Levene’s test, Unequal Variance t-Test Likert scale, Exercises, The Likert and Semantic Differential Scales, Glossary of Statistical Terms Likert, Rensis, The Likert and Semantic Differential Scales line graphs, Line Graphs–Line Graphs linear algebra, Relationships Between Continuous Variables Linear Discriminant Functions (LDFs), Discriminant Function Analysis linear equations, The General Linear Model, Graphing Equations linear inequalities, Linear Inequalities–Linear Inequalities linear regression, Linear Regression–Linear Regression, Assumptions–Assumptions, Calculating Simple Regression by Hand–Calculating Simple Regression by Hand, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal, Logistic Regression–Converting Logits to Probabilities, Logistic Regression, Multinomial Logistic Regression–Multinomial Logistic Regression, Polynomial Regression–Polynomial Regression, Polynomial Regression–Polynomial Regression, Polynomial Regression–Polynomial Regression, Overfitting–Overfitting, Linear regression about, Linear Regression–Linear Regression arbitrary curve-fitting, Overfitting–Overfitting assumptions, Assumptions–Assumptions calculating by hand, Calculating Simple Regression by Hand–Calculating Simple Regression by Hand cubic regression model, Polynomial Regression–Polynomial Regression logistic regression, Logistic Regression–Converting Logits to Probabilities logit outcome variable, Logistic Regression multinomial logistic regression, Multinomial Logistic Regression–Multinomial Logistic Regression multiple, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal about, Multiple Regression Models–Multiple Regression Models adding 
interaction term, Multiple Regression Models–Multiple Regression Models assumptions, Multiple Regression Models creating a correlation matrix, Multiple Regression Models–Multiple Regression Models dummy variables, Dummy Variables–Dummy Variables methods for building regression models, Methods for Building Regression Models–Backward removal modeling principles, Multiple Regression Models–Multiple Regression Models regression equation for data, Multiple Regression Models–Multiple Regression Models results for individual predictors, Multiple Regression Models–Multiple Regression Models standardized coefficients, Multiple Regression Models variables in model, Multiple Regression Models–Multiple Regression Models polynomial regression, Polynomial Regression–Polynomial Regression quadratic, Polynomial Regression–Polynomial Regression violations of assumptions of, Linear regression literature review, Writing the Article, Evaluating the Whole Article critiquing in articles, Evaluating the Whole Article writing, Writing the Article Little, Donald B., String and Numeric Data–Missing Data local independence assumption, Item Response Theory logarithms (log), Properties of Roots–Properties of Roots, Solving Equations about, Properties of Roots–Properties of Roots in solving equations, Solving Equations logistic regression, Logistic Regression–Converting Logits to Probabilities logit outcome variable, Logistic Regression, Converting Logits to Probabilities–Converting Logits to Probabilities M Mahalanobis distance, Cluster Analysis main effects, Basic Vocabulary Manhattan distance, Cluster Analysis Mann-Whitney U test, The Wilcoxon Rank Sum Test Mantel-Haenszel (MH) common odds ratio, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio marginal frequencies, The Risk Ratio marginals, The Chi-Square Test matching, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio The Mathematics of Games and Gambling (Packel), Closing Note: The Connection between Statistics and Gambling maturation bias, Blocking and the Latin Square maximax decision making procedure, Minimax, Maximax, and Maximin–Minimax, Maximax, and Maximin, Glossary of Statistical Terms maximin decision making procedure, Minimax, Maximax, and Maximin–Minimax, Maximax, and Maximin, Glossary of Statistical Terms McNemar’s chi-square test, McNemar’s Test for Matched Pairs–McNemar’s Test for Matched Pairs mean, About Formulas, Inferential Statistics, The Mean–The Mean, The Variance and Standard Deviation, Glossary of Statistical Terms definition of, Glossary of Statistical Terms formula for, About Formulas in descriptive statistics, The Mean–The Mean in inferential statistics, Inferential Statistics sum of deviations from, The Variance and Standard Deviation mean rank, The Wilcoxon Rank Sum Test measurement, Basic Concepts of Measurement–Proxy Measurement, Measurement, Random and Systematic Error–Random and Systematic Error, Reliability and Validity–Triangulation about, Measurement random errors vs. 
systematic errors, Random and Systematic Error–Random and Systematic Error reliability and validity, Reliability and Validity–Triangulation types of, Basic Concepts of Measurement–Proxy Measurement measurement bias, types of, Measurement Bias–Information Bias measurement error, Classical Test Theory: The True Score Model measures of central tendency, Inferential Statistics, Measures of Central Tendency, The Mean–The Mean, The Mean–The Mean, The Mean–The Mean, The Median–The Median, The Mode–Comparing the Mean, Median, and Mode, Comparing the Mean, Median, and Mode–Comparing the Mean, Median, and Mode, Measures of Central Tendency–Measures of Central Tendency about, Measures of Central Tendency critiquing choice in article of, Measures of Central Tendency–Measures of Central Tendency in descriptive statistics, The Mean–The Mean, The Mean–The Mean mean, The Mean–The Mean, The Mean–The Mean mean, Inferential Statistics, The Mean–The Mean in descriptive statistics, The Mean–The Mean in inferential statistics, Inferential Statistics median, The Median–The Median, Comparing the Mean, Median, and Mode–Comparing the Mean, Median, and Mode vs. median and mode, Comparing the Mean, Median, and Mode–Comparing the Mean, Median, and Mode mode in, The Mode–Comparing the Mean, Median, and Mode measures of disease frequency, Measures of Disease Frequency measures of dispersion, Measures of Dispersion–Measures of Dispersion, The Range and Interquartile Range–The Range and Interquartile Range, The Variance and Standard Deviation–The Variance and Standard Deviation about, Measures of Dispersion–Measures of Dispersion range and interquartile range, The Range and Interquartile Range–The Range and Interquartile Range variance and standard deviation, The Variance and Standard Deviation–The Variance and Standard Deviation measures of internal consistency, Measures of Internal Consistency–Coefficient Alpha median, The Median–The Median, Glossary of Statistical Terms median test, The Median Test–The Median Test medical and epidemiological statistics, Medical and Epidemiological Statistics, Measures of Disease Frequency, Ratio, Proportion, and Rate–Ratio, Proportion, and Rate, Prevalence and Incidence–Prevalence and Incidence, Prevalence and Incidence–Prevalence and Incidence, Prevalence and Incidence, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates, The Risk Ratio–Attributable Risk, Attributable Risk Percentage, and Number Needed to Treat, The Odds Ratio–The Odds Ratio, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio, Power Analysis–Power Analysis about, Medical and Epidemiological Statistics category-specific rates, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates confounding, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds 
Ratio crude rate, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates incidence, Prevalence and Incidence–Prevalence and Incidence Mantel-Haenszel (MH) common odds ratio, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio measures of disease frequency, Measures of Disease Frequency odds ratio, The Odds Ratio–The Odds Ratio power analysis, Power Analysis–Power Analysis prevalence, Prevalence and Incidence–Prevalence and Incidence, Prevalence and Incidence ratio, proportion, and rate, Ratio, Proportion, and Rate–Ratio, Proportion, and Rate risk ratio, The Risk Ratio–Attributable Risk, Attributable Risk Percentage, and Number Needed to Treat standardization, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates stratified analysis, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio–Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio methods for building regression models, Methods for Building Regression Models–Methods for Building Regression Models, Methods for Building Regression Models–Methods for Building Regression Models, Methods for Building Regression Models automated, Methods for Building Regression Models blocking, Methods for Building Regression Models–Methods for Building Regression Models stepwise, Methods for Building Regression Models–Methods for Building Regression Models about, Methods for Building Regression Models–Methods for Building Regression Models methods section, writing, Writing the Article Microsoft Access, for data management, Spreadsheets and Relational Databases Microsoft Excel, Graphic Methods, Bar Charts, The Rectangular Data File, Spreadsheets and Relational Databases, Microsoft Excel–Microsoft Excel bar charts in, Bar Charts for data management, Spreadsheets and Relational Databases graphing in, Graphic Methods rectangular data file in, The Rectangular Data File using for statistical package, Microsoft Excel–Microsoft Excel minimax decision making procedure, Minimax, Maximax, and Maximin–Minimax, Maximax, and Maximin, Glossary of Statistical Terms Minitab, Minitab–Minitab Minnesota Multiphase Personality Inventory- II (MMPI-II), Standardized Scores misusing statistics, The Misuse of Statistics–The Misuse of Statistics MMPI-II (Minnesota Multiphase Personality Inventory- II), Standardized Scores mode, The Mode, Glossary of Statistical Terms Motorola, Quality Improvement MTMM (multitrait, multimethod matrix), Triangulation multicollinearity, Multiple Regression Models multinomial logistic regression, Multinomial Logistic Regression–Multinomial Logistic Regression multiple linear regression, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal about, Multiple Regression Models–Multiple Regression Models adding interaction term, Multiple Regression Models–Multiple Regression Models assumptions, Multiple Regression Models creating a correlation matrix, Multiple 
Regression Models–Multiple Regression Models dummy variables, Dummy Variables–Dummy Variables methods for building regression models, Methods for Building Regression Models–Backward removal modeling principles, Multiple Regression Models–Multiple Regression Models regression equation for data, Multiple Regression Models–Multiple Regression Models results for individual predictors, Multiple Regression Models–Multiple Regression Models standardized coefficients, Multiple Regression Models variables in model, Multiple Regression Models–Multiple Regression Models multiple-forms (parallel-forms) reliability, Reliability, Reliability of a Composite Test multiple-occasions reliability (test-retest reliability), Reliability, Reliability of a Composite Test multivariate, Bivariate Charts mutual exclusive events, Mutual Exclusivity N Naperian logarithms, Properties of Roots National Institute of Standards and Technology, Unequal Variance t-Test Engineering Statistics Handbook, Unequal Variance t-Test National Institute of Standards and Technology (U.S.

.)), Probability Tables for Common Distributions NNT (Number Needed to Treat), Attributable Risk, Attributable Risk Percentage, and Number Needed to Treat nominal data, Nominal Data–Nominal Data, Glossary of Statistical Terms about, Nominal Data–Nominal Data definition of, Glossary of Statistical Terms nonparametric statistics, Data Transformations, Nonparametric Statistics, Nonparametric Statistics, Glossary of Statistical Terms about, Nonparametric Statistics definition of, Glossary of Statistical Terms parametric statistics and, Data Transformations, Nonparametric Statistics nonprobability sampling, Nonprobability Sampling–Nonprobability Sampling, Glossary of Statistical Terms nonresponse bias, Bias in Sample Selection and Retention, Glossary of Statistical Terms norm group, Percentiles norm-referenced, Percentiles, Test Construction scoring, Percentiles tests, Test Construction normal distribution, The Normal Distribution–The Normal Distribution, The Histogram normal distribution, standard, The Standard Normal Distribution–The t-Distribution normal score, The Normal Distribution–The Normal Distribution, Percentiles normalized scores, The Normal Distribution–The Normal Distribution null hypothesis, Hypothesis Testing number line, Linear regression, Laws of Arithmetic Number Needed to Treat (NNT), Attributable Risk, Attributable Risk Percentage, and Number Needed to Treat numeric and string data, String and Numeric Data O observational studies, Observational Studies–Observational Studies observed score, Glossary of Statistical Terms observed values, The Chi-Square Test odds ratio, The Odds Ratio–The Odds Ratio odds, calculating, The Odds Ratio OLS (Ordinary Least Squares) regression equation, Independent and Dependent Variables omnibus F-test, Post Hoc Tests one-group pretest-posttest design, Quasi-Experimental Studies one-sample t-test, The One-Sample t-Test–Confidence Interval for the One-Sample t-Test one-way ANOVA, The t-Test, One-Way ANOVA–One-Way ANOVA about, One-Way ANOVA–One-Way ANOVA t-test and, The t-Test online resources, Online Resources–Online Textbooks operationalization, Operationalization, Glossary of Statistical Terms opportunity loss table, Minimax, Maximax, and Maximin ordinal data, Ordinal Data–Ordinal Data, Categorical Data, The R×C Table–The R×C Table, Measures of Agreement–Measures of Agreement, The Wilcoxon Rank Sum Test, The Wilcoxon Rank Sum Test, Glossary of Statistical Terms about, Ordinal Data–Ordinal Data, Categorical Data definition of, Glossary of Statistical Terms mean rank, The Wilcoxon Rank Sum Test measures of agreement, Measures of Agreement–Measures of Agreement rank sum, The Wilcoxon Rank Sum Test R×C table, The R×C Table–The R×C Table ordinal variables, correlation statistics for, Ordinal Variables, Ordinal Variables–Ordinal Variables, Ordinal Variables, Ordinal Variables–Ordinal Variables, Ordinal Variables, Ordinal Variables, Ordinal Variables–Ordinal Variables gamma, Ordinal Variables–Ordinal Variables Kendall’s tau-a, Ordinal Variables, Ordinal Variables Kendall’s tau-b, Ordinal Variables–Ordinal Variables Kendall’s tau-c, Ordinal Variables Somers’s d, Ordinal Variables–Ordinal Variables Spearman’s rank-order coefficient, Ordinal Variables Ordinary Least Squares (OLS) regression equation, Independent and Dependent Variables orthogonality, in research design structure, Ingredients of a Good Design outliers, Outliers–Outliers overfitting, Overfitting–Overfitting P p-values, p-valuesp-values, The Z-Statistic about, p-valuesp-values of Z value, The 
Z-Statistic Paasche index, Index Numbers–Index Numbers Packel, Edward, The Mathematics of Games and Gambling, Closing Note: The Connection between Statistics and Gambling parallel-forms (multiple-forms) reliability, Reliability parameters, in descriptive statistics, Inferential Statistics, Populations and Samples parametric statistics, Data Transformations, Nonparametric Statistics, Glossary of Statistical Terms Pareto charts (diagrams), Pareto Charts–Pareto Charts Pareto, Vilfredo, Pareto Charts partial correlation, Methods for Building Regression Models PCA (Principal Components Analysis), Factor Analysis–Factor Analysis, Factor Analysis, Factor Analysis–Factor Analysis Pearson correlation coefficient, Correlation Statistics for Categorical Data, Binary Variables, The Pearson Correlation Coefficient, Association–Association, Scatterplots–Relationships Between Continuous Variables, Relationships Between Continuous Variables–Relationships Between Continuous Variables, The Pearson Correlation Coefficient–Testing Statistical Significance for the Pearson Correlation, Testing Statistical Significance for the Pearson Correlation–Testing Statistical Significance for the Pearson Correlation, The Coefficient of Determination about, Correlation Statistics for Categorical Data, Binary Variables, The Pearson Correlation Coefficient about correlation coefficient, The Pearson Correlation Coefficient–Testing Statistical Significance for the Pearson Correlation associations, Association–Association coefficient of determination, The Coefficient of Determination relationships between continuous variables, Relationships Between Continuous Variables–Relationships Between Continuous Variables scatterplots as visual tool, Scatterplots–Relationships Between Continuous Variables testing statistical significance for, Testing Statistical Significance for the Pearson Correlation–Testing Statistical Significance for the Pearson Correlation Pearson’s chi-square test, The Chi-Square Test (see chi-square test) peer review process, journal, The Peer Review Process–The Peer Review Process percent agreement measures, Measures of Agreement percentages, interpreting, Power for the Test of the Difference between Two Sample Means (Independent Samples t-Test) percentiles, Percentiles–Percentiles permutations, Factorials, Permutations, and Combinations–Factorials, Permutations, and Combinations permutations of elements, Permutations phi coefficient, Binary Variables–Binary Variables, Item Analysis physical vs. 
social sciences, definition of treatments, Specifying Treatment Levels pie charts, Pie Charts placebo, Glossary of Statistical Terms placebo effect, Blinding, Glossary of Statistical Terms playing cards, Dice, Coins, and Playing Cards point estimates, calculating, Confidence Intervals point-biserial correlation coefficient, The Point-Biserial Correlation Coefficient–The Point-Biserial Correlation Coefficient, Item Analysis polynomial regression, Polynomial Regression–Polynomial Regression populations, Inferential Statistics, Inferential Statistics, Populations and Samples–Probability Sampling, Descriptive Statistics and Graphic Displays–Populations and Samples, The Mean–The Mean, The Variance and Standard Deviation, The Variance and Standard Deviation, Population in descriptive statistics, Descriptive Statistics and Graphic Displays–Populations and Samples, The Mean–The Mean, The Variance and Standard Deviation, The Variance and Standard Deviation calculating variance, The Variance and Standard Deviation formula for standard deviation, The Variance and Standard Deviation mean, The Mean–The Mean samples and, Descriptive Statistics and Graphic Displays–Populations and Samples in inferential statistics, Inferential Statistics, Inferential Statistics, Populations and Samples–Probability Sampling mean, Inferential Statistics samples and, Populations and Samples–Probability Sampling variance, Inferential Statistics issues in research design with, Population positive discrimination, Item Analysis post hoc test, Post Hoc Tests–Post Hoc Tests, Glossary of Statistical Terms posttest only design, Quasi-Experimental Studies posttest-only non-equivalent groups design, Quasi-Experimental Studies power, Glossary of Statistical Terms power analysis, Power Analysis–Power Analysis, Ingredients of a Good Design power of coincidence, issues in research design with, The Power of Coincidence Practical Nonparametric Statistics (Conover), Nonparametric Statistics presidential elections, predictions of, Exercises pretest-posttest design with comparison group, Quasi-Experimental Studies prevalence, Prevalence and Incidence–Prevalence and Incidence, Prevalence and Incidence, Glossary of Statistical Terms primary data, Basic Vocabulary Principal Components Analysis (PCA), Factor Analysis–Factor Analysis probability, Defining Probability–Intersection of nonindependent events, Expressing the Probability of an Event–Expressing the Probability of an Event, Conditional Probabilities–Conditional Probabilities conditional, Conditional Probabilities–Conditional Probabilities definition of, Defining Probability–Intersection of nonindependent events of events, Expressing the Probability of an Event–Expressing the Probability of an Event probability distributions, in inferential statistics, Probability Distributions–The Binomial Distribution probability sampling, Probability Sampling–Probability Sampling, Glossary of Statistical Terms probability tables for distributions, Probability Tables for Common Distributions–The Chi-Square Distribution, The Standard Normal Distribution–The t-Distribution, The t-Distribution–The t-Distribution, The Binomial Distribution–The Binomial Distribution, The Chi-Square Distribution–The Chi-Square Distribution about, Probability Tables for Common Distributions–The Chi-Square Distribution binomial distribution, The Binomial Distribution–The Binomial Distribution chi-square distribution, The Chi-Square Distribution–The Chi-Square Distribution standard normal distribution, The Standard Normal 
Distribution–The t-Distribution t-distribution, The t-Distribution–The t-Distribution probability theory, Probability–Probability, About Formulas–About Formulas, About Formulas–Combinations, Defining Probability–Intersection of nonindependent events, Bayes’ Theorem–Bayes’ Theorem, Closing Note: The Connection between Statistics and Gambling–Closing Note: The Connection between Statistics and Gambling about, Probability–Probability Bayes’ theorem and, Bayes’ Theorem–Bayes’ Theorem defining probability, Defining Probability–Intersection of nonindependent events definitions in, About Formulas–Combinations formulas, About Formulas–About Formulas gambling and, Closing Note: The Connection between Statistics and Gambling–Closing Note: The Connection between Statistics and Gambling product-moment correlation coefficient, The Pearson Correlation Coefficient propensity score, Observational Studies properties of equality, Solving Equations proportion, Proportions: The Large Sample Case–Proportions: The Large Sample Case, Ratio, Proportion, and Rate, Ratio, Proportion, and Rate, Glossary of Statistical Terms about, Ratio, Proportion, and Rate definition of, Glossary of Statistical Terms formula for, Ratio, Proportion, and Rate large-sample Z tests for, Proportions: The Large Sample Case–Proportions: The Large Sample Case prospective cohort study, Basic Vocabulary prospective study, Basic Vocabulary, Glossary of Statistical Terms proxy measurement, Proxy Measurement–Proxy Measurement, Glossary of Statistical Terms pseudo-chance-level parameter, Item Response Theory psychological and educational statistics, Educational and Psychological Statistics–Educational and Psychological Statistics, Percentiles–Percentiles, Standardized Scores–Standardized Scores, Test Construction–Test Construction, Classical Test Theory: The True Score Model–Classical Test Theory: The True Score Model, Reliability of a Composite Test–Reliability of a Composite Test, Measures of Internal Consistency–Coefficient Alpha, Item Analysis–Item Analysis, Item Response Theory–Item Response Theory about, Educational and Psychological Statistics–Educational and Psychological Statistics classical test theory, Classical Test Theory: The True Score Model–Classical Test Theory: The True Score Model item analysis, Item Analysis–Item Analysis item response theory, Item Response Theory–Item Response Theory measures of internal consistency, Measures of Internal Consistency–Coefficient Alpha percentiles, Percentiles–Percentiles reliability of composite test, Reliability of a Composite Test–Reliability of a Composite Test standardized scores, Standardized Scores–Standardized Scores test construction, Test Construction–Test Construction psychometrics, Educational and Psychological Statistics publication bias, Quick Checklist Q quadratic regression model, Polynomial Regression–Polynomial Regression Quality Improvement (QI), Quality Improvement–Run Charts and Control Charts quasi-experimental, Basic Vocabulary–Basic Vocabulary, Quasi-Experimental Studies–Quasi-Experimental Studies research design type, Basic Vocabulary–Basic Vocabulary studies, Quasi-Experimental Studies–Quasi-Experimental Studies quota sampling, Nonprobability Sampling R R programming language, Graphic Methods, R–R random errors, Random and Systematic Error–Random and Systematic Error, Glossary of Statistical Terms definition of, Glossary of Statistical Terms vs. 
systematic errors, Random and Systematic Error–Random and Systematic Error random measurement error, Classical Test Theory: The True Score Model–Classical Test Theory: The True Score Model Random-Digit-Dialing (RDD) techniques, Bias in Sample Selection and Retention randomization, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio randomized block design, Blocking and the Latin Square range, Glossary of Statistical Terms range and interquartile range, The Range and Interquartile Range–The Range and Interquartile Range rank sum, The Wilcoxon Rank Sum Test Rasch model, Item Response Theory Rasch, Georg, Item Response Theory rate, Ratio, Proportion, and Rate–Ratio, Proportion, and Rate, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates, Glossary of Statistical Terms about, Ratio, Proportion, and Rate–Ratio, Proportion, and Rate crude rate as, Crude, Category-Specific, and Standardized Rates–Crude, Category-Specific, and Standardized Rates definition of, Glossary of Statistical Terms ratio, Ratio, Proportion, and Rate, Glossary of Statistical Terms about, Ratio, Proportion, and Rate definition of, Glossary of Statistical Terms ratio data, Ratio Data–Ratio Data, Glossary of Statistical Terms about, Ratio Data–Ratio Data definition of, Glossary of Statistical Terms raw time series, Time Series real numbers, properties of, Properties of Real Numbers recall bias, Information Bias, Glossary of Statistical Terms rectangular coordinates (Cartesian coordinates), Graphing Equations–Graphing Equations rectangular data file, storing data electronically in, Codebooks–The Rectangular Data File regression, Independent and Dependent Variables–Independent and Dependent Variables, Introduction to Regression and ANOVA, Linear Regression–Linear Regression, Assumptions–Assumptions, Calculating Simple Regression by Hand–Calculating Simple Regression by Hand, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal, Logistic Regression–Converting Logits to Probabilities, Multinomial Logistic Regression–Multinomial Logistic Regression, Polynomial Regression–Polynomial Regression, Polynomial Regression–Polynomial Regression, Polynomial Regression–Polynomial Regression, Overfitting–Overfitting, Quasi-Experimental Studies about, Introduction to Regression and ANOVA arbitrary curve-fitting, Overfitting–Overfitting calculating by hand, Calculating Simple Regression by Hand–Calculating Simple Regression by Hand cubic regression model, Polynomial Regression–Polynomial Regression independent variables and dependent variables, Independent and Dependent Variables–Independent and Dependent Variables linear, Linear Regression–Linear Regression, Assumptions–Assumptions about, Linear Regression–Linear Regression assumptions, Assumptions–Assumptions logistic, Logistic Regression–Converting Logits to Probabilities modeling principles, Multiple Regression Models–Multiple Regression Models multinomial logistic, Multinomial Logistic Regression–Multinomial Logistic Regression multiple linear, 
Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Multiple Regression Models–Multiple Regression Models, Dummy Variables–Dummy Variables, Methods for Building Regression Models–Backward removal about, Multiple Regression Models–Multiple Regression Models adding interaction term, Multiple Regression Models–Multiple Regression Models assumptions, Multiple Regression Models creating a correlation matrix, Multiple Regression Models–Multiple Regression Models dummy variables, Dummy Variables–Dummy Variables methods for building regression models, Methods for Building Regression Models–Backward removal regression equation for data, Multiple Regression Models–Multiple Regression Models results for individual predictors, Multiple Regression Models–Multiple Regression Models standardized coefficients, Multiple Regression Models variables in model, Multiple Regression Models–Multiple Regression Models polynomial, Polynomial Regression–Polynomial Regression quadratic regression model, Polynomial Regression–Polynomial Regression to the mean, Quasi-Experimental Studies regression equations, independent variables and dependent variables in, Independent and Dependent Variables–Independent and Dependent Variables regression to the mean, Quasi-Experimental Studies related samples t-test, Repeated Measures t-Test–Confidence Interval for the Repeated Measures t-Test relational databases, for data management, Spreadsheets and Relational Databases–Spreadsheets and Relational Databases relative frequency, Frequency Tables, Bar Charts–Bar Charts relative risk, The Risk Ratio–The Risk Ratio reliability, Reliability and Validity–Triangulation, Reliability–Reliability, Glossary of Statistical Terms about, Reliability–Reliability definition of, Glossary of Statistical Terms validity and, Reliability and Validity–Triangulation reliability coefficient, Reliability of a Composite Test reliability index, Reliability of a Composite Test repeated measures (related samples) t-test, Repeated Measures t-Test–Confidence Interval for the Repeated Measures t-Test research articles, Writing the Article–Writing the Article, Common Problems–Common Problems, Quick Checklist–Quick Checklist, Issues in Research Design–The Power of Coincidence, Descriptive Statistics–Extrapolation and Trends, Extrapolation and Trends–Linear regression checklist for statistics based investigations, Quick Checklist–Quick Checklist common problems with, Common Problems–Common Problems critiquing descriptive statistics, Descriptive Statistics–Extrapolation and Trends incorrect use of tests in inferential statistics, Extrapolation and Trends–Linear regression issues in research design, Issues in Research Design–The Power of Coincidence writing, Writing the Article–Writing the Article research design, Research Design, Basic Vocabulary–Basic Vocabulary, Basic Vocabulary, Basic Vocabulary, Basic Vocabulary, Basic Vocabulary, Basic Vocabulary, Basic Vocabulary, Observational Studies–Observational Studies, Quasi-Experimental Studies–Quasi-Experimental Studies, Experimental Studies–Experimental Studies, Ingredients of a Good Design–Ingredients of a Good Design, Gathering Experimental Data–Blocking and the Latin Square, Specifying Treatment Levels, Specifying Response Variables, 
Blinding, Retrospective Adjustment, Blocking and the Latin Square, Example Experimental Design–Example Experimental Design, Communicating with Statistics–Writing for Your Workplace, Issues in Research Design–The Power of Coincidence about, Research Design blinding, Blinding blocking and Latin square, Blocking and the Latin Square classification of studies, Basic Vocabulary communicating with statistics, Communicating with Statistics–Writing for Your Workplace data types, Basic Vocabulary example of, Example Experimental Design–Example Experimental Design experimental studies, Experimental Studies–Experimental Studies factor in, Basic Vocabulary factorial design, Basic Vocabulary gathering experimental data, Gathering Experimental Data–Blocking and the Latin Square hypothesis testing vs. data mining, Specifying Response Variables ingredients of good design, Ingredients of a Good Design–Ingredients of a Good Design issues in, Issues in Research Design–The Power of Coincidence observational studies, Observational Studies–Observational Studies physical vs. social sciences definition of treatments, Specifying Treatment Levels quasi-experimental studies, Quasi-Experimental Studies–Quasi-Experimental Studies retrospective adjustment, Retrospective Adjustment style of notation, Basic Vocabulary types of, Basic Vocabulary–Basic Vocabulary unit of analysis in study, Basic Vocabulary response variables, specifying in experimental design, Specifying Response Variables–Specifying Response Variables responses, experimental, Experimental Studies restriction, Confounding, Stratified Analysis, and the Mantel-Haenszel Common Odds Ratio results section, Writing the Article, Evaluating the Whole Article critiquing in articles, Evaluating the Whole Article writing, Writing the Article retrospective adjustment, Retrospective Adjustment retrospective study, Basic Vocabulary, Glossary of Statistical Terms risk ratio, The Risk Ratio–Attributable Risk, Attributable Risk Percentage, and Number Needed to Treat Robinson, W.S., Basic Vocabulary rolling average, Time Series roots, properties of, Properties of Roots–Properties of Roots Rosenbaum, Paul, Observational Studies Rubin, Donald, Observational Studies Rubin, Roderick J.A., String and Numeric Data–Missing Data run charts and control charts, Run Charts and Control Charts R×C table (contingency table), The R×C Table–The R×C Table S Safari Books Online, Safari® Books Online sample size calculations, Sample Size Calculations–Power for the Test of the Difference between Two Sample Means (Independent Samples t-Test) sample space, definition of, Sample Space–Sample Space samples, Inferential Statistics, Inferential Statistics, Populations and Samples–Probability Sampling, Populations and Samples, Descriptive Statistics and Graphic Displays–Populations and Samples, The Variance and Standard Deviation, The Variance and Standard Deviation, The One-Sample t-Test–Confidence Interval for the One-Sample t-Test, The Independent Samples t-Test–Confidence Interval for the Independent Samples t-Test, Repeated Measures t-Test–Confidence Interval for the Repeated Measures t-Test in descriptive statistics, Descriptive Statistics and Graphic Displays–Populations and Samples, The Variance and Standard Deviation, The Variance and Standard Deviation calculating variance, The Variance and Standard Deviation formula for standard deviation, The Variance and Standard Deviation populations and, Descriptive Statistics and Graphic Displays–Populations and Samples in inferential statistics, 
Inferential Statistics, Inferential Statistics, Populations and Samples–Probability Sampling mean, Inferential Statistics populations and, Inferential Statistics, Populations and Samples–Probability Sampling one-sample t-test, The One-Sample t-Test–Confidence Interval for the One-Sample t-Test related samples t-test, Repeated Measures t-Test–Confidence Interval for the Repeated Measures t-Test two-sample t-test, The Independent Samples t-Test–Confidence Interval for the Independent Samples t-Test U.S.

In this case, the probability of getting 8, 9, or 10 heads in 10 flips of a fair coin is 0.0439 + 0.0098 + 0.0010, or 0.0547. This is the p-value for the result of at least 8 heads in 10 trials, using a coin where P(heads) = 0.5. p-values are commonly reported for most research results involving statistical calculations, in part because intuition is a poor guide to how unusual a particular result is. For instance, many people might think it is unusual to get 8 or more heads on 10 trials using a fair coin. There is no statistical definition of what constitutes “unusual” results, so we will use the common standard that the p-value for our results must be less than 0.05 for us to reject the null hypothesis (which is, in this case, that the coin is fair). In this example, somewhat surprisingly, this standard is not met. The p-value for our result (8 heads in 10 trials) does not allow us to reject the null hypothesis that the coin is fair, that is, that P(heads) = 0.5, because 0.0547 is greater than 0.05.
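These tail probabilities are easy to verify with software. The following short sketch (my own illustration, not part of the text) uses scipy's binomial distribution to reproduce the same number:

from scipy import stats

# P(X >= 8) for X ~ Binomial(n=10, p=0.5), i.e., at least 8 heads in 10 flips of a fair coin
p_value = stats.binom.sf(7, 10, 0.5)   # sf(7) = P(X > 7) = P(X >= 8)
print(round(p_value, 4))               # 0.0547, matching 0.0439 + 0.0098 + 0.0010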


pages: 408 words: 85,118

Python for Finance by Yuxing Yan

asset-backed security, business cycle, business intelligence, capital asset pricing model, constrained optimization, correlation coefficient, distributed generation, diversified portfolio, implied volatility, market microstructure, P = NP, p-value, quantitative trading / quantitative finance, Sharpe ratio, time value of money, value at risk, volatility smile, zero-sum game

Then, we conduct two tests: test whether the mean is 5.0, and test whether the mean is zero: >>>from scipy import stats >>>np.random.seed(1235) >>>x = stats.norm.rvs(size=10000) >>>print("T-value P-value (two-tail)") >>>print(stats.ttest_1samp(x,5.0)) >>>print(stats.ttest_1samp(x,0)) T-value P-value (two-tail) (array(-495.266783341032), 0.0) (array(-0.26310321925083124), 0.79247644375164772) >>> For the first test, in which we test whether the time series has a mean of 5.0, we reject the null hypothesis, since the T-value is -495.3 and the P-value is 0. For the second test, we accept the null hypothesis, since the T-value is close to -0.26 and the P-value is 0.79. In the following program, we test whether the mean daily return for IBM in 2013 is zero: from scipy import stats from matplotlib.finance import quotes_historical_yahoo ticker='ibm' begdate=(2013,1,1) enddate=(2013,11,9) p=quotes_historical_yahoo(ticker,begdate,enddate,asobject=True, adjusted=True) ret=(p.aclose[1:] - p.aclose[:-1])/p.aclose[:-1] print(' Mean T-value P-value ' ) print(round(mean(ret),5), stats.ttest_1samp(ret,0)) Mean T-value P-value (-0.00024, (array(-0.296271094280657), 0.76730904089713181)) From the previous results, we know that the average daily return for IBM is about -0.024 percent.

The T-value is -0.29 while the P-value is 0.77. Thus, the mean is statistically not different from zero. Tests of equal means and equal variances Next, we test whether the variances for IBM and DELL in 2013 are equal or not. The function called sp.stats.bartlett performs Bartlett's test for equal variances, with a null hypothesis that all input samples are from populations with equal variances. The outputs are the test statistic and the P-value: import scipy as sp from matplotlib.finance import quotes_historical_yahoo begdate=(2013,1,1) enddate=(2013,11,9) def ret_f(ticker,begdate,enddate): p = quotes_historical_yahoo(ticker,begdate, enddate,asobject=True,adjusted=True) return((p.open[1:] - p.open[:-1])/p.open[:-1]) y=ret_f('IBM',begdate,enddate) x=ret_f('DELL',begdate,enddate) print(sp.stats.bartlett(x,y)) (5.1377132006045105, 0.023411467035559311) With a test statistic of 5.14 and a P-value of 2.3 percent, we conclude that these two stocks have different variances for their daily stock returns in 2013 if we choose a significance level of 5 percent.

The following is the related Python code: import numpy as np import statsmodels.api as sm import scipy as sp def breusch_pagan_test(y,x): results=sm.OLS(y,x).fit() resid=results.resid n=len(resid) sigma2 = sum(resid**2)/n f = resid**2/sigma2 - 1 results2=sm.OLS(f,x).fit() fv=results2.fittedvalues bp=0.5 * sum(fv**2) df=results2.df_model p_value=1-sp.stats.chi2.cdf(bp,df) return round(bp,6), df, round(p_value,7) sp.random.seed(12345) n=100 x=[] error1=sp.random.normal(0,1,n) error2=sp.random.normal(0,2,n) for i in range(n): if i%2==1: x.append(1) else: x.append(-1) y1=x+np.array(x)+error1 y2=np.zeros(n) for i in range(n): if i%2==1: y2[i]=x[i]+error1[i] else: y2[i]=x[i]+error2[i] print('y1 vs. x (we expect to accept the null hypothesis)') print('BP value, df, p-value') bp=breusch_pagan_test(y1,x) print('bp =', bp) print('y2 vs. x (we expect to reject the null hypothesis)') print('BP value, df, p-value') bp=breusch_pagan_test(y2,x) print('bp =', bp) For the result of running the regression of y1 against x, we expect its residuals to be homogeneous, that is, their variance (or standard deviation) is a constant.


Commodity Trading Advisors: Risk, Performance Analysis, and Selection by Greg N. Gregoriou, Vassilios Karavas, François-Serge Lhabitant, Fabrice Douglas Rouah

Asian financial crisis, asset allocation, backtesting, buy and hold, capital asset pricing model, collateralized debt obligation, commodity trading advisor, compound rate of return, constrained optimization, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, distributed generation, diversification, diversified portfolio, dividend-yielding stocks, fixed income, high net worth, implied volatility, index arbitrage, index fund, interest rate swap, iterative process, linear programming, London Interbank Offered Rate, Long Term Capital Management, market fundamentalism, merger arbitrage, Mexican peso crisis / tequila crisis, p-value, Pareto efficiency, Ponzi scheme, quantitative trading / quantitative finance, random walk, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, stochastic process, survivorship bias, systematic trading, technology bubble, transaction costs, value at risk, zero-sum game

[TABLE 21.4: ADF Tests; CTA Excess Returns: ARMA Models (AR/MA coefficient estimates with p-values and R2 for CTA Exc1 through CTA Exc10).] [TABLE 21.5: CTA Returns, 2000 to 2003: ARMA Models (coefficient estimates with p-values and R2 for CTA #3, #4, and #8).] there is a significant improvement for CTA #3 and #8 (evidenced by the increased R2).

As shown in Table 21.5, [TABLE 21.3: ADF Tests; CTA Returns: ARMA Models (coefficient estimates with p-values, R2, and Chow F-statistic p-values for CTA1 through CTA10). All ADF tests are at the 99 percent confidence level; CTA3 rejects the hypothesis of a unit root at 90 percent.]

The Spearman correlation coefficients show some ability to detect persistence when large differences are found in CTA data. [TABLE 3.4: EGR Performance Persistence Results from Monte Carlo Generated Data Sets: No Persistence Present by Restricting a = 1 (mean returns and p-values by subgroup, for three data generation methods).] [TABLE 3.5: EGR Performance Persistence Results from Monte Carlo Generated Data Sets: Persistence Present by Allowing a to Vary (mean returns and p-values by subgroup, for four data generation methods).]


pages: 681 words: 64,159

Numpy Beginner's Guide - Third Edition by Ivan Idris

algorithmic trading, business intelligence, Conway's Game of Life, correlation coefficient, Debian, discrete time, en.wikipedia.org, general-purpose programming language, Khan Academy, p-value, random walk, reversible computing, time value of money

It is instructive to see what happens if we generate more points, because with more points we should get a more normal distribution. For 900,000 points, we get a p-value of 0.16. For 20 generated values, the p-value is 0.50. 4. Kurtosis tells us how curved a probability distribution is. Perform a kurtosis test. This test is set up similarly to the skewness test, but, of course, applies to kurtosis: print("Kurtosistest", "pvalue", stats.kurtosistest(generated)) The result of the kurtosis test appears as follows: Kurtosistest pvalue (1.3065381019536981, 0.19136963054975586) The p-value for 900,000 values is 0.028. For 20 generated values, the p-value is 0.88. 5. A normality test tells us how likely it is that a dataset complies with the normal distribution. Perform a normality test. This test also returns two values, of which the second is a p-value: print("Normaltest", "pvalue", stats.normaltest(generated)) The result of the normality test appears as follows: Normaltest pvalue (2.09293921181506, 0.35117535059841687) The p-value for 900,000 generated values is 0.035.

This basically gives the mean and standard deviation of the dataset: print("Mean", "Std", stats.norm.fit(generated)) The mean and standard deviation appear as follows: Mean Std (0.0071293257063200707, 0.95537708218972528) 3. Skewness tells us how skewed (asymmetric) a probability distribution is (see http://en.wikipedia.org/wiki/Skewness ). Perform a skewness test. This test returns two values. The second value is the p-value—the probability that the skewness of the dataset does not correspond to a normal distribution. Generally speaking, the p-value is the probability of an outcome different than what was expected given the null hypothesis—in this case, the probability of getting a skewness different from that of a normal distribution (which is 0 because of symmetry). P-values range from 0 to 1: print("Skewtest", "pvalue", stats.skewtest(generated)) The result of the skewness test appears as follows: Skewtest pvalue (-0.62120640688766893, 0.5344638245033837) So, under the null hypothesis of normality, a skewness at least this extreme would occur about 53 percent of the time, which gives no evidence that the data are not normally distributed.

Compute the log returns by taking the natural logarithm of the close price and then taking the difference of consecutive values: spy = np.diff(np.log(get_close("SPY"))) dia = np.diff(np.log(get_close("DIA"))) 3. The means comparison test checks whether two different samples could have the same mean value. Two values are returned, of which the second is the p-value from 0 to 1: print("Means comparison", stats.ttest_ind(spy, dia)) The result of the means comparison test appears as follows: Means comparison (-0.017995865641886155, 0.98564930169871368) So, with a p-value of about 0.99, we cannot reject the null hypothesis that the two samples have the same mean log return. Actually, the documentation has the following to say: If we observe a large p-value, for example, larger than 0.05 or 0.1, then we cannot reject the null hypothesis of identical average scores. If the p-value is smaller than the threshold, e.g. 1%, 5% or 10%, then we reject the null hypothesis of equal averages. 4. The Kolmogorov–Smirnov two-sample test tells us how likely it is that two samples are drawn from the same distribution: print("Kolmogorov smirnov test", stats.ks_2samp(spy, dia)) Again, two values are returned, of which the second value is the p-value: Kolmogorov smirnov test (0.063492063492063516, 0.67615647616238039) 5.


pages: 446 words: 102,421

Network Security Through Data Analysis: Building Situational Awareness by Michael S Collins

business process, cloud computing, create, read, update, delete, Firefox, general-purpose programming language, index card, Internet Archive, inventory management, iterative process, p-value, Parkinson's law, peer-to-peer, slashdot, statistical model, zero day

In statistical testing, this is done by using a p-value. The p-value is the probability that if the null hypothesis is true, you will get a result at least as extreme as the observed results. The lower the p-value, the lower the probability that the observed result could have occurred under the null hypothesis. Conventionally, a null hypothesis is rejected when the p-value is below 0.05. To understand the concept of extremity here, consider a binomial test with no successes and four coin flips. In R: > binom.test(0,4,p=0.5) Exact binomial test data: 0 and 4 number of successes = 0, number of trials = 4, p-value = 0.125 alternative hypothesis: true probability of success is not equal to 0.5 95 percent confidence interval: 0.0000000 0.6023646 sample estimates: probability of success 0 That p-value of 0.125 is the sum of the probability of four heads in four flips (0.0625) and the probability of four tails (also 0.0625).

. > # Note that I use punif to get the distribution and pass in the same > # parameters as I would if I were calling punif on its own > ks.test(a.set, punif, min=10, max=20) One-sample Kolmogorov-Smirnov test data: a.set D = 0.0862, p-value = 0.447 alternative hypothesis: two-sided > # I need an estimate before using the test. > # For the uniform, I can use min and max, like I'd use mean and sd for > # the normal > ks.test(a.set,punif,min=min(a.set),max=max(a.set)) One-sample Kolmogorov-Smirnov test data: a.set D = 0.0829, p-value = 0.4984 alternative hypothesis: two-sided > # Now one where I reject the null; I'll treat the data as if it > # were normally distributed and estimate again > ks.test(a.set,pnorm,mean=mean(a.set),sd=sd(a.set)) One-sample Kolmogorov-Smirnov test data: a.set D = 0.0909, p-value = 0.3806 alternative hypothesis: two-sided > #Hmm, p-value's high... Because I'm not using enough samples, let's > # do this again with 400 samples each. > a.set<-runif(400,min=10,max=20) > b.set<-runif(400,min=10,max=20) > # Compare against each other > ks.test(a.set,b.set)$p.value [1] 0.6993742 > # Compare against the distribution > ks.test(a.set,punif,min=min(a.set),max=max(a.set))$p.value [1] 0.5499412 > # Compare against a different distribution > ks.test(a.set,pnorm, mean = mean(a.set),sd=sd(a.set))$p.value [1] 0.001640407 The KS test has weak power.
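For comparison, a rough Python analogue of these checks (not from the book; a sketch assuming numpy and scipy are available). Note that scipy's uniform distribution is parameterized by loc and scale, so min=10, max=20 becomes loc=10, scale=10:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.uniform(10, 20, size=400)
    b = rng.uniform(10, 20, size=400)

    print(stats.kstest(a, "uniform", args=(10, 10)))  # one-sample KS test against U(10, 20)
    print(stats.ks_2samp(a, b))                       # two-sample KS test: a vs. b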

That p-value of 0.125 is, in this context, "two-tailed," meaning that it accounts for both extremes. Similarly, if we observe one success in four trials: > binom.test(1,4,p=0.5) Exact binomial test data: 1 and 4 number of successes = 1, number of trials = 4, p-value = 0.625 alternative hypothesis: true probability of success is not equal to 0.5 95 percent confidence interval: 0.006309463 0.805879550 sample estimates: probability of success 0.25 The p-value is 0.625, the sum of 0.0625 + 0.25 + 0.25 + 0.0625 (everything but the probability of 2 heads and 2 tails). Testing Data One of the most common tests to do with R is to test whether or not a particular dataset matches a distribution. For information security and anomaly detection, having data that follows a distribution enables us to estimate thresholds for alarms.
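For readers working in Python rather than R, a minimal counterpart to the two binom.test calls above (a sketch, assuming scipy 1.7 or later, which provides stats.binomtest):

    from scipy import stats

    print(stats.binomtest(0, n=4, p=0.5).pvalue)  # 0.125: two-sided p-value for 0 successes in 4 trials
    print(stats.binomtest(1, n=4, p=0.5).pvalue)  # 0.625: two-sided p-value for 1 success in 4 trials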


pages: 284 words: 79,265

The Half-Life of Facts: Why Everything We Know Has an Expiration Date by Samuel Arbesman

Albert Einstein, Alfred Russel Wallace, Amazon Mechanical Turk, Andrew Wiles, bioinformatics, British Empire, Cesare Marchetti: Marchetti’s constant, Chelsea Manning, Clayton Christensen, cognitive bias, cognitive dissonance, conceptual framework, David Brooks, demographic transition, double entry bookkeeping, double helix, Galaxy Zoo, guest worker program, Gödel, Escher, Bach, Ignaz Semmelweis: hand washing, index fund, invention of movable type, Isaac Newton, John Harrison: Longitude, Kevin Kelly, life extension, Marc Andreessen, meta analysis, meta-analysis, Milgram experiment, Nicholas Carr, P = NP, p-value, Paul Erdős, Pluto: dwarf planet, publication bias, randomized controlled trial, Richard Feynman, Rodney Brooks, scientific worldview, social graph, social web, text mining, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Thomas Malthus, Tyler Cowen: Great Stagnation

On the other hand, imagine if we had gathered a much larger group and still had the same fractions: Out of 500 left-handers, 300 carried L, while out of 500 right-handers, only 220 were carriers for L. If we ran the exact same test, we get a much lower p-value. Now it’s less than 0.0001. This means that there is less than one hundredth of 1 percent chance that the differences are due to chance alone. The larger the sample we get, the better we can test our questions. The smaller the p-value, the more robust our findings. But to publish a result in a scientific journal, you don’t need a minuscule p-value. In general, you need a p-value less than 0.05 or, sometimes, 0.01. For 0.05, this means that there is a one in twenty probability that the result being reported is in fact not real! Comic strip writer Randall Munroe illustrated some of the failings of this threshold for scientific publication: The comic shows some scientists testing whether jelly beans cause acne.

IF you ever delve a bit below the surface when reading about a scientific result, you will often bump into the term p-value. P-values are an integral part of determining how new knowledge is created. More important, they give us a way of estimating the possibility of error. Anytime a scientist tries to discover something new or validate an exciting and novel hypothesis, she tests it against something else. Specifically, our scientist tests it against a version of the world where the hypothesis would not be true. This state of the world, where our intriguing hypothesis is not true and all that we see is exactly just as boring as we pessimistically expect, is known as the null hypothesis. Whether the world conforms to our exciting hypothesis or not can be determined by p-values. Let’s use an example. Imagine we think that a certain form of a gene—let’s call it L—is more often found in left-handed people than in right-handed people, and is therefore associated with left-handedness.

The science of statistics is designed to answer this question by asking it in a more precise fashion: What is the chance that there actually is an equal frequency of left-handers with L and right-handers with L, but we simply happened to get an uneven batch? We know that when flipping a coin ten times, we don’t necessarily get exactly five heads and five tails. The same is true in the null hypothesis scenario for our L experiment. Enter p-values. Using sophisticated statistical analyses, we can reduce this complicated question to a single number: the p-value. This provides us with the probability that our result, which appears to support our hypothesis, is simply due to chance. For example, using certain assumptions, we can calculate what the p-value is for the above results: 0.16, or 16 percent. What this means is that there is about a one in six chance that this result is simply due to sampling variation (getting a few more L left-handers and a few less L right-handed carriers than we expected, if they are of equal frequency).
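As a rough illustration (not from the book), this kind of p-value can be computed with a chi-squared test on the two-by-two table of counts. The smaller sample is assumed here to be 30 carriers out of 50 left-handers versus 22 out of 50 right-handers, the same 60 percent and 44 percent fractions as the 500-person groups quoted above:

    from scipy import stats

    # Assumed counts: [carriers, non-carriers] for left- and right-handers.
    small = [[30, 20], [22, 28]]       # 50 per group
    large = [[300, 200], [220, 280]]   # 500 per group, same fractions

    for table in (small, large):
        chi2, p, dof, expected = stats.chi2_contingency(table)
        print(round(p, 6))             # about 0.16 for the small table, far below 0.0001 for the large one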


pages: 205 words: 20,452

Data Mining in Time Series Databases by Mark Last, Abraham Kandel, Horst Bunke

call centre, computer vision, discrete time, G4S, information retrieval, iterative process, NP-complete, p-value, pattern recognition, random walk, sensor fusion, speech recognition, web application

[TABLE: Results of the CD hypothesis testing on the 'Manufacturing' database (CD and XP results by month: eMK−1,K, eMK−1,K−1, d, H(95%), and 1 − p-value).] [Fig. 2: Summary of implementing the change detection methodology on the 'Manufacturing' database (1 − p-value).] [Table 10: XP confidence level of all independent and dependent variables in the 'Manufacturing' database (1 − p-value).] According to the change detection methodology, during all six consecutive months there was no significant change in the rules describing the relationships between the candidate and the target variables (which is our main interest).

[Table 7: CD and XP change detection results by period (whether a change was introduced, eMK−1,K, eMK−1,K−1, d, H(95%), and 1 − p-value).] [Fig. 1: Summary of implementing the change detection methodology on an artificially generated time series database (1 − p-value).] [Table 8: Influence of discarding the detected change (Illustration).]

[TABLE: Outcomes of XP by validating the sixth month on the fifth and the first month in the 'Manufacturing' database (p-value), by domain: CAT GRP, MRKT Code, Duration, Time to Operate, Quantity, Customer GRP, and Target.] [Fig. 3: CD confidence level (1 − p-value) outcomes of validating the sixth month on the fifth and the first month in the 'Manufacturing' database.]


pages: 982 words: 221,145

Ajax: The Definitive Guide by Anthony T. Holdener

AltaVista, Amazon Web Services, business process, centre right, create, read, update, delete, database schema, David Heinemeier Hansson, en.wikipedia.org, Firefox, full text search, game design, general-purpose programming language, Guido van Rossum, information retrieval, loose coupling, MVC pattern, Necker cube, p-value, Ruby on Rails, slashdot, sorting algorithm, web application

> This code is pretty self-explanatory, though I am introducing a little function to take care of quote issues with SQL injection attacks with the function quote_smart( ). The function looks like this: <?php /** * This function, quote_smart, tries to ensure that a SQL injection attack * cannot occur. * * @param {string} $p_value The string to quote correctly. * @return string The properly quoted string. */ function quote_smart($p_value) { /* Are magic quotes on? */ if (get_magic_quotes_gpc( )) $p_value = stripslashes($p_value); /* Is the value a string to quote? */ if (!is_numeric($p_value) || $p_value[0] == '0') $p_value = "'".mysql_real_escape_string($p_value)."'"; return ($p_value); } ?> The quote_smart( ) function I am using is one of many variants available on the Web from which you can choose. Just remember to protect your SQL from attacks. The next bit of functionality that the server must be able to handle is receiving new text to place in the messages queue on the server.

_options.id + '_img').style.backgroundPosition = ((this.checked) ? (-1 * this._options.width) : 0) + 'px 0'; } else $(this._options.id + '_img').style.backgroundPosition = (-1 * ((2 * this._options.width) + ((this.checked)) ? this._options.width : 0)) + 'px 0'; }, /** * This method, _toggleValue, * * @member customRadioCheckControl * @param {Boolean} p_value The optional value to set the control to. * @see #_positionImage * @see #onChange */ _toggleValue: function(p_value) { /* Was a /p_value/ passed to the method? */ if (p_value) this.checked = p_value; else this.checked = !this.checked; this._positionImage( ); this.onChange( ); }, /** * This method, _createEvents, sets an /onclick/ event on the custom control. * * @member customRadioCheckControl * @see Event#observe */ _createEvents: function( ) { /* Was an id passed? */

It contains the following methods: * - set(p_name, p_value, p_expires) * - get(p_name) * - erase(p_name) * - accept( ) */ var Cookie = { /** * This method, set, creates a cookie with the name equal to /p_name/ with a * value of /p_value/ that expires at the specified /p_expires/ should it * exist, returning whether the cookie was created or not. * @member Cookie * @param {String} p_name The name for the cookie to be set. * @param {String} p_value The value for the cookie to be set. * @param {Float} p_expires The time before the cookie to be set expires. * @return Returns whether the cookie was set or not. * @type Boolean */ set: function(p_name, p_value, p_expires) { /* The expires string for the cookie */ var expires = ''; /* Was an expires time sent to the method?


pages: 204 words: 58,565

Keeping Up With the Quants: Your Guide to Understanding and Using Analytics by Thomas H. Davenport, Jinho Kim

Black-Scholes formula, business intelligence, business process, call centre, computer age, correlation coefficient, correlation does not imply causation, Credit Default Swap, en.wikipedia.org, feminist movement, Florence Nightingale: pie chart, forensic accounting, global supply chain, Hans Rosling, hypertext link, invention of the telescope, inventory management, Jeff Bezos, Johannes Kepler, longitudinal study, margin call, Moneyball by Michael Lewis explains big data, Myron Scholes, Netflix Prize, p-value, performance metric, publish or perish, quantitative hedge fund, random walk, Renaissance Technologies, Robert Shiller, Robert Shiller, self-driving car, sentiment analysis, six sigma, Skype, statistical model, supply-chain management, text mining, the scientific method, Thomas Davenport

Rare or unusual data (often represented by a p-value below a specified threshold) is an indication that H0 is false, which constitutes a statistically significant result and support of the alternative hypothesis. Independent variable: A variable whose value is known and used to help predict or explain a dependent variable. For example, if you wish to predict the quality of a vintage wine using various predictors (average growing season temperature, harvest rainfall, winter rainfall, and the age of the vintage), the various predictors would serve as independent variables. Alternative names are explanatory variable, predictor variable, and regressor. p-value: When performing a hypothesis test, the p-value gives the probability of data occurrence under the assumption that H0 is true. Small p-values are an indication of rare or unusual data from H0, which in turn provides support that H0 is actually false (and thus support of the alternative hypothesis).

A value of 5 percent signifies that we need data that occurs less than 5 percent of the time from H0 (if H0 were indeed true) for us to doubt H0 and reject it as being true. In practice, this is often assessed by calculating a p-value; p-values less than alpha are indication that H0 is rejected and the alternative supported. t-test or student’s t-test: A test statistic that tests whether the means of two groups are equal, or whether the mean of one group has a specified value. Type I error or α error: This error occurs when the null hypothesis is true, but it is rejected. In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level α. So, the probability of incorrectly rejecting a true null hypothesis equals α and thus this error is also called α error. a. For the descriptions in this section, we’ve referred to the pertinent definitions in Wikipedia, Heinz Kohler’s Statistics for Business and Economics (2002), and Dell’s Analytics Cheat Sheet (2012, Tables 6 and 8)

This response would not only have been reassuring to the wife but persuasive to her husband as well. In statistical hypothesis testing, the probability of 0.003 calculated above is called the p-value—the probability of obtaining a test statistic (e.g., Z-value of 2.75 in this case) at least as extreme as the one that was actually observed (a pregnancy that would last at least ten months and five days), assuming that the null hypothesis is true. In this example the null hypothesis (H0) is “This baby is my husband’s.” In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level. In this case a p-value of 0.003 would result in the rejection of the null hypothesis even at the 1 percent significance level—typically the lowest level anyone uses. Normally, then, we reject the null hypothesis that this baby is the San Diego Reader’s husband’s baby.
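The 0.003 figure is just the upper tail of the standard normal distribution beyond Z = 2.75; a quick check (not from the book, assuming scipy is available):

    from scipy import stats

    print(stats.norm.sf(2.75))  # about 0.003, the p-value discussed above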


pages: 451 words: 103,606

Machine Learning for Hackers by Drew Conway, John Myles White

call centre, centre right, correlation does not imply causation, Debian, Erdős number, Nate Silver, natural language processing, Netflix Prize, p-value, pattern recognition, Paul Erdős, recommendation engine, social graph, SpamAssassin, statistical model, text mining, the scientific method, traveling salesman

In practice, we personally tend to ignore this value because we think it’s slightly ad hoc, but there are many people who are very fond of it. Finally, the last piece of information you’ll see is the “F-statistic.” This is a measure of the improvement of your model over using just the mean to make predictions. It’s an alternative to “R-squared” that allows one to calculate a “p-value.” Because we think that a “p-value” is usually deceptive, we encourage you to not put too much faith in the F-statistic. “p-values” have their uses if you completely understand the mechanism used to calculate them, but otherwise they can provide a false sense of security that will make you forget that the gold standard of model performance is predictive power on data that wasn’t used to fit your model, rather than the performance of your model on the data that it was fit to.

The traditional cutoff for being confident that an input is related to your output is to find a coefficient that's at least two standard errors away from zero. The next piece of information that summary spits out is the significance codes for the coefficients. These are asterisks shown along the side that are meant to indicate how large the "t value" is or how small the p-value is. Specifically, the asterisks tell you whether you've passed a series of arbitrary cutoffs at which the p-value is less than 0.1, less than 0.05, less than 0.01, or less than 0.001. Please don't worry about these values; they're disturbingly popular in academia, but are really holdovers from a time when statistical analysis was done by hand rather than on a computer. There is literally no interesting content in these numbers that's not found in asking how many standard errors your estimate is away from 0.

That function is conveniently called summary: summary(lm.fit) #Call: #lm(formula = log(PageViews) ~ log(UniqueVisitors), data = top.1000.sites) # #Residuals: # Min 1Q Median 3Q Max #-2.1825 -0.7986 -0.0741 0.6467 5.1549 # #Coefficients: # Estimate Std. Error t value Pr(>|t|) #(Intercept) -2.83441 0.75201 -3.769 0.000173 *** #log(UniqueVisitors) 1.33628 0.04568 29.251 < 2e-16 *** #--- #Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # #Residual standard error: 1.084 on 998 degrees of freedom #Multiple R-squared: 0.4616, Adjusted R-squared: 0.4611 #F-statistic: 855.6 on 1 and 998 DF, p-value: < 2.2e-16 The first thing that summary tells us is the call we made to lm. This isn’t very useful when you’re working at the console, but it can be helpful when you’re working in larger scripts that make multiple calls to lm. When this is the case, this information helps you keep all of the models organized so you have a clear understanding of what data and variables went into each model. The next thing that summary tells us are the quantiles of the residuals that you would compute if you called quantile(residuals(lm.fit)).
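As a rough illustration (not from the book), each t value and p-value in the coefficients table can be reproduced from the estimate, its standard error, and the residual degrees of freedom; the numbers below are copied from the intercept row of the output above.

    from scipy import stats

    estimate, std_error, df = -2.83441, 0.75201, 998   # intercept row of the summary above
    t_value = estimate / std_error                     # about -3.769
    p_value = 2 * stats.t.sf(abs(t_value), df)         # two-sided p-value, about 0.000173
    print(round(t_value, 3), round(p_value, 6))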


pages: 764 words: 261,694

The Elements of Statistical Learning (Springer Series in Statistics) by Trevor Hastie, Robert Tibshirani, Jerome Friedman

Bayesian statistics, bioinformatics, computer age, conceptual framework, correlation coefficient, G4S, greed is good, linear programming, p-value, pattern recognition, random walk, selection bias, speech recognition, statistical model, stochastic process, The Wisdom of Crowds

Define L = max{ j : p(j) < α · j/M } (18.44). 3. Reject all hypotheses H0j for which pj ≤ p(L), the BH rejection threshold. [FIGURE 18.19: Microarray example continued. Shown is a plot of the ordered p-values p(j) and the line 0.15 · (j/12,625), for the Benjamini–Hochberg method. The largest j for which the p-value p(j) falls below the line gives the BH threshold. Here this occurs at j = 11, indicated by the vertical line. Thus the BH method calls significant the 11 genes (in red) with smallest p-values.] Algorithm 18.3 The Plug-in Estimate of the False Discovery Rate. 1.

The Benjamini–Hochberg (BH) procedure is based on p-values; these can be obtained from an asymptotic approximation to the test statistic (e.g., Gaussian), or a permutation distribution, as is done here. If the hypotheses are independent, Benjamini and Hochberg (1995) show that regardless of how many null hypotheses are true and regardless of the distribution of the p-values when the null hypothesis is false, this procedure has the property FDR ≤ (M0/M) · α ≤ α (18.45). For illustration we chose α = 0.15. Figure 18.19 shows a plot of the ordered p-values p(j), and the line with slope 0.15/12625. Algorithm 18.2 Benjamini–Hochberg (BH) Method. 1. Fix the false discovery rate α and let p(1) ≤ p(2) ≤ · · · ≤ p(M) denote the ordered p-values. 2. Define L = max{ j : p(j) < α · j/M }.
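A minimal sketch of the BH step-up rule just described (not from the book; assumes numpy, and uses the α = 0.15 from the example above as the default):

    import numpy as np

    def benjamini_hochberg(pvalues, alpha=0.15):
        # Returns a boolean mask of hypotheses rejected by the BH step-up rule.
        p = np.asarray(pvalues)
        M = len(p)
        order = np.argsort(p)                        # ranks the p-values, smallest first
        thresholds = alpha * np.arange(1, M + 1) / M
        below = p[order] <= thresholds               # ordered p-values under the line alpha * j / M
        reject = np.zeros(M, dtype=bool)
        if below.any():
            L = np.max(np.nonzero(below)[0])         # largest rank j whose p_(j) falls below the line
            reject[order[: L + 1]] = True            # reject everything up to and including that rank
        return reject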

Of course, (58 choose 14) is a large number (around 10^13) and so we can't enumerate all of the possible permutations. Instead we take a random sample of the possible permutations; here we took a random sample of K = 1000 permutations. To exploit the fact that the genes are similar (e.g., measured on the same scale), we can instead pool the results for all genes in computing the p-values: pj = (1/(MK)) Σ_{j′=1}^{M} Σ_{k=1}^{K} I(|t^k_{j′}| > |tj|) (18.41). This also gives more granular p-values than does (18.40), since there are many more values in the pooled null distribution than there are in each individual null distribution. Using this set of p-values, we would like to test the hypotheses: H0j = treatment has no effect on gene j versus H1j = treatment has an effect on gene j (18.42) for all j = 1, 2, . . . , M. We reject H0j at level α if pj < α. This test has type-I error equal to α; that is, the probability of falsely rejecting H0j is α.
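A rough sketch of pooled permutation p-values in the spirit of (18.41) (not from the book; it assumes an M x N expression matrix X and a length-N array of 0/1 group labels, both hypothetical names):

    import numpy as np

    rng = np.random.default_rng(0)

    def pooled_permutation_pvalues(X, labels, K=1000):
        X = np.asarray(X)
        labels = np.asarray(labels)

        def tstats(lab):
            a, b = X[:, lab == 0], X[:, lab == 1]
            num = a.mean(axis=1) - b.mean(axis=1)
            den = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1] + b.var(axis=1, ddof=1) / b.shape[1])
            return num / den                          # one two-sample t-statistic per gene

        t_obs = tstats(labels)                        # observed t_j for each gene
        null = np.array([tstats(rng.permutation(labels)) for _ in range(K)])
        pooled = np.abs(null).ravel()                 # all M*K permuted |t| values, pooled across genes
        return np.array([(pooled > abs(t)).mean() for t in t_obs])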


Super Thinking: The Big Book of Mental Models by Gabriel Weinberg, Lauren McCann

affirmative action, Affordable Care Act / Obamacare, Airbnb, Albert Einstein, anti-pattern, Anton Chekhov, autonomous vehicles, bank run, barriers to entry, Bayesian statistics, Bernie Madoff, Bernie Sanders, Black Swan, Broken windows theory, business process, butterfly effect, Cal Newport, Clayton Christensen, cognitive dissonance, commoditize, correlation does not imply causation, crowdsourcing, Daniel Kahneman / Amos Tversky, David Attenborough, delayed gratification, deliberate practice, discounted cash flows, disruptive innovation, Donald Trump, Douglas Hofstadter, Edward Lorenz: Chaos theory, Edward Snowden, effective altruism, Elon Musk, en.wikipedia.org, experimental subject, fear of failure, feminist movement, Filter Bubble, framing effect, friendly fire, fundamental attribution error, Gödel, Escher, Bach, hindsight bias, housing crisis, Ignaz Semmelweis: hand washing, illegal immigration, income inequality, information asymmetry, Isaac Newton, Jeff Bezos, John Nash: game theory, lateral thinking, loss aversion, Louis Pasteur, Lyft, mail merge, Mark Zuckerberg, meta analysis, meta-analysis, Metcalfe’s law, Milgram experiment, minimum viable product, moral hazard, mutually assured destruction, Nash equilibrium, Network effects, nuclear winter, offshore financial centre, p-value, Parkinson's law, Paul Graham, peak oil, Peter Thiel, phenotype, Pierre-Simon Laplace, placebo effect, Potemkin village, prediction markets, premature optimization, price anchoring, principal–agent problem, publication bias, recommendation engine, remote working, replication crisis, Richard Feynman, Richard Feynman: Challenger O-ring, Richard Thaler, ride hailing / ride sharing, Robert Metcalfe, Ronald Coase, Ronald Reagan, school choice, Schrödinger's Cat, selection bias, Shai Danziger, side project, Silicon Valley, Silicon Valley startup, speech recognition, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, survivorship bias, The Present Situation in Quantum Mechanics, the scientific method, The Wisdom of Crowds, Thomas Kuhn: the structure of scientific revolutions, transaction costs, uber lyft, ultimatum game, uranium enrichment, urban planning, Vilfredo Pareto, wikimedia commons

In fact, they would occur with less than a 5 percent chance—the false positive rate initially set by the developers. The final measure commonly used to declare whether a result is statistically significant is called the p-value, which is formally defined as the probability of obtaining a result equal to or more extreme than what was observed, assuming the null hypothesis was true. Essentially, if the p-value is smaller than the selected false positive rate (5 percent), then you would say that the result is statistically significant. P-values are commonly used in study reports to communicate such significance. For example, a p-value of 0.01 would mean that a difference equal to or larger than the one observed would happen only 1 percent of the time if the app had no effect. This value corresponds to a value on the figure in the extreme tail of the left bell curve and close to the middle of the right bell curve.

That’s down from 80 percent originally and means that two-thirds of the time they’d get a false negative, failing to detect the 15 percent difference. As a result, ideally any experiment should be designed to detect the smallest meaningful difference. One final note on p-values and statistical significance: Most statisticians caution against overreliance on p-values in interpreting the results of a study. Failing to find a significant result (a sufficiently small p-value) is not the same as having confidence that there is no effect. The absence of evidence is not the evidence of absence. Similarly, even though the study may have achieved a low p-value, it might not be a replicable result, which we will explore in the final section. Statistical significance should not be confused with scientific, human, or economic significance. Even the most minuscule effects can be detected as statistically significant if the sample size is large enough.

For the app study, while the customers want to know that they have better chances of falling asleep with the app than without, they also want to know how much better. The developers might even want to increase the sample size in order to be able to guarantee a certain margin of error in their estimates. Further, the American Statistical Association stressed in The American Statistician in 2016 that “scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.” Focusing too much on the p-value encourages black-and-white thinking and compresses the wealth of information that comes out of a study into just one number. Such a singular focus can make you overlook possible suboptimal choices in a study’s design (e.g., sample size) or biases that could have crept in (e.g., selection bias). WILL IT REPLICATE? By now you should know that some experimental results are just flukes.


Scikit-Learn Cookbook by Trent Hauck

bioinformatics, computer vision, information retrieval, p-value

First, we need to import the feature_selection module: >>> from sklearn import feature_selection >>> f, p = feature_selection.f_regression(X, y) Here, f is the f score associated with each linear model fit with just one of the features. We can then compare these features and, based on this comparison, cull features. p is the p value associated with that f value. In statistics, the p value is the probability, under the null hypothesis, of obtaining a value at least as extreme as the observed value of the test statistic. Here, the f value is the test statistic: >>> f[:5] array([ 1.06271357e-03, 2.91136869e+00, 1.01886922e+00, 2.22483130e+00, 4.67624756e-01]) >>> p[:5] array([ 0.97400066, 0.08826831, 0.31303204, 0.1361235, 0.49424067]) As we can see, many of the p values are quite large. We would rather the p values be quite small. So, we can grab NumPy out of our tool box and choose all the p values less than .05. These will be the features we'll use for the analysis: >>> import numpy as np >>> idx = np.arange(0, X.shape[1]) >>> features_to_keep = idx[p < .05] >>> len(features_to_keep) 501 As you can see, we're actually keeping a relatively large number of features.

Let's look at a smaller problem and visualize how feature selection will eliminate certain features. We'll use the same scoring function from the first example, but just 20 features: >>> X, y = datasets.make_regression(10000, 20) >>> f, p = feature_selection.f_regression(X, y) Now let's plot the p values of the features, so we can see which features will be eliminated and which will be kept: >>> from matplotlib import pyplot as plt >>> f, ax = plt.subplots(figsize=(7, 5)) >>> ax.bar(np.arange(20), p, color='k') >>> ax.set_title("Feature p values") The output will be as follows: As we can see, many of the features won't be kept, but several will be. Feature selection on L1 norms We're going to work with some ideas similar to those we saw in the recipe on Lasso Regression. In that recipe, we looked at the number of features that had zero coefficients.

Depending on the context of the model, we can tighten this p value cutoff. This will lessen the number of features kept. Another option is using the VarianceThreshold object. We've learned a bit about it, but it's important to understand that our ability to fit models is largely based on the variance created by the features. If there is no variance, then our features cannot describe the variation in the dependent variable. A nice feature of this, as per the documentation, is that because it does not use the outcome variable, it can be used in unsupervised cases. We will need to set the threshold at which we eliminate features.


pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy by Sharon Bertsch McGrayne

Bayesian statistics, bioinformatics, British Empire, Claude Shannon: information theory, Daniel Kahneman / Amos Tversky, double helix, Edmond Halley, Fellow of the Royal Society, full text search, Henri Poincaré, Isaac Newton, Johannes Kepler, John Markoff, John Nash: game theory, John von Neumann, linear programming, longitudinal study, meta analysis, meta-analysis, Nate Silver, p-value, Pierre-Simon Laplace, placebo effect, prediction markets, RAND corporation, recommendation engine, Renaissance Technologies, Richard Feynman, Richard Feynman: Challenger O-ring, Robert Mercer, Ronald Reagan, speech recognition, statistical model, stochastic process, Thomas Bayes, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, Turing machine, Turing test, uranium enrichment, Yom Kippur War

Newton, as Jeffreys pointed out, derived his law of gravity 100 years before Laplace proved it by discovering Jupiter’s and Saturn’s 877-year cycle: “There has not been a single date in the history of the law of gravitation when a modern significance test would not have rejected all laws [about gravitation] and left us with no law.”50 Bayes, on the other hand, “makes it possible to modify a law that has stood criticism for centuries without the need to suppose that its originator and his followers were useless blunderers.”51 Jeffreys concluded that p-values fundamentally distorted science. Frequentists, he complained, “appear to regard observations as a basis for possibly rejecting hypotheses, but in no case for supporting them.”52 But odds are that at least some of the hypotheses Fisher rejected were worth investigating or were actually true. A frequentist who tests a precise hypothesis and obtains a p-value of .04, for example, can consider that significant evidence against the hypothesis. But Bayesians say that even with a .01 p-value (which many frequentists would see as extremely strong evidence against a hypothesis) the odds in its favor are still 1 to 9 or 10—“not earth-shaking,” says Jim Berger, a Bayesian theorist at Duke University. P-values still irritate Bayesians. Steven N. Goodman, a distinguished Bayesian biostatistician at Johns Hopkins Medical School, complained in 1999, “The p-value is almost nothing sensible you can think of.

As the statistician Dennis Lindley wrote, Jeffreys “would admit a probability for the existence of the greenhouse effect, whereas most [frequentist] statisticians would not and would confine their probabilities to the data on CO2, ozone, heights of the oceans, etc.”49 Jeffreys was particularly annoyed by Fisher’s measures of uncertainty, his “p-values” and significance levels. The p-value was a probability statement about data, given the hypothesis under consideration. Fisher had developed them for dealing with masses of agricultural data; he needed some way to determine which should be trashed, which filed away, and which followed up on immediately. Comparing two hypotheses, he could reject the chaff and save the wheat. Technically, p-values let laboratory workers state that their experimental outcome offered statistically significant evidence against a hypothesis if the outcome (or a more extreme outcome) had only a small probability (under the hypothesis) of having occurred by chance alone.

Jahn reported that the random event generator produced 18,471 more examples (0.018%) of human influence on his sensitive microelectronic equipment than could be expected with chance alone. Even with a p-value as small as 0.00015, the frequentist would reject the hypothesis (and conclude in favor of psychokinetic powers) while the same evidence convinces a Bayesian that the hypothesis against spiritualism is almost certainly true. Six years later, Jimmie Savage, Harold Lindman, and Ward Edwards at the University of Michigan showed that results using Bayes and the frequentist’s p-values could differ by significant amounts even with everyday-sized data samples; for instance, a Bayesian with any sensible prior and a sample of only 20 would get an answer ten times or more larger than the p-value. Lindley ran afoul of Fisher’s temper when he reviewed Fisher’s third book and found “what I thought was a very basic, serious error in it: Namely, that [Fisher’s] fiducial probability doesn’t obey the rules of probability.


pages: 523 words: 112,185

Doing Data Science: Straight Talk From the Frontline by Cathy O'Neil, Rachel Schutt

Amazon Mechanical Turk, augmented reality, Augustin-Louis Cauchy, barriers to entry, Bayesian statistics, bioinformatics, computer vision, correlation does not imply causation, crowdsourcing, distributed generation, Edward Snowden, Emanuel Derman, fault tolerance, Filter Bubble, finite state, Firefox, game design, Google Glasses, index card, information retrieval, iterative process, John Harrison: Longitude, Khan Academy, Kickstarter, Mars Rover, Nate Silver, natural language processing, Netflix Prize, p-value, pattern recognition, performance metric, personalized medicine, pull request, recommendation engine, rent-seeking, selection bias, Silicon Valley, speech recognition, statistical model, stochastic process, text mining, the scientific method, The Wisdom of Crowds, Watson beat the top human players on Jeopardy!, X Prize

This can be interpreted as the proportion of variance explained by our model. Note that mean squared error is in there getting divided by total error, which is the proportion of variance unexplained by our model, and we calculate 1 minus that. p-values Looking at the output, the estimated βs are in the column marked Estimate. To see the p-values, look at the column marked Pr(>|t|). We can interpret the values in this column as follows: We are making a null hypothesis that the βs are zero. For any given β, the p-value captures the probability of observing the data that we observed, and obtaining the test-statistic that we obtained under the null hypothesis. This means that if we have a low p-value, it is highly unlikely to observe such a test-statistic under the null hypothesis, and the coefficient is highly likely to be nonzero and therefore significant. Cross-validation Another approach to evaluating the model is as follows.

Different selection criteria might produce wildly different models, and it's part of your job to decide what to optimize for and why: R-squared Given by the formula 1 minus (residual sum of squares divided by total sum of squares), it can be interpreted as the proportion of variance explained by your model. p-values In the context of regression where you're trying to estimate coefficients (the βs), to think in terms of p-values, you make an assumption of there being a null hypothesis that the βs are zero. For any given β, the p-value captures the probability of observing the data that you observed, and obtaining the test-statistic (in this case the estimated β) that you got under the null hypothesis. Specifically, if you have a low p-value, it is highly unlikely that you would observe such a test-statistic if the null hypothesis actually held. This translates to meaning that (with some confidence) the coefficient is highly likely to be non-zero.

You have a couple of values in the output of the R function that help you get at the issue of how confident you can be in the estimates: p-values and R-squared. Going back to our model in R, if we type in summary(model), which is the name we gave to this model, the output would be: summary (model) Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -121.17 -52.63 -9.72 41.54 356.27 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -32.083 16.623 -1.93 0.0565 . x 45.918 2.141 21.45 <2e-16 *** Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 77.47 on 98 degrees of freedom Multiple R-squared: 0.8244, Adjusted R-squared: 0.8226 F-statistic: 460 on 1 and 98 DF, p-value: < 2.2e-16 R-squared is 1 minus (residual sum of squares divided by total sum of squares). This can be interpreted as the proportion of variance explained by our model. Note that mean squared error is in there getting divided by total error, which is the proportion of variance unexplained by our model, and we calculate 1 minus that.
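A small sketch of the R-squared calculation described above (not from the book; assumes numpy arrays y of observed values and y_hat of fitted values, both hypothetical names):

    import numpy as np

    def r_squared(y, y_hat):
        y, y_hat = np.asarray(y), np.asarray(y_hat)
        rss = np.sum((y - y_hat) ** 2)         # unexplained (residual) sum of squares
        tss = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
        return 1 - rss / tss                   # proportion of variance explained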


pages: 923 words: 163,556

Advanced Stochastic Models, Risk Assessment, and Portfolio Optimization: The Ideal Risk, Uncertainty, and Performance Measures by Frank J. Fabozzi

algorithmic trading, Benoit Mandelbrot, capital asset pricing model, collateralized debt obligation, correlation coefficient, distributed generation, diversified portfolio, fixed income, index fund, Louis Bachelier, Myron Scholes, p-value, quantitative trading / quantitative finance, random walk, risk-adjusted returns, short selling, stochastic volatility, Thomas Bayes, transaction costs, value at risk

., the parameter is in Θ0), by using the subscript θ0 with the probability measure. The p-Value Suppose we had drawn some sample x and computed the value t(x) of the statistic from it. It might be of interest to find out how significant this test result is or, in other words, at which significance level this value t(x) would still lead to decision d0 (i.e., no rejection of the null hypothesis), while any value greater than t(x) would result in its rejection (i.e., d1). This concept brings us to the next definition. p-value: Suppose we have a sample realization given by x = (x1, x2, …, xn). Furthermore, let δ(X) be any test with test statistic t(X) such that the test statistic evaluated at x, t(x), is the value of the acceptance region ΔA closest to the rejection region ΔC. The p-value determines the probability, under the null hypothesis, that in any trial X the test statistic t(X) assumes a value in the rejection region ΔC; that is, p = Pθ0(t(X) ∈ ΔC) = Pθ0(δ(X) = d1). We can interpret the p-value as follows.

The p-value determines the probability, under the null hypothesis, that in any trial X the test statistic t(X) assumes a value in the rejection region ΔC; that is, p = Pθ0(t(X) ∈ ΔC) = Pθ0(δ(X) = d1). We can interpret the p-value as follows. Suppose we obtained a sample outcome x such that the test statistic assumed the corresponding value t(x). Now, we want to know what is the probability, given that the null hypothesis holds, that the test statistic might become even more extreme than t(x). This probability is equal to the p-value. If t(x) is a value pretty close to the median of the distribution of t(X), then the chance of obtaining a more extreme value, which refutes the null hypothesis more strongly, might be fairly feasible. Then, the p-value will be large. However, if, instead, the value t(x) is so extreme that the chances will be minimal under the null hypothesis that, in some other test run we obtain a value t(X) even more in favor of the alternative hypothesis, this will correspond to a very low p-value.

However, if, instead, the value t(x) is so extreme that the chances will be minimal under the null hypothesis that, in some other test run we obtain a value t(X) even more in favor of the alternative hypothesis, this will correspond to a very low p-value. If p is less than some given significance level α, we reject the null hypothesis and we say that the test result is significant. We demonstrate the meaning of the p-value in Figure 19.4. The horizontal axis provides the state space of possible values for the statistic t(X). The figure displays the probability, given that the null hypothesis holds, of this t(X) assuming a value greater than c, for each c of the state space, and in particular also at t(x) (i.e., the statistic evaluated at the observation x). We can see that, by definition, the value t(x) is the boundary between the acceptance region and the critical region, with t(x) itself belonging to the acceptance region.
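
[Editor's note] A minimal sketch of the definition above — the p-value as the probability, under the null hypothesis, that the test statistic t(X) lands at least as deep in the rejection region as the observed t(x) — using a one-sided test on a normal mean; the data and the choice of test are illustrative assumptions, not the book's example:

import numpy as np
from scipy import stats

# Toy setup: under the null hypothesis the observations have mean 0 and known sd 1.
rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=50)          # the sample is actually drawn with mean 0.3

t_x = np.sqrt(len(x)) * x.mean()           # test statistic; distributed N(0, 1) under the null
p_value = stats.norm.sf(t_x)               # P(statistic at least as extreme as t_x | null), one-sided

print(t_x, p_value)                        # reject the null at level alpha whenever p_value < alpha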


pages: 321 words: 97,661

How to Read a Paper: The Basics of Evidence-Based Medicine by Trisha Greenhalgh

call centre, complexity theory, conceptual framework, correlation coefficient, correlation does not imply causation, deskilling, knowledge worker, longitudinal study, meta analysis, meta-analysis, microbiome, New Journalism, p-value, personalized medicine, placebo effect, publication bias, randomized controlled trial, selection bias, the scientific method

In order to demonstrate that A has caused B (rather than B causing A, or A and B both being caused by C), you need more than a correlation coefficient. Box 5.1 gives some criteria, originally developed by Sir Austin Bradford Hill [14], which should be met before assuming causality. Probability and confidence Have ‘p-values’ been calculated and interpreted appropriately? One of the first values a student of statistics learns to calculate is the p-value—that is the probability that any particular outcome would have arisen by chance. Standard scientific practice, which is essentially arbitrary, usually deems a p-value of less than one in twenty (expressed as p < 0.05, and equivalent to a betting odds of twenty to one) as ‘statistically significant’, and a p-value of less than one in a hundred (p < 0.01) as ‘statistically highly significant’. By definition, then, one chance association in twenty (this must be around one major published result per journal issue) will appear to be significant when it isn't, and one in a hundred will appear highly significant when it is really what my children call a ‘fluke’.

A result in the statistically significant range (p < 0.05 or p < 0.01 depending on what you have chosen as the cutoff) suggests that the authors should reject the null hypothesis (i.e. the hypothesis that there is no real difference between two groups). But as I have argued earlier (see section ‘Were preliminary statistical questions addressed?’), a p-value in the non-significant range tells you that either there is no difference between the groups or there were too few participants to demonstrate such a difference if it existed. It does not tell you which. The p-value has a further limitation. Guyatt and colleagues conclude thus, in the first article of their ‘Basic Statistics for Clinicians’ series on hypothesis testing using p-values. Why use a single cut-off point [for statistical significance] when the choice of such a point is arbitrary? Why make the question of whether a treatment is effective a dichotomy (a yes-no decision) when it would be more appropriate to view it as a continuum?

If they are not, a paired t or other paired test should be used instead. 3. Only a single pair of measurements should be made on each participant, as the measurements made on successive participants need to be statistically independent of each other if we are to end up with unbiased estimates of the population parameters of interest. 4. Every r-value should be accompanied by a p-value, which expresses how likely an association of this strength would be to have arisen by chance (see section ‘Have ‘p-values’ been calculated and interpreted appropriately?’), or a confidence interval, which expresses the range within which the ‘true’ R-value is likely to lie (see section ‘Have confidence intervals been calculated, and do the authors' conclusions reflect them?’). (Note that lower case ‘r’ represents the correlation coefficient of the sample, whereas upper case ‘R’ represents the correlation coefficient of the entire population.)
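
[Editor's note] The last point above — every r-value should travel with a p-value or a confidence interval — is one line in most statistics libraries; a minimal sketch with scipy's pearsonr on made-up data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=40)
y = 0.5 * x + rng.normal(size=40)          # two weakly related variables

r, p = stats.pearsonr(x, y)                # sample correlation r and its p-value
print(r, p)                                # p: chance of an association this strong arising by chance alone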


pages: 339 words: 112,979

Unweaving the Rainbow by Richard Dawkins

Any sufficiently advanced technology is indistinguishable from magic, Arthur Eddington, complexity theory, correlation coefficient, David Attenborough, discovery of DNA, double helix, Douglas Engelbart, Douglas Engelbart, I think there is a world market for maybe five computers, Isaac Newton, Jaron Lanier, Mahatma Gandhi, music of the spheres, Necker cube, p-value, phenotype, Ralph Waldo Emerson, Richard Feynman, Ronald Reagan, Solar eclipse in 1919, Steven Pinker, Zipf's Law

When we say that an effect is statistically significant, we must always specify a so-called p-value. This is the probability that a purely random process would have generated a result at least as impressive as the actual result. A p-value of 2 in 10,000 is pretty impressive, but it is still possible that there is no genuine pattern there. The beauty of doing a proper statistical test is that we know how probable it is that there is no genuine pattern there. Conventionally, scientists allow themselves to be swayed by p-values of 1 in 100, or even as high as 1 in 20: far less impressive than 2 in 10,000. What p-value you accept depends upon how important the result is, and upon what decisions might follow from it. If all you are trying to decide is whether it is worth repeating the experiment with a larger sample, a p-value of 0.05, or 1 in 20, is quite acceptable.

Even though there is a 1 in 20 chance that your interesting result would have happened anyway by chance, not much is at stake: the error is not a costly one. If the decision is a life and death matter, as in some medical research, a much lower p-value than 1 in 20 should be sought. The same is true of experiments that purport to show highly controversial results, such as telepathy or 'paranormal' effects. As we briefly saw in connection with DNA fingerprinting, statisticians distinguish false positive from false negative errors, sometimes called type 1 and type 2 errors respectively. A type 2 error, or false negative, is a failure to detect an effect when there really is one. A type 1 error, or false positive, is the opposite: concluding that there really is something going on when actually there is nothing but randomness. The p-value is the measure of the probability that you have made a type 1 error. Statistical judgement means steering a middle course between the two kinds of error.

Birds may be programmed to learn to adjust their policy as a result of their statistical experience. Whether they learn or not, successfully hunting animals must usually behave as if they are good statisticians. (I hope it is not necessary, by the way, to plod through the usual disclaimer: No, no, the birds aren't consciously working it out with calculator and probability tables. They are behaving as if they were calculating p-values. They are no more aware of what a p-value means than you are aware of the equation for a parabolic trajectory when you catch a cricket ball or baseball in the outfield.) Angler fish take advantage of the gullibility of little fish such as gobies. But that is an unfairly value-laden way of putting it. It would be better not to speak of gullibility and say that they exploit the inevitable difficulty the little fish have in steering between type 1 and type 2 errors.


pages: 571 words: 105,054

Advances in Financial Machine Learning by Marcos Lopez de Prado

algorithmic trading, Amazon Web Services, asset allocation, backtesting, bioinformatics, Brownian motion, business process, Claude Shannon: information theory, cloud computing, complexity theory, correlation coefficient, correlation does not imply causation, diversification, diversified portfolio, en.wikipedia.org, fixed income, Flash crash, G4S, implied volatility, information asymmetry, latency arbitrage, margin call, market fragmentation, market microstructure, martingale, NP-complete, P = NP, p-value, paper trading, pattern recognition, performance metric, profit maximization, quantitative trading / quantitative finance, RAND corporation, random walk, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, Silicon Valley, smart cities, smart meter, statistical arbitrage, statistical model, stochastic process, survivorship bias, transaction costs, traveling salesman

Third, determine the minimum d such that the p-value of the ADF statistic on FFD(d) falls below 5%. Fourth, use the FFD(d) series as your predictive feature. Exercises Generate a time series from an IID Gaussian random process. This is a memory-less, stationary series: Compute the ADF statistic on this series. What is the p-value? Compute the cumulative sum of the observations. This is a non-stationary series without memory. What is the order of integration of this cumulative series? Compute the ADF statistic on this series. What is the p-value? Differentiate the series twice. What is the p-value of this over-differentiated series? Generate a time series that follows a sinusoidal function. This is a stationary series with memory. Compute the ADF statistic on this series. What is the p-value? Shift every observation by the same positive value.

Shift every observation by the same positive value. Compute the cumulative sum of the observations. This is a non-stationary series with memory. Compute the ADF statistic on this series. What is the p-value? Apply an expanding window fracdiff, with τ = 1E − 2. For what minimum d value do you get a p-value below 5%? Apply FFD, with τ = 1E − 5. For what minimum d value do you get a p-value below 5%? Take the series from exercise 2.b: Fit the series to a sine function. What is the R-squared? Apply FFD(d = 1). Fit the series to a sine function. What is the R-squared? What value of d maximizes the R-squared of a sinusoidal fit on FFD(d). Why? Take the dollar bar series on E-mini S&P 500 futures. Using the code in Snippet 5.3, for some d ∈ [0, 2], compute fracDiff_FFD(fracDiff_FFD(series,d),-d).
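
[Editor's note] A minimal sketch of the first two exercises — an IID Gaussian series and its cumulative sum — using the ADF test from statsmodels (assumed available); the seed and series length are arbitrary choices, not the book's:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
white_noise = rng.normal(size=1000)        # IID Gaussian: stationary and memory-less
random_walk = white_noise.cumsum()         # its cumulative sum: non-stationary

stat, p_value, *rest = adfuller(white_noise)
print(p_value)                             # tiny p-value: the unit-root null is rejected

stat, p_value, *rest = adfuller(random_walk)
print(p_value)                             # large p-value: cannot reject non-stationarity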

When your MDI, MDA, or SFI analysis selects as most important (using label information) the same features that PCA chose as principal (ignoring label information), this constitutes confirmatory evidence that the pattern identified by the ML algorithm is not entirely overfit. If the features were entirely random, the PCA ranking would have no correspondence with the feature importance ranking. Figure 8.1 displays the scatter plot of eigenvalues associated with an eigenvector (x-axis) paired with MDI of the feature associated with an eigenvector (y-axis). The Pearson correlation is 0.8491 (p-value below 1E-150), evidencing that PCA identified informative features and ranked them correctly without overfitting. Figure 8.1 Scatter plot of eigenvalues (x-axis) and MDI levels (y-axis) in log-log scale I find it useful to compute the weighted Kendall's tau between the feature importances and their associated eigenvalues (or equivalently, their inverse PCA rank). The closer this value is to 1, the stronger is the consistency between PCA ranking and feature importance ranking.


pages: 227 words: 62,177

Numbers Rule Your World: The Hidden Influence of Probability and Statistics on Everything You Do by Kaiser Fung

American Society of Civil Engineers: Report Card, Andrew Wiles, Bernie Madoff, Black Swan, business cycle, call centre, correlation does not imply causation, cross-subsidies, Daniel Kahneman / Amos Tversky, edge city, Emanuel Derman, facts on the ground, fixed income, Gary Taubes, John Snow's cholera map, moral hazard, p-value, pattern recognition, profit motive, Report Card for America’s Infrastructure, statistical model, the scientific method, traveling salesman

The minute probability he computed, one in a quindecillion, is technically known as the p-value and signifies how unlikely the situation was. The smaller the p-value, the more impossible the situation, and the greater its power to refute the no-fraud scenario. Then, statisticians say, the result has statistical significance. Note that this is a matter of magnitude, rather than direction. If the p-value were 20 percent, then there would be a one-in-five chance of seeing at least 200 insider wins in seven years despite absence of fraud, and then Rosenthal would not have sufficient evidence to overturn the fair-lottery hypothesis. Statisticians set a minimum acceptable standard of evidence, which is a p-value of 1 percent or 5 percent. This practice originated with Sir Ronald Fisher, one of the giants of statistical thinking. For a more formal treatment of p-values and statistical significance, look up the topics of hypothesis testing and confidence intervals in a statistics textbook.

. ~###~ In Minnesota, an ambitious experiment was organized to measure how turning off ramp meters on the highway entrances would affect the state of congestion. From the viewpoint of statistical testing, the doubters led by Senator Day wanted to know, if ramp metering was useless, what was the likelihood that the average trip time would rise by 22 percent (the improvement claimed by engineers who run the program) after the meters were shut off? Because this likelihood, or p-value, was small, the consultants who analyzed the experiment concluded that the favorite tool of the traffic engineers was indeed effective at reducing congestion. Since statisticians do not believe in miracles, they avoided the alternative path, which would assert that a rare event—rather than the shutting off of ramp meters—could have produced the deterioration in travel time during the experiment.

See also False negatives; False positives confessions elicited by, 118, 120–21, 125–27, 130 countermeasures, 114, 122 examiner characteristics and role, 113–14 the legal system on, 117–18 major problems with, 129–30 in national-security screening, 96–97, 118, 121–24, 127–30, 175–76 PCASS, 118, 121–24, 127–30, 131, 132, 175 popularity of, 116–18 screening vs. targeted investigation, 123–24 Pre–post analysis, 158–59 Predictably Irrational (Ariely), 158 Prediction of rare events, 124 PulseNet, 31, 41 P-value, 179, 180 Quetelet, Adolphe, 2–3, 4 Queuing theory, 157–58 Quindecillion, 137, 144, 177 Racial/minority groups credit scores and, 52, 54 test fairness and, 64, 65, 70, 72–82, 94, 168–70, 180 Ramp metering, 13–15, 16, 19, 20–24, 157, 158–59, 180–81 Randomization, 170 Rauch, Ernst, 87 Red State, Blue State, Rich State, Poor State (Gelman), 168 Reliability, 10, 12, 14, 19 Riddick, Steve, 105 Riis, Bjarne, 103, 105, 110 Risk Management Solutions, 87 Risk pools, 86–87, 89–94, 168, 171 Rodriguez, Alex, 114 Rodriguez, Ivan, 114 Rolfs, Robert, 36 Rooney, J.


pages: 322 words: 107,576

Bad Science by Ben Goldacre

Asperger Syndrome, correlation does not imply causation, experimental subject, hygiene hypothesis, Ignaz Semmelweis: hand washing, John Snow's cholera map, Louis Pasteur, meta analysis, meta-analysis, Nelson Mandela, offshore financial centre, p-value, placebo effect, publication bias, Richard Feynman, risk tolerance, Ronald Reagan, selection bias, selective serotonin reuptake inhibitor (SSRI), the scientific method, urban planning

In generating his obligatory, spurious, Meadowesque figure—which this time was ‘one in 342 million’—the prosecution’s statistician made a simple, rudimentary mathematical error. He combined individual statistical tests by multiplying p-values, the mathematical description of chance, or statistical significance. This bit’s for the hardcore science nerds, and will be edited out by the publisher, but I intend to write it anyway: you do not just multiply p-values together, you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’. If you multiply p-values together, then harmless and probable incidents rapidly appear vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a harmless incident pattern: say p=0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of 0.5 to the power of twenty, which is p < 0.000001, which is extremely, very, highly statistically significant.
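
[Editor's note] The contrast described here — naively multiplying p-values versus combining them properly — can be reproduced with scipy's combine_pvalues, which implements Fisher's method among others; the twenty p-values of 0.5 follow the passage's hospital example:

import numpy as np
from scipy import stats

p_values = np.full(20, 0.5)                # twenty entirely unremarkable results

naive_product = p_values.prod()            # 0.5 ** 20: spuriously "significant"
stat, fisher_p = stats.combine_pvalues(p_values, method='fisher')

print(naive_product)                       # roughly 9.5e-07
print(fisher_p)                            # nowhere near significant, as it should be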

Presented with a small increase like this, you have to think: is it statistically significant? I did the maths, and the answer is yes, it is, in that you get a p-value of less than 0.05. What does ‘statistically significant’ mean? It’s just a way of expressing the likelihood that the result you got was attributable merely to chance. Sometimes you might throw ‘heads’ five times in a row, with a completely normal coin, especially if you kept tossing it for long enough. Imagine a jar of 980 blue marbles, and twenty red ones, all mixed up: every now and then—albeit rarely—picking blindfolded, you might pull out three red ones in a row, just by chance. The standard cut-off point for statistical significance is a p-value of 0.05, which is just another way of saying, ‘If I did this experiment a hundred times, I’d expect a spurious positive result on five occasions, just by chance.’

Will our increase in cocaine use, already down from ‘doubled’ to ‘35.7 per cent’, even survive? No. Because there is a final problem with this data: there is so much of it to choose from. There are dozens of data points in the report: on solvents, cigarettes, ketamine, cannabis, and so on. It is standard practice in research that we only accept a finding as significant if it has a p-value of 0.05 or less. But as we said, a p-value of 0.05 means that for every hundred comparisons you do, five will be positive by chance alone. From this report you could have done dozens of comparisons, and some of them would indeed have shown increases in usage—but by chance alone, and the cocaine figure could be one of those. If you roll a pair of dice often enough, you will get a double six three times in a row on many occasions.


Learn Algorithmic Trading by Sebastien Donadio

active measures, algorithmic trading, automated trading system, backtesting, Bayesian statistics, buy and hold, buy low sell high, cryptocurrency, DevOps, en.wikipedia.org, fixed income, Flash crash, Guido van Rossum, latency arbitrage, locking in a profit, market fundamentalism, market microstructure, martingale, natural language processing, p-value, paper trading, performance metric, prediction markets, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, Sharpe ratio, short selling, sorting algorithm, statistical arbitrage, statistical model, stochastic process, survivorship bias, transaction costs, type inference, WebSocket, zero-sum game

Heatmap will use the list of symbols on the x and y axes. The last argument will mask the p-values higher than 0.98: seaborn.heatmap(pvalues, xticklabels=symbolsIds, yticklabels=symbolsIds, cmap='RdYlGn_r', mask = (pvalues >= 0.98)) This code will return the following map as an output. This map shows the p-values of the cointegration test: If a p-value is lower than 0.02, this means the null hypothesis is rejected. This means that the two series of prices corresponding to two different symbols can be co-integrated. This means that the two symbols will keep the same spread on average. On the heatmap, we observe that the following symbols have p-values lower than 0.02: This screenshot represents the heatmap measuring the cointegration between a pair of symbols. If it is red, this means that the p-value is 1, which means that the null hypothesis is not rejected.
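
[Editor's note] The pvalues matrix passed to seaborn.heatmap above is computed elsewhere in the book's example; a minimal sketch of how such a matrix could be built with the Engle-Granger cointegration test from statsmodels — the prices table and the symbolsIds list are assumed inputs, and the function name is the editor's, not the book's:

import numpy as np
from statsmodels.tsa.stattools import coint

def cointegration_pvalues(prices, symbols):
    # prices: table of close prices with one column per symbol (hypothetical input)
    # Pairwise Engle-Granger cointegration test; cell (i, j) holds the p-value.
    n = len(symbols)
    pvalues = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            _, pvalue, _ = coint(prices[symbols[i]], prices[symbols[j]])
            pvalues[i, j] = pvalues[j, i] = pvalue
    return pvalues

# pvalues = cointegration_pvalues(prices, symbolsIds)  # the matrix then goes to seaborn.heatmap as above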

If we fail to reject the null hypothesis, we can say that the time series is non-stationary: def test_stationarity(timeseries): print('Results of Dickey-Fuller Test:') dftest = adfuller(timeseries[1:], autolag='AIC') dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used']) print (dfoutput) test_stationarity(goog_data['Adj Close']) This test returns a p-value of 0.99. Therefore, the time series is not stationary. Let's have a look at the test: test_stationarity(goog_monthly_return[1:]) This test returns a p-value of less than 0.05. Therefore, we cannot say that the time series is not stationary. We recommend using daily returns when studying financial products. In the example of stationary, we could observe that no transformation is needed. The last step of the time series analysis is to forecast the time series.


The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth

23andMe, affirmative action, algorithmic trading, Alvin Roth, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, Edward Snowden, Elon Musk, Filter Bubble, general-purpose programming language, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative finance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, speech recognition, statistical model, Stephen Hawking, superintelligent machines, telemarketer, Turing machine, two-sided market, Vilfredo Pareto

Your null hypothesis is that this fellow is no better at predicting stock movements than the flip of a coin—on any particular day, the probability that he correctly guesses the directional movement of LYFT is 50 percent. You go on to compute the p-value corresponding to your null hypothesis—the probability that if the null hypothesis were true, you would have observed something as extreme as you did: ten correct predictions in a row. Well, if the sender had only a 50 percent chance of getting the answer right on any given day, then the chance that he would get it right ten days in a row—the p-value—would be only about .0009, the probability of flipping a coin ten times in a row and getting heads each time. This is very small—well below the .05 threshold for p-values that is often taken as the standard for statistical significance in the scientific literature. So, marshaling all of your scientific training, you decide to forget your skepticism and reject the null hypothesis: you decide your interlocutor is actually pretty good at predicting stock movements.
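
[Editor's note] The quoted figure of about .0009 is just the binomial tail probability of ten correct calls out of ten under a fair 50/50 null; a one-line check with scipy:

from scipy import stats

p = stats.binom.sf(9, 10, 0.5)   # P(at least 10 correct calls out of 10 under a fair 50/50 null)
print(p)                         # 0.0009765625, the "about .0009" quoted above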

The leaves of the tree represent the targets of the scam. The black leaves have already received incorrect predictions. The white leaves have received perfect predictions so far. And these pitfalls are not just limited to email scams, hedge funds, and other for-profit ventures. In fact, as we shall see in this chapter, the problems pervade much of modern scientific research as well. Power Poses, Priming, and Pinot Noir If p-values and hedge funds are foreign to you, you have probably at least received an email forwarded from a gullible friend, or seen a post on your social media feeds, proclaiming the newest scientific finding that will change your life forever. Do you want to live longer? Drink more red wine (or maybe less). Eat more chocolate (or maybe less). Seek out pomegranates, green tea, quinoa, açai berries, or the latest superfood.

This is the adaptivity part. Repeatedly performing the same experiment, or repeatedly running different statistical tests on the same dataset, but then only reporting the most interesting results is known as p-hacking. It is a technique that scientists can use (deliberately or unconsciously) to try to get their results to appear more significant (remember from the beginning of the chapter that p-values are a commonly used measure of statistical significance). It isn’t a statistically valid practice, but it is incentivized by the structure of modern scientific publishing. This is because not all scientific journals are created equal: like most other things in life, some are viewed as conferring a higher degree of status than others, and researchers want to publish in these better journals. Because they are more prestigious, papers in these journals will reflect better on the researcher when it comes time to get a job or to be promoted.


pages: 62 words: 14,996

SciPy and NumPy by Eli Bressert

Debian, Guido van Rossum, p-value

import numpy as np
from scipy import stats

# Generating a normal distribution sample
# with 100 elements
sample = np.random.randn(100)

# normaltest tests the null hypothesis.
out = stats.normaltest(sample)
print('normaltest output')
print('Z-score = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# kstest is the Kolmogorov-Smirnov test for goodness of fit.
# Here its sample is being tested against the normal distribution.
# D is the KS statistic and the closer it is to 0 the better.
out = stats.kstest(sample, 'norm')
print('\nkstest output for the Normal distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# Similarly, this can be easily tested against other distributions,
# like the Wald distribution.
out = stats.kstest(sample, 'wald')
print('\nkstest output for the Wald distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

Researchers commonly use descriptive functions for statistics. Some descriptive functions that are available in the stats package include the geometric mean (gmean), the skewness of a sample (skew), and the frequency of values in a sample (itemfreq).


pages: 119 words: 10,356

Topics in Market Microstructure by Ilija I. Zovko

Brownian motion, computerized trading, continuous double auction, correlation coefficient, financial intermediation, Gini coefficient, information asymmetry, market design, market friction, market microstructure, Murray Gell-Mann, p-value, quantitative trading / quantitative finance, random walk, stochastic process, stochastic volatility, transaction costs

Significant slope coefficients show that if two institutions’ strategies were correlated in one month, they are likely to be correlated in the next one as well. The table does not contain the off-book market because we cannot reconstruct institution codes for the off-book market in the same way as we can for the on-book market. The ± values are the standard error of the coefficient estimate and the values in the parenthesis are the standard p-values.
On-book market
Stock  Intercept               Slope                R2
AAL    -0.010 ± 0.004 (0.02)   0.25 ± 0.04 (0.00)   0.061
AZN    -0.01 ± 0.003 (0.00)    0.14 ± 0.03 (0.00)   0.019
LLOY   0.003 ± 0.003 (0.28)    0.23 ± 0.02 (0.00)   0.053
VOD    0.008 ± 0.001 (0.00)    0.17 ± 0.01 (0.00)   0.029
does not work for institutions that do not trade frequently. Therefore, the results reported in this section concern only the on-book market and are based mostly on more active institutions.

Some of the explanatory variables, such as signed trades and signed volume, are strongly correlated. This may lead to instabilities in coefficient estimates for those variables and we need to keep this in mind when interpreting results. The results for the on- and off-book markets, as well as for the daily and hourly returns are collected in table II. Apart from the value of the coefficient, its error and p-value, we list also Rs2 and Rp2. Rs2 is the value of R-square of a regression with only the selected variable, and no others, included. It is equal to the square root of the absolute value of the correlation between the variable and the
[Table spilled from the original two-column layout: coefficients, errors, p-values, Rs2 and Rp2 for δV (signed volume), δE (entropy), δN (no. firms) and δT (no. signed trades), for daily and hourly returns in the on-book and off-book markets.]

Table 5.2: Regression results showing the significance of the market imbalance variables on price returns. Columns from left to right are estimated coefficient, its error and in the parenthesis the p-value of the test that the coefficient is zero assuming normal statistics; Rs2 is the value of R2 in a regression where only the selected variable is present in the regression. It expresses how much the variable on its own (solo) explains price returns. Final column Rp2 is the partial R2 of the selected variable. It expresses how much the variable explains price returns above the other three variables.


The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk by William J. Bernstein

asset allocation, backtesting, buy and hold, capital asset pricing model, commoditize, computer age, correlation coefficient, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, fixed income, index arbitrage, index fund, intangible asset, Long Term Capital Management, p-value, passive investing, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, South Sea Bubble, stocks for the long run, survivorship bias, the rule of 72, the scientific method, time value of money, transaction costs, Vanguard fund, Yogi Berra, zero-coupon bond

In which case he is probably not skilled, since it would not be unusual for 1 of 30 individuals to experience a 1.1% random event. On the other hand, if his performance measured is out of sample—that is, we had picked him alone among his teammates—then he probably is skilled, since we would have only one chance at a 1.1% occurrence in a random batting world. An only slightly more complex formulation is used to evaluate money managers. One has to be extremely careful to distinguish out-of-sample from in-sample performance. One should not be surprised if one picks out the best-performing manager out of 500 and finds that his p value is .001. However, if one identifies him ahead of time, and then his performance p value is .001 after the fact, then he probably is skilled.
Table 6-1. Subsequent Performance of Top Performing Funds, 1970–1998
                          Return 1970–1974   Return 1975–1998
Top 30 funds 1970–1974    0.78%              −6.12% is shown for All funds; see rows below
All funds                 −6.12%             16.38%
S&P 500                   −2.35%             17.04%
                          Return 1975–1979   Return 1980–1998
Top 30 funds 1975–1979    35.70%             15.78%
All funds                 20.44%             15.28%
S&P 500                   14.76%             17.67%
                          Return 1980–1984   Return 1985–1998
Top 30 funds 1980–1984    22.51%             16.01%
All funds                 14.83%             15.59%
S&P 500                   14.76%             18.76%
                          Return 1985–1989   Return 1990–1998
Top 30 funds 1985–1989    22.08%             16.24%
All funds                 16.40%             15.28%
S&P 500                   20.41%             17.81%
                          Return 1990–1994   Return 1995–1998
Top 30 funds 1990–1994    18.94%             21.28%
All funds                 9.39%              24.60%
S&P 500                   8.69%              32.18%
SOURCE: DFA/Micropal/Standard and Poor’s.

In other words, in a random world an annual 0.020/√10 SD of 20 points translates into an SD of 6.3 points over 10 years. The difference between the batter’s performance and the mean is .020, and dividing that by the SE of .0063 gives a “z value” of 3.17. Since we are considering 10 years’ performance, there are 9 “degrees of freedom.” The z value and degrees of freedom are fed into a “t distribution function” on our spreadsheet, and out pops a p value of .011. In other words, in a “random batting” world, there is a 1.1% chance of a given batter averaging .280 over 10 seasons. Whether or not we consider such a batter skilled also depends on whether we are observing him “in sample” or “out of sample.” In sample means that we picked him out of a large number of batters—say, all of his teammates—after the fact. In which case he is probably not skilled, since it would not be unusual for 1 of 30 individuals to experience a 1.1% random event. On the other hand, if his performance measured is out of sample—that is, we had picked him alone among his teammates—then he probably is skilled, since we would have only one chance at a 1.1% occurrence in a random batting world.
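
[Editor's note] A minimal sketch reproducing this arithmetic with scipy's t distribution (9 degrees of freedom), using the same numbers quoted in the passage:

import math
from scipy import stats

diff = 0.020                     # the batter's average minus the mean
se = 0.020 / math.sqrt(10)       # standard error over 10 seasons, about .0063
z = diff / se                    # about 3.17

p = 2 * stats.t.sf(z, df=9)      # two-tailed probability with 9 degrees of freedom
print(p)                         # about .011, the value quoted in the passage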

The alpha is the difference between the fund’s performance and that of the regression-determined benchmark and a measure of how well the manager has performed. It is expressed the same way as return, in percent per year, and can be positive or negative. For example, if a manager has an alpha of −4% per year this means that the manager has underperformed the regression-determined benchmark by 4% annually. Oakmark’s alpha for the first 29 months is truly spectacular, and quite statistically significant, with a p value of .0004. This means that there was less than a 1-in-2000 possibility that the fund’s superb performance in the first 29 months could have been due to chance. Unfortunately, its performance in the last 29-month period was equally impressive, but in the wrong direction. My interpretation of the above data is that Mr. Sanborn is modestly skilled. “Modestly skilled” is not at all derogatory in this context, since 99% of fund managers demonstrate no evidence of skill whatsoever.


pages: 836 words: 158,284

The 4-Hour Body: An Uncommon Guide to Rapid Fat-Loss, Incredible Sex, and Becoming Superhuman by Timothy Ferriss

23andMe, airport security, Albert Einstein, Black Swan, Buckminster Fuller, carbon footprint, cognitive dissonance, Columbine, correlation does not imply causation, Dean Kamen, game design, Gary Taubes, index card, Kevin Kelly, knowledge economy, life extension, lifelogging, Mahatma Gandhi, microbiome, p-value, Parkinson's law, Paul Buchheit, placebo effect, Productivity paradox, publish or perish, Ralph Waldo Emerson, Ray Kurzweil, Richard Feynman, selective serotonin reuptake inhibitor (SSRI), Silicon Valley, Silicon Valley startup, Skype, stem cell, Steve Jobs, survivorship bias, Thorstein Veblen, Vilfredo Pareto, wage slave, William of Occam

Let the journals catch up later—you don’t have to wait. P-Value: One Number to Understand Statistical thinking will one day be as necessary for effective citizenship as the ability to read and write. —H. G. Wells, who created national hysteria with his radio adaptation of his science fiction book The War of the Worlds British MD and quack buster Ben Goldacre, contributor of the next chapter, is well known for illustrating how people can be fooled by randomness. He uses the following example: If you go to a cocktail party, what’s the likelihood that two people in a group of 23 will share the same birthday? One in 100? One in 50? In fact, it’s one in two. Fifty percent. To become better at spotting randomness for what it is, it’s important to understand the concept of “p-value,” which you’ll see in all good research studies.

To become better at spotting randomness for what it is, it’s important to understand the concept of “p-value,” which you’ll see in all good research studies. It answers the question: how confident are we that this result wasn’t due to random chance? To demonstrate (or imply) cause-and-effect, the gold standard for studies is a p-value of less than 0.05 (p < 0.05), which means a less than 5% likelihood that the result can be attributed to chance. A p-value of less than 0.05 is also what most scientists mean when they say something is “statistically significant.” An example makes this easy to understand. Let’s say you are a professional coin flipper, but you’re unethical. In hopes of dominating the coin-flipping gambling circuit, you’ve engineered a quarter that should come up heads more often than a normal quarter. To test it, you flip it and a normal quarter 100 times, and the results seem clear: the “normal” quarter came up heads 50 times, and your designer quarter came up heads 60 times!

In other words, you better make sure that 20% holds up with at least 453 flips with each coin. In this case, 10 extra flips out of 100 doesn’t prove cause-and-effect at all. Three points to remember about p-values and “statistical significance”: • Just because something seems miraculous doesn’t mean it is. People are fooled by randomness all the time, as in the birthday example. • The larger the difference between groups, the smaller the groups can be. Critics of small trials or self-experimentation often miss this. If something appears to produce a 300% change, you don’t need that many people to show significance, assuming you’re controlling variables. • It is not kosher to combine p-values from multiple experiments to make something more or less believable. That’s another trick of bad scientists and mistake of uninformed journalists. TOOLS AND TRICKS The Black Swan by Nassim Taleb (www.fourhourbody.com/blackswan) Taleb, also author of the bestseller Fooled by Randomness, is the reigning king when it comes to explaining how we fool ourselves and how we can limit the damage.
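
[Editor's note] A quick way to check the coin example's arithmetic is a chi-square test on the 2×2 table of heads and tails; a minimal sketch with scipy — the layout of the table is the editor's, not the book's:

import numpy as np
from scipy import stats

# heads and tails: 60/40 for the designer quarter, 50/50 for the normal quarter
table = np.array([[60, 40],
                  [50, 50]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(p)                         # comfortably above 0.05: not statistically significant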


pages: 290 words: 82,871

The Hidden Half: How the World Conceals Its Secrets by Michael Blastland

air freight, Alfred Russel Wallace, banking crisis, Bayesian statistics, Berlin Wall, central bank independence, cognitive bias, complexity theory, Deng Xiaoping, Diane Coyle, Donald Trump, epigenetics, experimental subject, full employment, George Santayana, hindsight bias, income inequality, manufacturing employment, mass incarceration, meta analysis, meta-analysis, minimum wage unemployment, nudge unit, oil shock, p-value, personalized medicine, phenotype, Ralph Waldo Emerson, random walk, randomized controlled trial, replication crisis, Richard Thaler, selection bias, the map is not the territory, the scientific method, The Wisdom of Crowds, twin studies

Once the data is in and we think we do see evidence, say, that drinking more tea seems to be associated with having more babies, we ask: ‘How likely is it that we would see these results if our null hypothesis was true, that is, if there was in fact no relationship?’ If the chance of our observation is less than 5% (or p < 0.05 as it is usually written, known as a p-value), then this is considered an acceptable level at which to reject the null hypothesis. It is not a proof, or a test of the ‘truth’ of an experimental result, it is a probabilistic test of there being nothing, given what we’ve observed. A p-value of less than 0.05 has become many researchers’ heart’s desire. Statistical significance sounds cumbersome, but it is a workhorse of statistical inquiry. To those outside statistics, it’s amazing to discover there’s a war between its proponents and critics. Critics call it ‘statistical alchemy’ and would like to do away with it.33 The essence of their complaint is that results that could arise simply by chance are too easily turned into ‘findings’ when given the stamp of approval by a test of statistical significance.

His published papers are also surprisingly readable. 29 The test of statistical significance with p-values <0.05, whenever statistical significance is reported. 30 John P. A. Ioannidis, T. D. Stanley and Hristos Doucouliagos, ‘The Power of Bias in Economics Research’, Economic Journal, vol. 127, 2017, F236–F265. 31 Colin F. Camerer et al., ‘Evaluating Replicability of Laboratory Experiments in Economics’, Science, 25 March 2016. 32 Monya Baker, ‘1,500 Scientists Lift the Lid on Reproducibility: Survey Sheds Light on the “Crisis” Rocking Research’, Nature (News and Comment), vol. 534, 25 May 2016. 33 See, for example, Blakeley B. McShane et al., ‘Abandon Statistical Significance’, 2017 (arXiv:1709.07588), which refers to it as ‘uncertainty laundering’. Another good summary of the issues from a critical perspective is Regina Nuzzo, ‘P Values, the “Gold Standard” of Statistical Validity, Are Not as Reliable as Many Scientists Assume’, Nature, vol. 506, 12 February 2014. 34 See the work of Deborah Mayo for a robust but inevitably technical defence of what’s known as the frequentist case, via her website ‘Error Statistics’. 35 See Marcus R.

Critics call it ‘statistical alchemy’ and would like to do away with it.33 The essence of their complaint is that results that could arise simply by chance are too easily turned into ‘findings’ when given the stamp of approval by a test of statistical significance. Of course, all methods can be misused; whether the misuse discredits the test, I’ll leave to others.34 I confess to retaining a cautious interest in p-values, but agree – with modest statistical understanding – that dependence on them as one-off, binary tests often seems to have been simplistic. A single path to knowledge is often not enough. That sounds like the utterance of a mountain-top mystic, but in the context of research it has become a pragmatic necessity to follow more than one path (where possible), to make sure they lead to the same destination.


pages: 301 words: 85,263

New Dark Age: Technology and the End of the Future by James Bridle

AI winter, Airbnb, Alfred Russel Wallace, Automated Insights, autonomous vehicles, back-to-the-land, Benoit Mandelbrot, Bernie Sanders, bitcoin, British Empire, Brownian motion, Buckminster Fuller, Capital in the Twenty-First Century by Thomas Piketty, carbon footprint, cognitive bias, cognitive dissonance, combinatorial explosion, computer vision, congestion charging, cryptocurrency, data is the new oil, Donald Trump, Douglas Engelbart, Douglas Engelbart, Douglas Hofstadter, drone strike, Edward Snowden, fear of failure, Flash crash, Google Earth, Haber-Bosch Process, hive mind, income inequality, informal economy, Internet of things, Isaac Newton, John von Neumann, Julian Assange, Kickstarter, late capitalism, lone genius, mandelbrot fractal, meta analysis, meta-analysis, Minecraft, mutually assured destruction, natural language processing, Network effects, oil shock, p-value, pattern recognition, peak oil, recommendation engine, road to serfdom, Robert Mercer, Ronald Reagan, self-driving car, Silicon Valley, Silicon Valley ideology, Skype, social graph, sorting algorithm, South China Sea, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stem cell, Stuxnet, technoutopianism, the built environment, the scientific method, Uber for X, undersea cable, University of East Anglia, uranium enrichment, Vannevar Bush, WikiLeaks

The most controversial of these techniques is p-hacking. P stands for probability, denoting the value at which an experimental result can be considered statistically significant. The ability to calculate a p-value in many different situations has made it a common marker for scientific rigour in experiments. A value of p less than 0.05 – meaning that there is a less than 5 per cent chance of a correlation being the result of chance, or a false positive – is widely agreed across many disciplines to be the benchmark for a successful hypothesis. But the result of this agreement is that a p-value less than 0.05 becomes a target, rather than a measure. Researchers, given a particular goal to aim for, can selectively cull from great fields of data in order to prove any particular hypothesis. As an example of how p-hacking works, let’s hypothesise that green dice, uniquely among all other dice, are loaded.

As an example of how p-hacking works, let’s hypothesise that green dice, uniquely among all other dice, are loaded. Take ten green dice and roll each of them one hundred times. Of those 1,000 rolls, 183 turn up a six. If the dice were absolutely fair, the number of sixes should be 1,000/6, which is 167. Something’s up. In order to determine the validity of the experiment, we need to calculate the p-value of our experiment. But the p-value has nothing to do with the actual hypothesis: it is simply the probability that random rolls would turn up 183 or more sixes. For 1,000 dice rolls, that probability is only 4 per cent, or p = 0.04 – and just like that, we have an experimental result that is deemed sufficient by many scientific communities to warrant publication.15 Why should such a ridiculous process be regarded as anything other than a gross simplification?

Data dredging has become particularly notorious in the social sciences, where social media and other sources of big behavioural data have suddenly and vastly increased the amount of information available to researchers. But the pervasiveness of p-hacking isn’t limited to the social sciences. A comprehensive analysis of 100,000 open access papers in 2015 found evidence of p-hacking across multiple disciplines.16 The researchers mined the papers for every p-value they could find, and they discovered that the vast majority just scraped under the 0.05 boundary – evidence, they said, that many scientists were adjusting their experimental designs, data sets, or statistical methods in order to get a result that crossed the significance threshold. It was results such as these that led the editor of PLOS ONE, a leading medical journal, to publish an editorial attacking statistical methods in research entitled ‘Why most published research findings are false.’17 It’s worth emphasising at this point that data dredging is not the same as fraud.


pages: 197 words: 35,256

NumPy Cookbook by Ivan Idris

business intelligence, cloud computing, computer vision, Debian, en.wikipedia.org, Eratosthenes, mandelbrot fractal, p-value, sorting algorithm, statistical model, transaction costs, web application

Again, we will calculate the log returns of the close price of this stock, and use that as an input for the normality test function. This function returns a tuple containing a second element—a p-value between zero and one. The complete code for this tutorial is as follows:
import datetime
import numpy
from matplotlib import finance
from statsmodels.stats.adnorm import normal_ad
import sys

#1. Download price data
# 2011 to 2012
start = datetime.datetime(2011, 01, 01)
end = datetime.datetime(2012, 01, 01)

print "Retrieving data for", sys.argv[1]
quotes = finance.quotes_historical_yahoo(sys.argv[1], start, end, asobject=True)
close = numpy.array(quotes.close).astype(numpy.float)
print close.shape

print normal_ad(numpy.diff(numpy.log(close)))

The following shows the output of the script with p-value of 0.13:
Retrieving data for AAPL
(252,)
(0.57103805516803163, 0.13725944999430437)
How it works...

Download price data # 2011 to 2012 start = datetime.datetime(2011, 01, 01) end = datetime.datetime(2012, 01, 01) print "Retrieving data for", sys.argv[1] quotes = finance.quotes_historical_yahoo(sys.argv[1], start, end, asobject=True) close = numpy.array(quotes.close).astype(numpy.float) print close.shape print normal_ad(numpy.diff(numpy.log(close))) The following shows the output of the script with p-value of 0.13: Retrieving data for AAPL (252,) (0.57103805516803163, 0.13725944999430437) How it works... This recipe demonstrated the Anderson Darling statistical test for normality, as found in scikits-statsmodels. We used the stock price data, which does not have a normal distribution, as input. For the data, we got a p-value of 0.13. Since probabilities range between zero and one, this confirms our hypothesis. Installing scikits-image scikits image is a toolkit for image processing, which requires PIL, SciPy, Cython, and NumPy. There are Windows installers available for it. It is part of Enthought Python Distribution, as well as the Python(x, y) distribution. How to do it... As usual, we can install using either of the following two commands: pip install -U scikits-image easy_install -U scikits-image Again, you might need to run these commands as root.


pages: 755 words: 121,290

Statistics hacks by Bruce Frey

Bayesian statistics, Berlin Wall, correlation coefficient, Daniel Kahneman / Amos Tversky, distributed generation, en.wikipedia.org, feminist movement, G4S, game design, Hacker Ethic, index card, Milgram experiment, p-value, place-making, reshoring, RFID, Search for Extraterrestrial Intelligence, SETI@home, Silicon Valley, statistical model, Thomas Bayes

Power In social science research, a statistical analysis frequently determines whether a certain value observed in a sample is likely to have occurred by chance. This process is called a test of significance. Tests of significance produce a p-value (probability value), which is the probability that the sample value could have been drawn from a particular population of interest. The lower the p-value, the more confident we are in our beliefs that we have achieved statistical significance and that our data reveals a relationship that exists not only in our sample but also in the whole population represented by that sample. Usually, a predetermined level of significance is chosen as a standard for what counts. If the eventual p-value is equal to or lower than that predetermined level of significance, then the researcher has achieved a level of significance. Statistical analyses and tests of significance are not limited to identifying relationships among variables, but the most common analyses (t tests, F tests, chi-squares, correlation coefficients, regression equations, etc.) usually serve this purpose.

The power of a statistical test is the probability that, given that there is a relationship among variables in the population, the statistical analysis will result in the decision that a level of significance has been achieved. Notice this is a conditional probability. There must be a relationship in the population to find; otherwise, power has no meaning. Power is not the chance of finding a significant result; it is the chance of finding that relationship if it is there to find. The formula for power contains three components: Sample size The predetermined level of significance (p-value) to beat (be less than) The effect size (the size of the relationship in the population) Conducting a Power Analysis Let's say we want to compare two different sample groups and see whether they are different enough that there is likely a real difference in the populations they represent. For example, suppose you want to know whether men or women sleep more. The design is fairly straightforward.
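
[Editor's note] The three ingredients listed here (sample size, significance level, effect size) are exactly what a power calculator trades off; a minimal sketch using statsmodels' TTestIndPower for a two-group comparison such as the men-versus-women sleep example — the effect size of 0.5 and the 0.80 power target are assumptions for illustration, not values from the book:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group for an assumed effect size of 0.5 SD, alpha = 0.05, power = 0.80
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(n_per_group)               # roughly 64 per group

# Power actually achieved if only 30 people per group are available
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print(power)                     # well under the usual 0.80 target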

magic number, lotteries and MANOVA (multivariate analysis of variance) MCAT (Medical College Admission Test) mean [See also standard error of the mean] ACT calculating Central Limit Theorem central tendency and cut score and 2nd defined 2nd effect size and linear regression and normal curve and 2nd normal distribution precision of predicting test performance 2nd regression toward 2nd 3rd T scores z score 2nd 3rd measurement [See also standard error of measurement] <Emphasis>t</> tests asking questions categorical converting raw scores defined effect of increasing sample size Gott's Principle graphs and improving test scores levels of 2nd normal distribution percentile ranks precise predicting with normal curve probability characteristics reliability of standardized scores 2nd testing fairly validity of 2nd 3rd measures of central tendency median central tendency and 2nd 3rd defined normal curve and medical decisions Michie, Donald Microsoft Excel DATAS software histograms predicting football games Milgram, Stanley 2nd 3rd 4th mind control Minnesota Multiphase Personality Inventory-II test mnemonic devices mode central tendency and 2nd defined normal curve and models building 2nd defined goodness-of-fit statistic and money casinos and 2nd infinite doubling of Monopoly Monty Hall problem multiple choice questions analysis of answer options writing good 2nd 3rd multiple regression criterion variables and defined multiple predictor variables predicting football games multiple regression) multiplicative rule 2nd multivariate analysis of variance (MANOVA) mutually exclusive outcomes Index [SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [V] [W] [X] [Y] [Z] negative correlation 2nd negative numbers negative wording Newcomb, Simon 2nd Nigrini, Mark 2nd 3rd 4th 5th 6th nominal level of measurement 2nd 3rd non-experimental designs norm-referenced scoring defined 2nd percentile ranks simplicity of normal curve Central Limit Theorem and overview precision of predicting with z score and 2nd normal distribution applying characteristics iTunes shuffle and overview shape of traffic patterns null hypothesis defined errors in testing Law of Large Numbers and possible outcomes purpose 2nd 3rd research hypothesis and statistical significance and nuts 2nd Index [SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [V] [W] [X] [Y] [Z] O'Reilly Media 2nd observed score 2nd 3rd odds [See also odds: (see also gambling\\] [See also odds: (see also gambling\\] figuring out 2nd pot odds 2nd Powerball lottery one-way chi-square test ordering scores ordinal level of measurement outcomes blackjack 2nd coin toss comparing number of possible 2nd dice rolls 2nd gambler's fallacy about identifying unexpected likelihood of 2nd mutually exclusive occurrence of specific 2nd predicting 2nd predicting baseball games shuffled deck of cards spotting random trial-and-error learning two-point conversion chart and outs Index [SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [V] [W] [X] [Y] [Z] p-values pairs of cards, counting by parallel forms reliability partial correlations Party Shuffle (iTunes) 2nd 3rd 4th Pascal's Triangle Pascal, Blaise passing epochs payoffs expected 2nd magic number for lotteries Powerball lottery Pearson correlation coefficient 2nd Pedrotti, J.T. 
percentages ratio level of measurement sample estimates of scores percentile ranks 2nd performance criterion-based arguments ranking players permutations 2nd 3rd Petersen, S.E.


Risk Management in Trading by Davis Edwards

asset allocation, asset-backed security, backtesting, Black-Scholes formula, Brownian motion, business cycle, computerized trading, correlation coefficient, Credit Default Swap, discrete time, diversified portfolio, fixed income, implied volatility, intangible asset, interest rate swap, iterative process, John Meriwether, London Whale, Long Term Capital Management, margin call, Myron Scholes, Nick Leeson, p-value, paper trading, pattern recognition, random walk, risk tolerance, risk/return, selection bias, shareholder value, Sharpe ratio, short selling, statistical arbitrage, statistical model, stochastic process, systematic trading, time value of money, transaction costs, value at risk, Wiener process, zero-coupon bond

[FIGURE 7.5 Regression Output: a spreadsheet regression summary (ANOVA table, coefficients, standard errors, t statistics, p-values, and confidence bounds) annotated with the retrospective tests — the number of observations must be greater than 30; Test 1: the slope must be between 0.80 and 1.25; Test 4: the p-value of the t-statistic must be less than 0.05; Test 5: the p-value of the F-statistic must be less than 0.05.]
… hedge documentation memo must pass the retrospective tests. In this example, five tests have been identified, and the hedge‐effectiveness test would fail because the adjusted R2 of 0.7986 is less than the 0.80 required for a passing result. (See Figure 7.5, Regression Output.)

This test ensures that changes in the hedge and hedged item largely offset. Test 2 (R-Squared). An R2 test indicates how well observations fit a line or curve. A common test is to ensure an R2 greater than 0.80. Test 3 (Slope Significance). It is common to test that the slope is mathematically significant. This can be done by checking that the p-value of the F-statistic is less than 0.05. Test 4 (R2 Significance). It is common to test that the R2 is significant. This can be done by checking that the p-value of the t-statistic is less than 0.05. Test 5 (Number of Samples). A sufficient number of samples needs to be taken for a valid test. For most situations, this means 30 or more samples. Generally, a statistical package in a spreadsheet is used to calculate effectiveness. To pass a hedge-effectiveness test, all of the tests defined in the hedge documentation memo must pass the retrospective tests. KEY CONCEPT: EFFECTIVENESS DEPENDS ON CHANGES IN VALUE. A hedge is effective if the changes in the hedge match changes in the asset.
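The five checks above map one-for-one onto the fields of a standard regression output. As a rough sketch only (not taken from the book), the following Python snippet runs the same checklist with statsmodels on made-up hedge and hedged-item price changes; in a one-variable regression the F-test and the slope t-test carry the same information, but both p-values are checked to mirror the checklist.

# A minimal sketch, assuming hypothetical daily changes for the hedged item (y)
# and the hedge (x); only the method, not the data, reflects the text above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, 510)              # hypothetical changes in the hedge
y = 0.9 * x + rng.normal(0.0, 0.003, 510)   # hypothetical changes in the hedged item

res = sm.OLS(y, sm.add_constant(x)).fit()
slope = res.params[1]

checks = {
    "Test 1: slope between 0.80 and 1.25":   0.80 <= slope <= 1.25,
    "Test 2: R-squared greater than 0.80":   res.rsquared > 0.80,
    "Test 3: p-value of F-statistic < 0.05": res.f_pvalue < 0.05,
    "Test 4: p-value of t-statistic < 0.05": res.pvalues[1] < 0.05,
    "Test 5: at least 30 observations":      res.nobs >= 30,
}
for name, passed in checks.items():
    print(name, "PASS" if passed else "FAIL")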


No Slack: The Financial Lives of Low-Income Americans by Michael S. Barr

active measures, asset allocation, Bayesian statistics, business cycle, Cass Sunstein, conceptual framework, Daniel Kahneman / Amos Tversky, financial exclusion, financial innovation, Home mortgage interest deduction, income inequality, information asymmetry, labor-force participation, late fees, London Interbank Offered Rate, loss aversion, market friction, mental accounting, Milgram experiment, mobile money, money market fund, mortgage debt, mortgage tax deduction, New Urbanism, p-value, payday loans, race to the bottom, regulatory arbitrage, Richard Thaler, risk tolerance, Robert Shiller, Robert Shiller, the payments system, transaction costs, unbanked and underbanked, underbanked

Withholding Preference, by Portfolio Allocation Group a
Percent prefers to overwithhold paycheck rather than underwithhold or exactly withhold (standard errors in parentheses; sample size on the right):
All                      0.685 (0.027)   650
Mostly illiquid assets   0.761 (0.037)   220
Mostly liquid assets     0.623 (0.048)   145
One illiquid asset       0.599 (0.073)    38
One liquid asset         0.713 (0.047)   117
No assets                0.618 (0.045)   130
Summary statistic: F statistic = 3.43, p value = 0.013
Source: Detroit Area Household Financial Services study. a. Standard errors are in parentheses. Sample includes respondents living in low- and moderate-income census tracts who filed a tax return in 2003 or 2004. The F statistic and p value correspond to a test of equality of the percentages. The F statistic is distributed with 4 numerator and 70 denominator degrees of freedom. Standard errors are clustered at the segment level. If tax filers were utility maximizing and making their saving decisions in line with the permanent-income hypothesis, few, if any, would express that they want to save through the tax system.
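The p value reported in the table can be recovered from the F statistic and its degrees of freedom alone; a quick check (mine, not the study's), using scipy:

# Upper-tail probability of an F statistic of 3.43 with 4 and 70 degrees of
# freedom, as quoted in the table note.
from scipy import stats

p_value = stats.f.sf(3.43, dfn=4, dfd=70)
print(round(p_value, 3))   # close to the 0.013 reported above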

*Statistically significant at the 10 percent level, two-tailed test. **Statistically significant at the 5 percent level, two-tailed test. ***Statistically significant at the 1 percent level, two-tailed test.

Table 10-4. Relationship between Withholding Preference and Asset Allocation a. Dependent variable is "wants to overwithhold"; columns (1) and onward report coefficients for mostly illiquid assets, mostly liquid assets, one illiquid asset, and one liquid asset, the controls included (demographics; employment and financial variables; income volatility; household income; risk tolerance; time preference; ease of borrowing $500; gets refund), and the summary F statistic and p value.

and it is more so for those facing greater income volatility. The results in columns 4 and 5 support the rejection of precautionary explanations for the heterogeneity in wanting to overwithhold across the portfolio allocation groups.

*Statistically significant at the 10 percent level, two-tailed test. **Statistically significant at the 5 percent level, two-tailed test. ***Statistically significant at the 1 percent level, two-tailed test.

Table 10-6. Relationship between Spending All of Tax Refund and Asset Allocation a. Dependent variable is "spends all". Each column lists coefficients (standard errors in parentheses) for has one liquid asset, has one illiquid asset, mostly liquid assets, and mostly illiquid assets, followed by the controls included and the F statistic with its p value.

(1) -0.144*** (0.055), -0.216*** (0.052), 0.026 (0.093), -0.033 (0.058); controls: gets refund; F statistic 6.274 (p value 0)
(2) -0.108 (0.066), -0.192*** (0.060), 0.029 (0.096), -0.021 (0.060); controls: demographics, gets refund; F statistic 4.637 (p value 0.002)
(3) -0.100 (0.065), -0.182*** (0.057), 0.026 (0.097), -0.021 (0.058); controls: demographics, employment and financial variables, gets refund; F statistic 4.067 (p value 0.005)
(4) -0.096 (0.065), -0.177*** (0.055), 0.039 (0.095), -0.016 (0.056); controls: demographics, employment and financial variables, income volatility, gets refund; F statistic 3.948 (p value 0.006)
(5) -0.107* (0.065), -0.186*** (0.057), 0.041 (0.096), -0.014 (0.057); controls: demographics, employment and financial variables, income volatility, household income, gets refund; F statistic 3.935 (p value 0.006)
(6) -0.094 (0.065), -0.185*** (0.055), 0.048 (0.095), -0.004 (0.054); controls: demographics, employment and financial variables, income volatility, risk tolerance, time preference, gets refund; F statistic 4.704 (p value 0.002)
(7) -0.107 (0.066), -0.196*** (0.057), 0.050 (0.096), 0 (0.055); controls: demographics, employment and financial variables, income volatility, household income, risk tolerance, time preference, gets refund; F statistic 4.799 (p value 0.002)
(8) -0.108 (0.066), -0.198*** (0.057), 0.050 (0.096), -0.001 (0.055); controls: demographics, employment and financial variables, income volatility, household income, risk tolerance, time preference, ease of borrowing $500, gets refund; F statistic 4.836 (p value 0.002)

Source: Detroit Area Household Financial Services study. a.


pages: 420 words: 130,714

Science in the Soul: Selected Writings of a Passionate Rationalist by Richard Dawkins

agricultural Revolution, Alfred Russel Wallace, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, Boris Johnson, David Attenborough, Donald Trump, double helix, Drosophila, epigenetics, Fellow of the Royal Society, Google Earth, John Harrison: Longitude, Kickstarter, lone genius, Mahatma Gandhi, mental accounting, Necker cube, nuclear winter, out of africa, p-value, phenotype, place-making, placebo effect, random walk, Ray Kurzweil, Richard Feynman, Search for Extraterrestrial Intelligence, stem cell, Stephen Hawking, Steve Wozniak, Steven Pinker, the scientific method, twin studies

Could it just be luck? Statistical tests exist to compute the probability that, if the drug really were doing nothing, you could have got the result you did get (or an even ‘better’ result) by luck. The ‘P value’ is that probability, and the lower it is, the less likely the result is to have been a matter of luck. Results with P values of 1 per cent or less are customarily taken as evidence, but that cut-off point is arbitrary. P values of 5 per cent may be taken as suggestive. For results that seem very surprising, for example an apparent demonstration of telepathic communication, a P value much lower than 1 per cent would be demanded. *4 In a wild state, that is, where refined sugar doesn’t exist except in the rare and painfully won case of honey.
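What Dawkins describes can be made concrete with a small simulation. The sketch below is purely illustrative and the trial counts are invented: it estimates the probability of seeing at least the observed gap in recovery rates, on the assumption that the drug does nothing, by repeatedly relabelling patients at random.

# Illustrative permutation test: how often would pure luck produce a gap in
# recovery rates at least as large as the one observed? All counts are made up.
import numpy as np

rng = np.random.default_rng(42)
treated = np.array([1] * 30 + [0] * 20)   # 30 of 50 treated patients recovered
control = np.array([1] * 20 + [0] * 30)   # 20 of 50 control patients recovered
observed_gap = treated.mean() - control.mean()

pooled = np.concatenate([treated, control])
hits = 0
n_shuffles = 100_000
for _ in range(n_shuffles):
    rng.shuffle(pooled)
    gap = pooled[:50].mean() - pooled[50:].mean()
    if gap >= observed_gap:               # the result actually seen, or an even better one
        hits += 1

print(hits / n_shuffles)                  # the P value: lower means less likely to be luck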

The polished pebble has far fewer coadapted features: the coincidence of transparency, high refractive index and mechanical forces that polish the surface in a curved shape. The odds against such a threefold coincidence are not particularly great. No special explanation is called for. Compare how a statistician decides what P value*3 to accept as evidence for an effect in an experiment. It is a matter of judgement and dispute, almost of taste, exactly when a coincidence becomes too great to stomach. But, no matter whether you are a cautious statistician or a daring statistician, there are some complex adaptations whose ‘P value’, whose coincidence rating, is so impressive that nobody would hesitate to diagnose life (or an artefact designed by a living thing). My definition of living complexity is, in effect, ‘that complexity which is too great to have come about through coincidence’.


pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives by David Sumpter

affirmative action, Bernie Sanders, correlation does not imply causation, crowdsourcing, don't be evil, Donald Trump, Elon Musk, Filter Bubble, Google Glasses, illegal immigration, Jeff Bezos, job automation, Kenneth Arrow, Loebner Prize, Mark Zuckerberg, meta analysis, meta-analysis, Minecraft, Nate Silver, natural language processing, Nelson Mandela, p-value, prediction markets, random walk, Ray Kurzweil, Robert Mercer, selection bias, self-driving car, Silicon Valley, Skype, Snapchat, speech recognition, statistical model, Stephen Hawking, Steven Pinker, The Signal and the Noise by Nate Silver, traveling salesman, Turing test

I split the data into a training set (90 per cent of observations) and a test set (10 per cent of observations). Using the training set, I found that the logistic model that best predicted recidivism was based on age (b_age = -0.047; P-value < 2e-16) and number of priors (b_priors = 0.172; P-value < 2e-16), combined with a constant (b_const = 0.885; P-value < 2e-16). This implies that older defendants are less likely to be arrested for further crimes, while those with more priors are more likely to be arrested again. Race was not a statistically significant predictor of recidivism (in a multivariate model including race, an African American factor had P-value = 0.427). 12 The most comprehensive of these is Flores, A. W., Bechtel, K. and Lowenkamp, C. T. 2016. ‘False Positives, False Negatives, and False Analyses: A Rejoinder to Machine Bias: There’s Software Used across the Country to Predict Future Criminals.
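For readers who want to see the mechanics, here is a minimal sketch of fitting that kind of logistic model with statsmodels. The data are simulated (using the quoted coefficients only as generating values), so the output illustrates the method, not the author's actual analysis of the recidivism data.

# Logistic regression of a rearrest indicator on age and number of priors,
# fitted to synthetic data; the coefficients from the text are used only to
# simulate plausible outcomes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2.0, size=n)
true_logit = 0.885 - 0.047 * age + 0.172 * priors
rearrested = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

X = sm.add_constant(np.column_stack([age, priors]))
fit = sm.Logit(rearrested, X).fit(disp=0)
print(fit.params)    # estimated constant, age and priors coefficients
print(fit.pvalues)   # P-values of the kind quoted in the passage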


pages: 306 words: 82,765

Skin in the Game: Hidden Asymmetries in Daily Life by Nassim Nicholas Taleb

availability heuristic, Benoit Mandelbrot, Bernie Madoff, Black Swan, Brownian motion, Capital in the Twenty-First Century by Thomas Piketty, Cass Sunstein, cellular automata, Claude Shannon: information theory, cognitive dissonance, complexity theory, David Graeber, disintermediation, Donald Trump, Edward Thorp, equity premium, financial independence, information asymmetry, invisible hand, knowledge economy, loss aversion, mandelbrot fractal, mental accounting, microbiome, moral hazard, Murray Gell-Mann, offshore financial centre, p-value, Paul Samuelson, Ponzi scheme, price mechanism, principal–agent problem, Ralph Nader, random walk, rent-seeking, Richard Feynman, Richard Thaler, Ronald Coase, Ronald Reagan, Rory Sutherland, Silicon Valley, Steven Pinker, stochastic process, survivorship bias, The Nature of the Firm, transaction costs, urban planning, Yogi Berra

Empirically, if you want an author to cross a few generations, make sure he or she never gets that something called the Nobel Prize in Literature. fn4 I am usually allergic to some public personalities, but not others. It took me a while to figure out how to draw the line explicitly. The difference is risk-taking and whether the person worries about his or her reputation. fn5 In a technical note called “Meta-distribution of p-values” around the stochasticity of “p-values” and their hacking by researchers, I show that the statistical significance of these papers is at least one order of magnitude smaller than claimed. fn6 Segnius homines bona quam mala sentiunt. fn7 Nimium boni est, cui hinil est mali. fn8 Non scabat caput praeter unges tuo, Ma biikkak illa ifrak. fn9 xasfour bil ‘id asan min xaṡra xalṡajra. fn10 Nimium allercando veritas amittitur.
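The footnote on the "meta-distribution of p-values" turns on a simple point: a p-value is itself a random quantity that swings widely from one sample to the next. The short simulation below (mine, not Taleb's) makes that visible by rerunning the same modest-effect study many times.

# Repeating an identical small study on data with a fixed, modest true effect
# and recording the p-value each time; the spread shows how stochastic a single
# reported p-value really is. All parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p_values = []
for _ in range(10_000):
    a = rng.normal(0.3, 1.0, 30)   # treatment group, true mean 0.3
    b = rng.normal(0.0, 1.0, 30)   # control group, true mean 0.0
    p_values.append(stats.ttest_ind(a, b).pvalue)

print(np.percentile(p_values, [10, 50, 90]))   # identical studies, wildly different p-values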

The IYI has been wrong, historically, about Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p-values. But he is still convinced that his current position is right.fn1 NEVER GOTTEN DRUNK WITH RUSSIANS The IYI joins a club to get travel privileges; if he is a social scientist, he uses statistics without knowing how they are derived (like Steven Pinker and psycholophasters in general); when in the United Kingdom, he goes to literary festivals and eats cucumber sandwiches, taking small bites at a time; he drinks red wine with steak (never white); he used to believe that dietary fat was harmful and has now completely reversed himself (information in both cases is derived from the same source); he takes statins because his doctor told him to do so; he fails to understand ergodicity, and, when explained to him, he forgets about it soon after; he doesn’t use Yiddish words even when talking business; he studies grammar before speaking a language; he has a cousin who worked with someone who knows the Queen; he has never read Frédéric Dard, Libanius Antiochus, Michael Oakeshott, John Gray, Ammianus Marcellinus, Ibn Battuta, Saadia Gaon, or Joseph de Maistre; he has never gotten drunk with Russians; he never drinks to the point where he starts breaking glasses (or, preferably, chairs); he doesn’t even know the difference between Hecate and Hecuba (which in Brooklynese is “can’t tell sh**t from shinola”); he doesn’t know that there is no difference between “pseudointellectual” and “intellectual” in the absence of skin in the game; he has mentioned quantum mechanics at least twice in the past five years in conversations that had nothing to do with physics.


Debtor Nation: The History of America in Red Ink (Politics and Society in Modern America) by Louis Hyman

asset-backed security, bank run, barriers to entry, Bretton Woods, business cycle, card file, central bank independence, computer age, corporate governance, credit crunch, declining real wages, deindustrialization, diversified portfolio, financial independence, financial innovation, fixed income, Gini coefficient, Home mortgage interest deduction, housing crisis, income inequality, invisible hand, late fees, London Interbank Offered Rate, market fundamentalism, means of production, mortgage debt, mortgage tax deduction, p-value, pattern recognition, profit maximization, profit motive, risk/return, Ronald Reagan, Silicon Valley, statistical model, technology bubble, the built environment, transaction costs, union organizing, white flight, women in the workforce, working poor, zero-sum game

., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA, 6. 46. Kenneth Wells to Guy T.O. Holladay, June 24, 1953, folder “Minority Group Housing – Printed Material, Speeches, Field Letters, Etc., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA. 47. P-value = 0.0001. 48. Once again, race had a p-value of > 0.586. The racial co-efficient, moreover, dropped to only a little over $500. 49. Linear regression with mortgage-having subpopulation for mortgage amount, race (P > 0.586) was not significant, and location (P > 0.006) was. NOTES TO CHAPTER 5 335 50. Pearson test for suburban dummy variable was (P > 0.42). 51. Linear regression of mortgage controlling for race (P > 0.269), location (P > 0.019), federal loan status (P > 0.003), and income (P > 0.000).

The most important statistical advances made since the late 1950s, for the purposes of this analysis, are the ability to adjust for the internal correlation of primary sampling units, logistic regression, and censored normal regressions—all of which are used in this chapter, especially the first two mentioned. In terms of questions, this chapter pays far greater attention to the intersections of race, class, and location than the original published survey, which was mostly a collection of bar graphs and averages. For the less technically inclined reader, explanations of NOTES TO CHAPTER 5 331 some of the statistical methods will be in the notes. For the more technically inclined reader, p-values of relevant tests and regressions have generally been put in the notes. 3. William H. Whyte, “Budgetism: Opiate of the Middle Class,” Fortune (May 1956), 133, 136–37. 4. John Lebor, “Requirements for Profitable Credit Selling,” Credit Management Year Book 1959–1960 (New York: National Retail Dry Goods Association, 1959), 12. 5. Malcolm McNair, “Changing Retail Scene and What Lies Ahead,” National Retail Merchants Association Convention Speech, January 8, 1962, Historical Collections, BAK, 12. 6.

See Melvin Oliver’s Black Wealth, White Wealth: A New Perspective on Racial Inequality (New York, Routledge, 1995), for more on the importance of wealth inequality compared to income inequality today. As discussed later in the chapter, at the same income levels, African Americans always borrowed more frequently than whites and had lower wealth levels. 22. This was determined by running a series of regressions on debt and liquid assets, while controlling for location, mortgage status, marital status, and income. P-values for liquid assets in all models (P > 0.00). For whites, the model had R2 = 0.12 and for whites R2 = 0.41. 23. Odds ratio 5.42 with (P > 0.01) [1.44, 20.41]. 24. A linear regression with a suburban debtor subpopulation shows race (P > 0.248) and liquid assets (P > 0.241) to have no relationship to the amount borrowed unlike mortgage status (P > 0.000) and income (P > 0.013). 25. Suburban dummy variable for black households with (P > 0.02).
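The odds ratio and bracketed confidence interval in note 23 are of a standard form; the sketch below shows how such a figure is computed from a 2x2 table, with made-up cell counts since the survey's underlying counts are not given in the notes.

# Odds ratio with a 95 percent confidence interval from a 2x2 table.
# The four counts are hypothetical; only the calculation mirrors the notes.
import numpy as np
from scipy import stats

a, b = 18, 10   # group 1: borrowers, non-borrowers (hypothetical)
c, d = 12, 36   # group 2: borrowers, non-borrowers (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = stats.norm.ppf(0.975)
low, high = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
print(round(odds_ratio, 2), [round(low, 2), round(high, 2)])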


pages: 543 words: 153,550

Model Thinker: What You Need to Know to Make Data Work for You by Scott E. Page

"Robert Solow", Airbnb, Albert Einstein, Alfred Russel Wallace, algorithmic trading, Alvin Roth, assortative mating, Bernie Madoff, bitcoin, Black Swan, blockchain, business cycle, Capital in the Twenty-First Century by Thomas Piketty, Checklist Manifesto, computer age, corporate governance, correlation does not imply causation, cuban missile crisis, deliberate practice, discrete time, distributed ledger, en.wikipedia.org, Estimating the Reproducibility of Psychological Science, Everything should be made as simple as possible, experimental economics, first-price auction, Flash crash, Geoffrey West, Santa Fe Institute, germ theory of disease, Gini coefficient, High speed trading, impulse control, income inequality, Isaac Newton, John von Neumann, Kenneth Rogoff, knowledge economy, knowledge worker, Long Term Capital Management, loss aversion, low skilled workers, Mark Zuckerberg, market design, meta analysis, meta-analysis, money market fund, Nash equilibrium, natural language processing, Network effects, p-value, Pareto efficiency, pattern recognition, Paul Erdős, Paul Samuelson, phenotype, pre–internet, prisoner's dilemma, race to the bottom, random walk, randomized controlled trial, Richard Feynman, Richard Thaler, school choice, sealed-bid auction, second-price auction, selection bias, six sigma, social graph, spectrum auction, statistical model, Stephen Hawking, Supply of New York City Cabdrivers, The Bell Curve by Richard Herrnstein and Charles Murray, The Great Moderation, The Rise and Fall of American Growth, the rule of 72, the scientific method, The Spirit Level, The Wisdom of Crowds, Thomas Malthus, Thorstein Veblen, urban sprawl, value at risk, web application, winner-take-all economy, zero-sum game

Sign, Significance, and Magnitude Linear regression tells us the following about coefficients of independent variables: Sign: The correlation, positive or negative, between the independent variable and the dependent variable. Significance (p-value): The probability that the sign on the coefficient is nonzero. Magnitude: The best estimate of the coefficient of the independent variable. In a single-variable regression, the closer fit to the line and the more data, the more confidence we can place in the sign and magnitude of the coefficient. Statisticians characterize the significance of a coefficient using its p-value, which equals the probability, based on the regression, that the coefficient is not zero. A p-value of 5% means a one-in-twenty chance that the data were generated by a process where the coefficient equals zero. The standard thresholds for significance are 5% (denoted by *) and 1% (denoted by **).
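As a concrete (and entirely hypothetical) illustration of sign, significance, and magnitude, the snippet below runs a single-variable regression on simulated data and applies the 5% and 1% thresholds just described.

# Single-variable regression: report the sign, magnitude, and p-value of the
# estimated coefficient, with conventional significance stars. Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 200)
y = 2.5 * x + rng.normal(0, 4, 200)        # true coefficient 2.5 plus noise

result = stats.linregress(x, y)
sign = "positive" if result.slope > 0 else "negative"
stars = "**" if result.pvalue < 0.01 else "*" if result.pvalue < 0.05 else ""
print(sign, round(result.slope, 2), stars, "p-value:", result.pvalue)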

payoffs in, 261 probabilities contact, 135 diffusion, 135 sharing, 135 transition, 190, 191 product competition, hybrid model of, 238–240 production function, 101 program trading, 225 property rights, 104 proposer effects, 236 (fig.) prospect theory, defining, 52 psychological biases, in rational-actor model, 51–53 public goods, 272–275 public projects decision problems, 292 mechanisms for, 292–294 pure coordination games, 174 pure exchange economies, 186–187 p-value, 85–86 quality and degree network formation, 123 quantity, 30 quantum computing, 80 Race to the Bottom, 2, 181, 182 radial symmetry, 233 random, 147 random friends, 125, 126 random mixing, 135 random networks, 122 (fig.) Monte Carlo method for, 121 random walk models, 155–158 efficient markets and, 159–161 network size and, 158–159, 158 (fig.) normal, 156 simple, 155, 156 (fig.) rational actors, 43, 45, 56 rational choice arguments for, 50 benchmarks and, 50 consistency and, 50 learning and, 50 stakes and, 50 uniqueness and, 50 rational-actor model, 10, 11 beliefs in, 48 benchmarks in, 51 completeness in, 49 consistency in, 51 of consumption, 48 continuity in, 49 defining, 47–48 independence in, 49 psychological biases in, 51–53 transitivity in, 49 rationality, 45 individual, 293 realism, 14 messiness and, 50 reason, in REDCAPE, 15–18 rectangular grid network, 139 REDCAPE, 13, 355 communication in, 20–21 defining, 15 design in, 20 explanation and, 19 exploration and, 24–25 prediction and, 23—24 reason in, 15–18 regression line, 85 (fig.)


Exploring Everyday Things with R and Ruby by Sau Sheong Chang

Alfred Russel Wallace, bioinformatics, business process, butterfly effect, cloud computing, Craig Reynolds: boids flock, Debian, Edward Lorenz: Chaos theory, Gini coefficient, income inequality, invisible hand, p-value, price stability, Ruby on Rails, Skype, statistical model, stem cell, Stephen Hawking, text mining, The Wealth of Nations by Adam Smith, We are the 99%, web application, wikimedia commons

Without going in depth into the mathematics of this test (which would probably fill up a whole section, if not an entire chapter, on its own), let’s examine the initial population by assuming that the population is normally distributed and running the Shapiro-Wilk test on it: > data <- read.table("money.csv", header=F, sep=",") > row <- as.vector(as.matrix(data[1,])) > row [1] 56 79 66 74 96 54 91 59 70 95 65 82 64 80 63 68 69 69 72 89 64 53 87 49 [47] 68 66 80 89 57 73 72 82 76 58 57 78 94 73 83 52 75 71 52 57 76 59 63 ... > shapiro.test(row) Shapiro-Wilk normality test data: row W = 0.9755, p-value = 0.3806 > As you can see, the p-value is 0.3806, which (on a a scale of 0.0 to 1.0) is not small, and therefore the null hypothesis is not rejected. The null hypothesis is that of no change (i.e., the assumption that the distribution is normal). Strictly speaking, this doesn’t really prove that the distribution is normal, but a visual inspection of the first histogram chart in Figure 8-3 tells us that the likelihood of a normal distribution is high.
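For readers working outside R, the same normality check runs in one line of Python; this sketch uses simulated data rather than the book's money.csv file.

# Python counterpart of the shapiro.test call above, on stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
row = rng.normal(70, 12, 60)      # stand-in for one row of the money table

w_stat, p_value = stats.shapiro(row)
print(w_stat, p_value)            # a large p-value fails to reject normality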


pages: 1,065 words: 229,099

Real World Haskell by Bryan O'Sullivan, John Goerzen, Donald Stewart, Donald Bruce Stewart

bash_history, database schema, Debian, distributed revision control, domain-specific language, en.wikipedia.org, Firefox, general-purpose programming language, Guido van Rossum, job automation, Larry Wall, lateral thinking, p-value, plutocrats, Plutocrats, revision control, sorting algorithm, transfer pricing, type inference, web application

With this p_series function, parsing an array is simple: -- file: ch16/JSONParsec.hs p_array :: CharParser () (JAry JValue) p_array = JAry <$> p_series '[' p_value ']' Dealing with a JSON object is hardly more complicated, requiring just a little additional effort to produce a name/value pair for each of the object’s fields: -- file: ch16/JSONParsec.hs p_object :: CharParser () (JObj JValue) p_object = JObj <$> p_series '{' p_field '}' where p_field = (,) <$> (p_string <* char ':' <* spaces) <*> p_value Parsing an individual value is a matter of calling an existing parser, and then wrapping its result with the appropriate JValue constructor: -- file: ch16/JSONParsec.hs p_value :: CharParser () JValue p_value = value <* spaces where value = JString <$> p_string <|> JNumber <$> p_number <|> JObject <$> p_object <|> JArray <$> p_array <|> JBool <$> p_bool <|> JNull <$ string "null" <?> "JSON value" p_bool :: CharParser () Bool p_bool = True <$ string "true" <|> False <$ string "false" The choice combinator allows us to represent this kind of ladder-of-alternatives as a list. It returns the result of the first parser to succeed: -- file: ch16/JSONParsec.hs p_value_choice = value <* spaces where value = choice [ JString <$> p_string , JNumber <$> p_number , JObject <$> p_object , JArray <$> p_array , JBool <$> p_bool , JNull <$ string "null" ] <?> "JSON value" This leads us to the two most interesting parsers, for numbers and strings. We’ll deal with numbers first, since they’re simpler: -- file: ch16/JSONParsec.hs p_number :: CharParser () Double p_number = do s <- getInput case readSigned readFloat s of [(n, s')] -> n <$ setInput s' _ -> empty Our trick here is to take advantage of Haskell’s standard number parsing library functions, which are defined in the Numeric module.


pages: 273 words: 72,024

Bitcoin for the Befuddled by Conrad Barski

Airbnb, AltaVista, altcoin, bitcoin, blockchain, buttonwood tree, cryptocurrency, Debian, en.wikipedia.org, Ethereum, ethereum blockchain, fiat currency, Isaac Newton, MITM: man-in-the-middle, money: store of value / unit of account / medium of exchange, Network effects, node package manager, p-value, peer-to-peer, price discovery process, QR code, Satoshi Nakamoto, self-driving car, SETI@home, software as a service, the payments system, Yogi Berra

Let’s first choose the same elliptic curve that Bitcoin uses, which is called a Koblitz curve (Figure 7-7), using the parameters a = 0 and b = 7. Figure 7-7: A Koblitz curve We then choose a prime modulo p so that the elliptic curve satisfies this equation: y^2 = x^3 + ax + b (mod p) NOTE In this type of math notation, the modulo operation is performed after the additions so first you calculate x^3 + ax + b and then you perform mod p on the result. Bitcoin uses a very large p value (specifically p = 2^256 − 2^32 − 2^9 − 2^8 − 2^7 − 2^6 − 2^4 − 1), which is important for cryptographic strength, but we can use a smaller number to illustrate how “driving around on integer-valued points on a Koblitz curve” works. Let’s choose p = 67. In fact, many curves satisfy the modular equation (namely, every curve where p is added to or subtracted from the b parameter any number of times; see the left-hand chart in Figure 7-8), and from those curves we can use all of the points that have integer-valued coordinates (shown in Figure 7-8 as dots).

Q = d × G = 13 × (5,47) = (7,22) (see Figure 7-10) * A clever way to generate a seemingly random but memorable private key is by coming up with a passphrase (i.e., Crowley and Satoshi sitting in a tree) and feeding it into a cryptographic hash function, which outputs an integer. This is called using a brainwallet. Because there are just slightly fewer than 2^256 points on the curve Bitcoin uses (because the p value is much higher than the one we are using), brainwallets can use the SHA256 hash function (due to its 256-bit output). Figure 7-10: Here are the 13 points we “drive through” as we point multiply to create a digital signature. Now let’s look at how we sign messages with our private and public keys (or Bitcoin transactions): The receiver of our message will need all the values we have calculated so far except the private key, namely p, a, b, G, n, and Q, in order to verify that the signature is valid.
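The claim that Q = d × G = 13 × (5,47) = (7,22) can be checked directly with textbook point addition on the toy curve y^2 = x^3 + 7 (mod 67). The sketch below is a bare-bones illustration, not production ECDSA code.

# Point multiplication on the toy Koblitz curve y^2 = x^3 + 7 (mod 67) from the
# text, with generator G = (5, 47) and private key d = 13. Illustration only;
# real Bitcoin software uses the full secp256k1 parameters.
P_MOD, A = 67, 0

def add(P, Q):
    # Add two affine points on the curve (neither being the point at infinity).
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def multiply(d, G):
    # Compute d * G by repeated addition (fine for a tiny d like 13).
    result = G
    for _ in range(d - 1):
        result = add(result, G)
    return result

print(multiply(13, (5, 47)))   # prints (7, 22), matching Q in the text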


pages: 416 words: 39,022

Asset and Risk Management: Risk Oriented Finance by Louis Esch, Robert Kieffer, Thierry Lopez

asset allocation, Brownian motion, business continuity plan, business process, capital asset pricing model, computer age, corporate governance, discrete time, diversified portfolio, fixed income, implied volatility, index fund, interest rate derivative, iterative process, P = NP, p-value, random walk, risk/return, shareholder value, statistical model, stochastic process, transaction costs, value at risk, Wiener process, yield curve, zero-coupon bond

In the same way, the parameter VaR* is calculated simply, for a normal distribution, VaR*_q = −z_q · σ(p_t). The values of z_q are found in the normal distribution tables.7 A few examples of these values are given in Table 6.2. This shows that the expression

Table 6.2 Normal distribution quantiles
q:    0.500   0.600   0.700   0.800   0.850   0.900   0.950   0.960   0.970   0.975   0.980   0.985   0.990   0.995
z_q:  0.0000  0.2533  0.5244  0.8416  1.0364  1.2816  1.6449  1.7507  1.8808  1.9600  2.0537  2.1701  2.3263  2.5758

6 Jorion P., Value At Risk, McGraw-Hill, 2001.
7 Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 118.

Example If a security gives an average profit of 100 over the reference period with a standard deviation of 80, we have E(p_t) = 100 and σ(p_t) = 80, which allows us to write: VaR_0.95 = 100 − (1.6449 × 80) = −31.6; VaR_0.975 = 100 − (1.9600 × 80) = −56.8; VaR_0.99 = 100 − (2.3263 × 80) = −86.1. The loss incurred by this security will therefore exceed 31.6 (56.8 and 86.1 respectively) only five times (2.5 times and once respectively) in 100 times.
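The z_q values in Table 6.2, and the worked VaR figures, can be reproduced with any statistics library; a quick check in Python (not part of the book):

# Normal quantiles and the example VaR numbers (mean profit 100, standard
# deviation 80) for the 95%, 97.5% and 99% levels.
from scipy import stats

for q in (0.95, 0.975, 0.99):
    z_q = stats.norm.ppf(q)
    print(q, round(z_q, 4), round(100 - z_q * 80, 1))
# z_q comes out as 1.6449, 1.96, 2.3263 and VaR as -31.6, -56.8, -86.1, as in the text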

Factor 3 Systematic risk of the portfolio Variable A Variable C Variable B Factor 2 Variable D Factor 1 Figure 11.5 Independent allocation Institutional Management: APT Applied to Investment Funds 289 APT – factor 3 Systematic risk of the portfolio Growth Not explained Value APT – factor 2 APT – factor 1 Figure 11.6 Joint allocation 11.4.2 Joint allocation: ‘value’ and ‘growth’ example As the systematic risk of the portfolio is expressed by its APT factor-sensitivity vector, it can be broken down into the explicative variables ‘growth’ and ‘value’, representing the S&P Value and the S&P Growth (Figure 11.6). One cannot, however, be content with projecting the portfolio risk vector onto each of the variables. In fact, the ‘growth’ and ‘value’ variables are not necessarily independent statistically. They cannot therefore be represented by geometrically orthogonal variables. It is in fact essential to project the portfolio risk vector perpendicularly onto the space of the vectors of the variables.

., Mathematics of Physics and Modern Engineering, McGrawHill, 1966. CHAPTER 6 Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80. Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Johnson N. L. and Kotz S., Continuous Univariate Distribution, John Wiley & Sons, Inc, 1970. Jorion P., Value at Risk, McGraw-Hill, 2001. Pearson E. S. and Hartley H. O., Biometrika Tables for Students, Biometrika Trust, 1976. CHAPTER 7 Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972. Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1995. 386 Bibliography Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA, undated.


pages: 360 words: 85,321

The Perfect Bet: How Science and Math Are Taking the Luck Out of Gambling by Adam Kucharski

Ada Lovelace, Albert Einstein, Antoine Gombaud: Chevalier de Méré, beat the dealer, Benoit Mandelbrot, butterfly effect, call centre, Chance favours the prepared mind, Claude Shannon: information theory, collateralized debt obligation, correlation does not imply causation, diversification, Edward Lorenz: Chaos theory, Edward Thorp, Everything should be made as simple as possible, Flash crash, Gerolamo Cardano, Henri Poincaré, Hibernia Atlantic: Project Express, if you build it, they will come, invention of the telegraph, Isaac Newton, Johannes Kepler, John Nash: game theory, John von Neumann, locking in a profit, Louis Pasteur, Nash equilibrium, Norbert Wiener, p-value, performance metric, Pierre-Simon Laplace, probability theory / Blaise Pascal / Pierre de Fermat, quantitative trading / quantitative finance, random walk, Richard Feynman, Ronald Reagan, Rubik’s Cube, statistical model, The Design of Experiments, Watson beat the top human players on Jeopardy!, zero-sum game

When Pearson compared the length of runs of different colors with the frequencies that he’d expect if the wheels were random, something looked wrong. Runs of two or three of the same color were scarcer than they should have been. And runs of a single color—say, a black sandwiched between two reds—were far too common. Pearson calculated the probability of observing an outcome at least as extreme as this one, assuming that the roulette wheel was truly random. This probability, which he dubbed the p value, was tiny. So small, in fact, that Pearson said that even if he’d been watching the Monte Carlo tables since the start of Earth’s history, he would not have expected to see a result that extreme. He believed it was conclusive evidence that roulette was not a game of chance. The discovery infuriated him. He’d hoped that roulette wheels would be a good source of random data and was angry that his giant casino-shaped laboratory was generating unreliable results.

As the ball traveled around the rim a dozen or so times, he gathered enough information to make predictions about where it would land. He only had time to run the experiment twenty-two times before he had to leave the office. Out of these attempts, he predicted the correct number three times. Had he just been making random guesses, the probability he would have got at least this many right (the p value) was less than 2 percent. This persuaded him that the Eudaemons’ strategy worked. It seemed that roulette really could be beaten with physics. Having tested the method by hand, Small and Tse set up a high-speed camera to collect more precise measurements about the ball’s position. The camera took photos of the wheel at a rate of about ninety frames per second. This made it possible to explore what happened after the ball hit a deflector.
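The quoted figure is a plain binomial tail probability. Assuming a 37-pocket European wheel (my assumption; a 38-pocket American wheel gives a slightly smaller number), the check takes two lines:

# Probability of at least 3 correct number predictions in 22 spins by pure
# guessing, assuming 37 equally likely pockets.
from scipy import stats

p_value = stats.binom.sf(2, n=22, p=1 / 37)   # P(X >= 3)
print(p_value)                                # roughly 2 percent, in line with the text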


pages: 288 words: 81,253

Thinking in Bets by Annie Duke

banking crisis, Bernie Madoff, Cass Sunstein, cognitive bias, cognitive dissonance, Daniel Kahneman / Amos Tversky, delayed gratification, Donald Trump, en.wikipedia.org, endowment effect, Estimating the Reproducibility of Psychological Science, Filter Bubble, hindsight bias, Jean Tirole, John Nash: game theory, John von Neumann, loss aversion, market design, mutually assured destruction, Nate Silver, p-value, phenotype, prediction markets, Richard Feynman, ride hailing / ride sharing, Stanford marshmallow experiment, Stephen Hawking, Steven Pinker, the scientific method, The Signal and the Noise by Nate Silver, urban planning, Walter Mischel, Yogi Berra, zero-sum game

Expressing the belief as uncertain signals to our listeners that the belief needs further vetting, that step three is still in progress. When scientists publish results of experiments, they share with the rest of their community their methods of gathering and analyzing the data, the data itself, and their confidence in that data. That makes it possible for others to assess the quality of the information being presented, systematized through peer review before publication. Confidence in the results is expressed through both p-values, the probability one would expect to get the result that was actually observed (akin to declaring your confidence on a scale of zero to ten), and confidence intervals (akin to declaring ranges of plausible alternatives). Scientists, by institutionalizing the expression of uncertainty, invite their community to share relevant information and to test and challenge the results and explanations. The information that gets shared back might confirm, disconfirm, or refine published hypotheses.

., 20–23, 80, 244n decisions in, 116, 167, 179, 180, 188, 196–98 diversity of opinions and, 139 learning and, 78 long hours of playing, 188–89 loss limits in, 136–37, 187 napkin list of hands in, 101–2, 161–62 possible futures and, 211 scoreboard in, 196 seminars on, 167 six and seven of diamonds in, 53, 59–60, 121 strategic plans and long view in, 179, 180, 200 strategy group for, 124, 126–27, 131, 133–34, 136–37, 155, 167, 174 suited connectors in, 53–54 Texas Hold’em, 53 tilt in, 197–98 time constraints in, 179 tournaments, 241n watching in, 97 workshopping in, 158–59 political beliefs, 63–64, 141–45, 162–63, 205 social psychologists and, 145–47 Pollan, Michael, 85 pollsters, 32, 230–31, 245n Poundstone, William, 19, 246n Powell, Justice, 143 Power of Habit, The (Duhigg), 106–7 Pratt, Spencer, 119–20 precommitments (Ulysses contracts), 200–203, 212, 221 decision swear jar, 204–7 Predictably Irrational (Ariely), 89n prediction markets, 149–50 premortems, 221–26 president-firing decision, 8–11, 33, 43, 48, 158, 229–30 presidential election of 2016, 32–33, 61n, 230–31, 245n Princess Bride, The, 23–26, 244n Princeton Alumni Weekly, 57 Princeton-Dartmouth football game, 56–59 Prisoner’s Dilemma (Poundstone), 19, 246n privacy, 157 Prospect Theory, 36 Prudential Retirement, 185 psychology, 145–47, 149 Pulitzer, Joseph, 60 p-values, 72 Rashomon, 157 Rashomon Effect, 157–58 rationality and irrationality, 11, 43, 51, 64, 181n, 183, 204 Ulysses contracts and, 201, 203 words, phrases, and thoughts that signal irrationality, 204–7 rats, 87 reconnaissance, 207–12, 218 red teams, 140, 170–71 Reese, Chip, 244n reflexive mind, 12–14, 16, 181n regret, 186–89, 212, 225, 230 Rehnquist, Justice, 143 Reiner, Rob, 244n relationships, 195, 196, 199, 223 relocating, 38–43, 45, 46 Reproducibility Project: Psychology, 149–50 resulting, 7–11, 26, 166 Rethinking Positive Thinking: Inside the New Science of Motivation (Oettingen), 223 retirement, 182, 184–86, 203 Righteous Mind, The: Why Good People Are Divided by Politics and Religion (Haidt), 129–30 risk, 20, 34, 39, 42–44, 46–47, 66, 111, 179 Roberts, Justice, 143 Russo, J.


pages: 506 words: 152,049

The Extended Phenotype: The Long Reach of the Gene by Richard Dawkins

Alfred Russel Wallace, assortative mating, Douglas Hofstadter, Drosophila, epigenetics, Gödel, Escher, Bach, impulse control, Menlo Park, Necker cube, p-value, phenotype, quantitative trading / quantitative finance, selection bias, stem cell

There is a whole family of ‘mixed strategies’ of the form ‘Dig with probability p, enter with probability 1 – p’, and only one of these is the ESS. I said that the two extremes were joined by a continuum. I meant that the stable population frequency of digging, p* (70 per cent or whatever it is), could be achieved by any of a large number of combinations of pure and mixed individual strategies. There might be a wide distribution of p values in individual nervous systems in the population, including some pure diggers and pure enterers. But, provided the total frequency of digging in the population is equal to the critical value p*, it would still be true that digging and entering were equally successful, and natural selection would not act to change the relative frequency of the two subroutines in the next generation. The population would be in an evolutionarily stable state.

Classify all individuals into those that entered with a probability less than 0.1, those that entered with a probability between 0.1 and 0.2, those with a probability between 0.2 and 0.3, between 0.3 and 0.4, 0.4 and 0.5, etc. Then compare the lifetime reproductive successes of wasps in the different classes. But supposing we did this, exactly what would the ESS theory predict? A hasty first thought is that those wasps with a p value close to the equilibrium p* should enjoy a higher success score than wasps with some other value of p: the graph of success against p should peak at an ‘optimum’ at p*. But p* is not really an optimum value, it is an evolutionarily stable value. The theory expects that, when p* is achieved in the population as a whole, digging and entering should be equally successful. At equilibrium, therefore, we expect no correlation between a wasp’s digging probability and her success.

The theory gives us no particular reason to expect that there should be any such variation. Indeed, the analogy with sex ratio theory just mentioned gives positive grounds for expecting that wasps should not vary in digging probability. In accordance with this, a statistical test on the actual data revealed no evidence of inter-individual variation in digging tendency. Even if there were some individual variation, the method of comparing the success of individuals with different p values would have been a crude and insensitive one for comparing the success rates of digging and entering. This can be seen by an analogy. An agriculturalist wishes to compare the efficacy of two fertilizers, A and B. He takes ten fields and divides each of them into a large number of small plots. Each plot is treated, at random, with either A or B, and wheat is sown in all the plots of all the fields.


pages: 447 words: 104,258

Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues by Alain Ruttiens

algorithmic trading, asset allocation, asset-backed security, backtesting, banking crisis, Black Swan, Black-Scholes formula, Brownian motion, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discounted cash flows, discrete time, diversification, fixed income, implied volatility, interest rate derivative, interest rate swap, margin call, market microstructure, martingale, p-value, passive investing, quantitative trading / quantitative finance, random walk, risk/return, Satyajit Das, Sharpe ratio, short selling, statistical model, stochastic process, stochastic volatility, time value of money, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-coupon bond

In particular: at A: the portfolio is 100% invested in the risk-free rate; at B: 100% investment in an efficient portfolio of stocks; between A and B: mixed portfolio, invested at x% in the risk-free rate and (1 − x)% in the efficient portfolio of stocks; beyond B: leveraged portfolio, assuming the investor has borrowed money (at the rf rate) and has then invested >100% of his available resources in an efficient portfolio. For a given investor, characterized by some utility function U, representing his well-being, assuming his wealth as a portfolio P, if the portfolio return were certain (i.e., deterministic), we would have but, more realistically (even if simplified, in the spirit of this theory), if the portfolio P value is normally distributed in returns, with some rP and σP, where f is some function, often considered as a quadratic curve.4 So that, given the property of the CML (i.e., tangent to the efficient frontier), and some U = f(P) curve, the optimal portfolio must be located at the tangent of U to CML, determining the adequate proportion between B and risk-free instrument. To illustrate this, let us compare the case of two investors, Investor #1, with utility function U1, being more risk averse than Investor #2, with utility function U2 (see Figure 4.10).

Practically speaking, the number of previous terms of the series (here, arbitrarily, 18 terms) should have to be optimized, and the parameter a updated for the successive forecasts. Moreover, if the data present irregularities in their succession (changes of trends, mean reversion, etc.), the AR process is unable to incorporate such phenomena and works poorly. The generalized form of the previous case, in order to forecast rt as a function of more than its previous observed value, can be represented as follows: This is called an AR(p) process, involving the previous p values of the series. There is no rule for determining p, provided it is not excessive (by application of the “parcimony principle”). The above relationship looks like a linear regression, but instead of regressing according to a series of independent variables, this regression uses previous values of the dependent variable itself, hence the “autoregression” name. 9.2 THE MOVING AVERAGE (MA) PROCESS Let us consider a series of returns consisting in pure so-called “random numbers” {t}, i.i.d., generally distributed following a normal distribution.
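Since the AR(p) relationship is just a regression of the series on its own lags, it can be fitted with ordinary least squares. The sketch below (simulated data, p = 2 chosen arbitrarily) estimates the coefficients and produces a one-step-ahead forecast.

# Fit an AR(2) by regressing r_t on r_{t-1} and r_{t-2}, then forecast the next
# value. The series is simulated; p = 2 is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(5)
n = 500
r = np.zeros(n)
for t in range(2, n):
    r[t] = 0.5 * r[t - 1] - 0.3 * r[t - 2] + rng.normal(0, 1)

p = 2
X = np.column_stack([np.ones(n - p)] + [r[p - k : n - k] for k in range(1, p + 1)])
y = r[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # constant, phi_1, phi_2

forecast = coeffs[0] + coeffs[1] * r[-1] + coeffs[2] * r[-2]
print(coeffs, forecast)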


Social Capital and Civil Society by Francis Fukuyama

Berlin Wall, blue-collar work, Fall of the Berlin Wall, feminist movement, Francis Fukuyama: the end of history, George Akerlof, German hyperinflation, Jane Jacobs, Joseph Schumpeter, Kevin Kelly, labor-force participation, low skilled workers, p-value, Pareto efficiency, postindustrial economy, principal–agent problem, RAND corporation, Silicon Valley, The Death and Life of Great American Cities, transaction costs, World Values Survey

However, it is possible for a group to have an r_p coefficient larger than 1. To take the earlier example of the religious sect that encourages honesty and reliability, if these traits are demanded of its members not just in their dealings with other members of the sect but generally in their dealings with other people, then there will be a positive spillover effect into the larger society. Again, Weber argued in effect that sectarian Puritans had an r_p value greater than 1. The final factor affecting a society’s supply of social capital concerns not the internal cohesiveness of groups, but rather the way in which they relate to outsiders. Strong moral bonds within a group in some cases may actually serve to decrease the degree to which members of that group are able to trust outsiders and work effectively with them. A highly disciplined, well-organized group sharing strong common values may be capable of highly coordinated collective action, and yet may nonetheless be a social liability.


pages: 407 words: 104,622

The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution by Gregory Zuckerman

affirmative action, Affordable Care Act / Obamacare, Albert Einstein, Andrew Wiles, automated trading system, backtesting, Bayesian statistics, beat the dealer, Benoit Mandelbrot, Berlin Wall, Bernie Madoff, blockchain, Brownian motion, butter production in bangladesh, buy and hold, buy low sell high, Claude Shannon: information theory, computer age, computerized trading, Credit Default Swap, Daniel Kahneman / Amos Tversky, diversified portfolio, Donald Trump, Edward Thorp, Elon Musk, Emanuel Derman, endowment effect, Flash crash, George Gilder, Gordon Gekko, illegal immigration, index card, index fund, Isaac Newton, John Meriwether, John Nash: game theory, John von Neumann, Loma Prieta earthquake, Long Term Capital Management, loss aversion, Louis Bachelier, mandelbrot fractal, margin call, Mark Zuckerberg, More Guns, Less Crime, Myron Scholes, Naomi Klein, natural language processing, obamacare, p-value, pattern recognition, Peter Thiel, Ponzi scheme, prediction markets, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Renaissance Technologies, Richard Thaler, Robert Mercer, Ronald Reagan, self-driving car, Sharpe ratio, Silicon Valley, sovereign wealth fund, speech recognition, statistical arbitrage, statistical model, Steve Jobs, stochastic process, the scientific method, Thomas Bayes, transaction costs, Turing machine

Laufer’s work also showed that, if markets moved higher late in a day, it often paid to buy futures contracts just before the close of trading and dump them at the market’s opening the next day. The team uncovered predictive effects related to volatility, as well as a series of combination effects, such as the propensity of pairs of investments—such as gold and silver, or heating oil and crude oil—to move in the same direction at certain times in the trading day compared with others. It wasn’t immediately obvious why some of the new trading signals worked, but as long as they had p-values, or probability values, under 0.01—meaning they appeared statistically significant, with a low probability of being statistical mirages—they were added to the system. Wielding an array of profitable investing ideas wasn’t nearly enough, Simons soon realized. “How do we pull the trigger?” he asked Laufer and the rest of the team. Simons was challenging them to solve yet another vexing problem: Given the range of possible trades they had developed and the limited amount of money that Medallion managed, how much should they bet on each trade?

.), 207 Morgan, Howard, 56 Morgan Stanley, 129–33, 157, 166, 211, 256 Moscow State University, 236 moving averages, 73 Muller, Peter, 256, 299 multidimensional anomalies, 273 Murdoch, Rupert, xvii Murphy, John, 96 Musk, Elon, xvii mutual funds, 161–64, 172, 309–10 My Life as a Quant (Derman), 126 NASA, 93 Nasar, Sylvia, 90 Nasdaq’s dot-com crash, 215–17, 257–58 Nash, John, 89–90 National Museum of Mathematics, 262 National Rifle Association (NRA), 275 National Security Agency (NSA), 23–24, 31, 208 National Youth Science Camp, 170 Nepal, 239, 240 Neuwirth, Lee, 25, 26, 30–31, 46 Newman, Paul, 128 news flashes, 221–22 Newton, Isaac, 27 Newton High School, 13 New York City Fire Department, 168 New York Mercantile Exchange, 58 New York Stock Exchange, 211, 212 New York Times, 31–32, 76, 99, 126, 172, 281, 282, 293 Nick Simons Institute, 240 Nobel Prize, 33, 152, 209 noncompete agreements, 133, 201, 238, 241, 252–53 nondisclosure agreements, xv–xvi, 133, 201, 238, 241, 252–53 nonrandom trading effects, 143–44 Norris, Floyd, 126 Nova Fund, 167, 188–89 number theory, 34, 69–70 Obama, Barack, 276 Ohio State University, 275 Olsen, Greg, 79–80, 96–97 One Up on Wall Street (Lynch), 163 “On the Transitivity of Holonomy Systems” (Simons), 20 Open Marriage (O’Neill), 36 origins of the universe, xviii, 287, 323–26, 350 OSHA (Occupational Safety and Health Administration), 234 Oswald Veblen Prize, 38 Owl’s Nest, 228, 275, 288–89, 295 Pacific Investment Management Company (PIMCO), 163–64, 309 PaineWebber, 155–56 pairs trade, 129–30, 272 Paloma Partners, 138 partial differential equations (PDEs), 21, 26–28 pattern analysis, 5, 24, 45, 57, 123–24 Patterson, Nick background of, 147–48 at IDA, 148 Patterson, Nick, at Renaissance, xv, 145–50, 202 Brown and Mercer, 169, 179–80, 231 departure, 238 LTCM collapse and, 212–13 recruitment of, 168–69 tech bubble, 215–17 trading models, 149–50, 153, 193, 198 Paulson, John, 263–64, 309 PDT Partners, 258, 299 peer pressure, 200 Peled, Abe, 178 Pellegrini, Paolo, 263–64 Penavic, Kresimir, 145, 153 Pence, Mike, 285 Pepsi, 129–30, 272 Perl, 155 “Piggy Basket,” 57–59 Plateau, Joseph, 27 points, 190 poker, 15, 18, 25, 29, 69, 94, 127, 163 polynomials, 93 pool operator, 86 portfolio insurance, 126 portfolio theory, 30, 92 presidential election of 2016, xviii, 279–91, 294–95, 302 presidential election of 2020, 304–5 primal therapy, 36–37 Primerica, 123 Princeton/Newport Partners, 128 Princeton University, 28, 31, 37, 82, 141 Priorities USA, 283 “Probabilistic Models for and Prediction of Stock Market Behavior” (Simons), 28–30 Procter & Gamble, 132 programming language, 155, 191–92, 233–34 p-values, 144 Qatar, 261–62 quantitative trading, 30, 39, 61, 124, 126–27, 211–12, 256, 308–15 quants, xvii, 126–27, 199, 204, 256 Quantum Fund, 164–65, 333 racism, 13–14, 278, 294, 295–96, 303 Rand, Ayn, 277 Reagan, Ronald, 65, 105 Recession of 1969–1970, 123 regression line, 83–84 Reichardt, Louis, 323 Renaissance Institutional Diversified Alpha Fund, 319 Renaissance Institutional Diversified Global Equity Fund, 319 Renaissance Institutional Equities Fund (RIEF), 246–52, 254, 255, 257–61, 264–65, 271, 284, 300, 316, 319 Renaissance Institutional Futures Fund (RIFF), 252, 265, 271 Renaissance Riviera, 227–28 Renaissance Technologies Corporation Ax and Straus establish Axcom, 78–83 Ax joins, 51–52 Ax’s departure, 102–3 Baum joins, 45–46, 49 Baum’s departure, 63–64 Berlekamp’s departure, 117–18 Brown and Mercer join, 169, 179–80 compensation, 200–201, 227, 228–29, 233 expansion into stock 
investing, 157–58 financial crisis of 2007–2008, 255–62, 263–64 GAM Investments, 153–54 headquarters, 186, 205 hiring and interview process, 202–3, 233 Laufer joins, 109, 141–44 Mercer and political blowback, 291–305 Mercer steps down as co-CEO, 301–2, 319 name change to, 61 nondisclosure agreements, xv–xvi, 133, 201, 238, 241, 252–53 Straus’s departure, 158 tax avoidance investigation of 2014, 226–27 “the Sheiks,” 156–57 timeline of key events, xii trading models, 138–40, 156–57, 161, 203–5, 212–13, 221–22, 272–74 Volfbeyn and Belopolsky, 238, 241, 242, 252–54 Reserve Primary Fund, 172–73 Resnik, Phil, 176 retracements, 203–4 reversion trading strategy, 95–96 Revolution Books, 133–34 Riemann hypothesis, 65 Rival, Anita, 140 Robertson, Julian, 217 Robert Wood Johnson Foundation, 249–50 Robinson, Arthur, 231, 276 Rockefeller, Nelson, 33, 71 rocket scientists, 126 Romney, Mitt, 279, 290 Rosenberg, Barr, 127 Rosenfeld, Eric, 209 Rosenshein, Joe, 16–17, 41 Rosinsky, Jacqueline, 168 Royal Bank of Bermuda, 51 Rubio, Marco, 279 Russian cryptography, 23–26, 46–49, 148 Russian financial crisis of 1998, 210 St.


pages: 936 words: 85,745

Programming Ruby 1.9: The Pragmatic Programmer's Guide by Dave Thomas, Chad Fowler, Andy Hunt

book scanning, David Heinemeier Hansson, Debian, domain-specific language, Jacquard loom, Kickstarter, p-value, revision control, Ruby on Rails, slashdot, sorting algorithm, web application

Fixnum values are stored as 31-bit numbers that are formed by shifting the original number left 1 bit and then setting the LSB, or least significant bit (bit 0), to 1. When VALUE is used as a pointer to a specific Ruby structure, it is guaranteed always to have an LSB of zero; the other immediate values also have LSBs of zero. Thus, a simple bit test can tell you whether you have a Fixnum. This test is wrapped in a macro, FIXNUM_P. Similar tests let you check for other immediate values.

FIXNUM_P(value)  → nonzero if value is a Fixnum
SYMBOL_P(value)  → nonzero if value is a Symbol
NIL_P(value)     → nonzero if value is nil
RTEST(value)     → nonzero if value is neither nil nor false

Several useful conversion macros for numbers as well as other standard data types are shown in Table 29.1 on the next page. The other immediate values (true, false, and nil) are represented in C as the constants Qtrue, Qfalse, and Qnil, respectively.

This is all part of Ruby duck typing conventions, described in more detail on pages 853 and 370. The StringValue method checks to see whether its operand is a String. If not, it tries to invoke to_str on the object, throwing a TypeError exception if it can’t. So, if you want to write some code that iterates over all the characters in a String object, you may write the following: Download samples/extruby_5.rb static VALUE iterate_over(VALUE original_str) { int i; char *p; VALUE str = StringValue(original_str); p = RSTRING_PTR(str); // may be null for (i = 0; i < RSTRING_LEN(str); i++, p++) { // process *p } return str; } Report erratum RUBY O BJECTS IN C 839 If you want to bypass the length and just access the underlying string pointer, you can use the convenience method StringValuePtr, which both resolves the string reference and then returns the C pointer to the contents.


pages: 436 words: 123,488

Overdosed America: The Broken Promise of American Medicine by John Abramson

germ theory of disease, Louis Pasteur, medical malpractice, medical residency, meta analysis, meta-analysis, p-value, placebo effect, profit maximization, profit motive, publication bias, RAND corporation, randomized controlled trial, selective serotonin reuptake inhibitor (SSRI), stem cell, Thomas Kuhn: the structure of scientific revolutions

But then, rather than addressing these serious complications, the authors dismissed them with a most unusual statement: “The difference in major cardiovascular events in the VIGOR trial [of Vioxx] may reflect the play of chance” (italics mine) because “the number of cardiovascular events was small (less than 70).” The comment that a statistically significant finding “may reflect the play of chance” struck me as very odd. Surely the experts who wrote the review article knew that the whole purpose of doing statistics is to determine the degree of probability and the role of chance. Anyone who has taken Statistics 101 knows that p values of .05 or less (p < .05) are considered statistically significant. In this case it means that if the VIGOR study were repeated 100 times, more than 95 of those trials would show that the people who took Vioxx had at least twice as many heart attacks, strokes, and death from any cardiovascular event than the people who took naproxen. And in more than 99 out of those 100 studies, the people who took Vioxx would have at least four times as many heart attacks as the people who took naproxen.
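As a concrete illustration of the kind of calculation that produces such a p-value, here is a minimal sketch using SciPy's Fisher exact test on made-up event counts (these numbers are illustrative assumptions only, not the VIGOR data): compare the number of patients with and without cardiovascular events in two treatment arms and read off the p-value.

# Hypothetical 2x2 table (NOT the VIGOR data): rows = treatment arms,
# columns = [had cardiovascular event, no event].
from scipy import stats

table = [[45, 3955],   # drug A: 45 events out of 4,000 patients (assumed)
         [20, 3980]]   # drug B: 20 events out of 4,000 patients (assumed)

odds_ratio, p_value = stats.fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.3g}")
# A p-value below .05 would conventionally be called statistically significant.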

FOOTNOTE *The standard way to determine whether a treatment has a significant effect is to calculate the probability that the observed difference in outcome (improvement or side effect) between the patients in the group that received the new treatment and the group that received the old treatment (or placebo) would have happened by chance if, in fact, the treatment really had no effect whatsoever. The conventional cutoff for determining statistical significance is a probability (p) of the observed difference between the groups occurring purely by chance less than 5 times out of 100 trials, or p < .05. This translates to: “the probability that this difference will occur at random is less than 5 chances in 100 trials.” The smaller the p value, the less likely it is that the difference between the groups happened by chance, and therefore the stronger—i.e., the more statistically significant—the finding. *The blood levels of all three kinds of cholesterol (total, LDL, and HDL) are expressed as “mg/dL,” meaning the number of milligrams of cholesterol present in one-tenth of a liter of serum (the clear liquid that remains after the cells have been removed from a blood sample).


pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang

AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, G4S, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K

Semantic similarities within and across columns of the table seem to be at the same level of strength; however, an objective measure would be necessary to quantify this impression. How can we estimate the statistical significance of cooccurrence of the same words in top portions of two lists in each row of Table 2? Here is one easy way to estimate p-values from above. Given the size of the English core, and assuming that each French-to-English translation is a “blind shot” into the English core (null-hypothesis), we can estimate the probability to find one and the same word in top-twelve portions of both lists: p ~ 2*12*12 / 8,236 = 0.035 (we included the factor 2, because there are two possible ways of aligning the lists with respect to each other4). Therefore, the p-value of the case of word repetition that we see in Table 2 is smaller than 0.035, at least. In conclusion, we have found significant correlations among sorted lists across languages for each of the three PCs.
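The bound quoted above is simple enough to check directly; a one-line computation reproduces it (8,236 is the size of the English core, 12 the top-ranked window, and the factor 2 the two possible alignments, all as stated in the text):

# Upper-bound estimate for hitting the same word in both top-12 lists
# under the null of a "blind shot" into an 8,236-word English core.
core_size = 8236
window = 12
p_upper_bound = 2 * window * window / core_size
print(round(p_upper_bound, 3))   # 0.035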


Meghnad Desai Marxian economic theory by Unknown

business cycle, commoditize, Corn Laws, full employment, land reform, means of production, p-value, price mechanism, profit motive

C' Commodity, usually output commodity; also called commodity capital.
L Labour; labour power as sold by the labourer and labour as expended during production.
MP Materials of production. L and MP together comprise C, which is the same as P.
P Productive capital.
c The difference between C' and C.
C Constant capital.
V Variable capital.
S Surplus value.
r = S/V Rate of surplus value.
g = C/(C+V) Organic composition of capital.
P (Value) rate of profit.
P Rate of profit (ambiguous as to whether money or value).
p (Money) rate of profit.
Y1 The value of output of Department I.
Y2 The value of output of Department II.
Y Total value of output.
P1 Price of the commodity produced by Department I.
P2 Price of the commodity produced by Department II.
P3 Price of the commodity produced by Department III.
R Total Profit.
In general subscript i stands for the ith Department; hence C1 is the true value of constant capital used in Department I.


Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernie Chan

algorithmic trading, asset allocation, automated trading system, backtesting, Black Swan, Brownian motion, business continuity plan, buy and hold, compound rate of return, Edward Thorp, Elliott wave, endowment effect, fixed income, general-purpose programming language, index fund, John Markoff, Long Term Capital Management, loss aversion, p-value, paper trading, price discovery process, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Ray Kurzweil, Renaissance Technologies, risk-adjusted returns, Sharpe ratio, short selling, statistical arbitrage, statistical model, survivorship bias, systematic trading, transaction costs

The following code fragment, however, tests for correlation between the two time series:

% A test for correlation.
dailyReturns=(adjcls-lag1(adjcls))./lag1(adjcls);
[R,P]=corrcoef(dailyReturns(2:end,:));
% R =
%     1.0000    0.4849
%     0.4849    1.0000
% P =
%     1    0
%     0    1
% The P value of 0 indicates that the two time series
% are significantly correlated.

Stationarity is not limited to the spread between stocks: it can also be found in certain currency rates. For example, the Canadian dollar/Australian dollar (CAD/AUD) cross-currency rate is quite stationary, both being commodities currencies. Numerous pairs of futures as well as fixed-income instruments can be found to be cointegrating as well.


pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron

Amazon Mechanical Turk, Anton Chekhov, combinatorial explosion, computer vision, constrained optimization, correlation coefficient, crowdsourcing, don't repeat yourself, Elon Musk, en.wikipedia.org, friendly AI, ImageNet competition, information retrieval, iterative process, John von Neumann, Kickstarter, natural language processing, Netflix Prize, NP-complete, optical character recognition, P = NP, p-value, pattern recognition, pull request, recommendation engine, self-driving car, sentiment analysis, SpamAssassin, speech recognition, stochastic process

Note Other algorithms work by first training the Decision Tree without restrictions, then pruning (deleting) unnecessary nodes. A node whose children are all leaf nodes is considered unnecessary if the purity improvement it provides is not statistically significant. Standard statistical tests, such as the χ2 test, are used to estimate the probability that the improvement is purely the result of chance (which is called the null hypothesis). If this probability, called the p-value, is higher than a given threshold (typically 5%, controlled by a hyperparameter), then the node is considered unnecessary and its children are deleted. The pruning continues until all unnecessary nodes have been pruned. Figure 6-3 shows two Decision Trees trained on the moons dataset (introduced in Chapter 5). On the left, the Decision Tree is trained with the default hyperparameters (i.e., no restrictions), and on the right the Decision Tree is trained with min_samples_leaf=4.
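Scikit-Learn does not expose this particular χ² post-pruning as a hyperparameter, but the statistical test the passage describes can be sketched with SciPy: given the class counts that end up in a node's two children, a χ² test of independence yields the p-value that would be compared with the pruning threshold. The counts below are made up for illustration.

# Hypothetical class counts in the two children of a candidate split:
# rows = [left child, right child], columns = [class 0, class 1].
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[30, 10],
                   [12, 28]])

chi2, p_value, dof, expected = chi2_contingency(counts)
threshold = 0.05  # typical pruning threshold mentioned in the text

if p_value > threshold:
    print(f"p = {p_value:.3f} > {threshold}: improvement may be chance, prune the node")
else:
    print(f"p = {p_value:.3f} <= {threshold}: keep the split")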

Pac-Man Using Deep Q-Learning min_after_dequeue, RandomShuffleQueue MNIST dataset, MNIST-MNIST model parallelism, Model Parallelism-Model Parallelism model parameters, Gradient Descent, Batch Gradient Descent, Early Stopping, Under the Hood, Quadratic Programming, Creating Your First Graph and Running It in a Session, Construction Phase, Training RNNsdefining, Model-based learning model selection, Model-based learning model zoos, Model Zoos model-based learning, Model-based learning-Model-based learning modelsanalyzing, Analyze the Best Models and Their Errors-Analyze the Best Models and Their Errors evaluating on test set, Evaluate Your System on the Test Set-Evaluate Your System on the Test Set moments, Adam Optimization Momentum optimization, Momentum optimization-Momentum optimization Monte Carlo tree search, Policy Gradients Multi-Layer Perceptrons (MLP), Introduction to Artificial Neural Networks, The Perceptron-Multi-Layer Perceptron and Backpropagation, Neural Network Policiestraining with TF.Learn, Training an MLP with TensorFlow’s High-Level API multiclass classifiers, Multiclass Classification-Multiclass Classification Multidimensional Scaling (MDS), Other Dimensionality Reduction Techniques multilabel classifiers, Multilabel Classification-Multilabel Classification Multinomial Logistic Regression (see Softmax Regression) multinomial(), Neural Network Policies multioutput classifiers, Multioutput Classification-Multioutput Classification MultiRNNCell, Distributing a Deep RNN Across Multiple GPUs multithreaded readers, Multithreaded readers using a Coordinator and a QueueRunner-Multithreaded readers using a Coordinator and a QueueRunner multivariate regression, Frame the Problem N naive Bayes classifiers, Multiclass Classification name scopes, Name Scopes natural language processing (NLP), Recurrent Neural Networks, Natural Language Processing-An Encoder–Decoder Network for Machine Translationencoder-decoder network for machine translation, An Encoder–Decoder Network for Machine Translation-An Encoder–Decoder Network for Machine Translation TensorFlow tutorials, Natural Language Processing, An Encoder–Decoder Network for Machine Translation word embeddings, Word Embeddings-Word Embeddings Nesterov Accelerated Gradient (NAG), Nesterov Accelerated Gradient-Nesterov Accelerated Gradient Nesterov momentum optimization, Nesterov Accelerated Gradient-Nesterov Accelerated Gradient network topology, Fine-Tuning Neural Network Hyperparameters neural network hyperparameters, Fine-Tuning Neural Network Hyperparameters-Activation Functionsactivation functions, Activation Functions neurons per hidden layer, Number of Neurons per Hidden Layer number of hidden layers, Number of Hidden Layers-Number of Hidden Layers neural network policies, Neural Network Policies-Neural Network Policies neuronsbiological, From Biological to Artificial Neurons-Biological Neurons logical computations with, Logical Computations with Neurons neuron_layer(), Construction Phase next_batch(), Execution Phase No Free Lunch theorem, Testing and Validating node edges, Visualizing the Graph and Training Curves Using TensorBoard nonlinear dimensionality reduction (NLDR), LLE(see also Kernel PCA; LLE (Locally Linear Embedding)) nonlinear SVM classification, Nonlinear SVM Classification-Computational Complexitycomputational complexity, Computational Complexity Gaussian RBF kernel, Gaussian RBF Kernel-Gaussian RBF Kernel with polynomial features, Nonlinear SVM Classification-Polynomial Kernel polynomial kernel, Polynomial 
Kernel-Polynomial Kernel similarity features, adding, Adding Similarity Features-Adding Similarity Features nonparametric models, Regularization Hyperparameters nonresponse bias, Nonrepresentative Training Data nonsaturating activation functions, Nonsaturating Activation Functions-Nonsaturating Activation Functions normal distribution (see Gaussian distribution) Normal Equation, The Normal Equation-Computational Complexity normalization, Feature Scaling normalized exponential, Softmax Regression norms, Select a Performance Measure notations, Select a Performance Measure-Select a Performance Measure NP-Complete problems, The CART Training Algorithm null hypothesis, Regularization Hyperparameters numerical differentiation, Numerical Differentiation NumPy, Create the Workspace NumPy arrays, Handling Text and Categorical Attributes NVidia Compute Capability, Installation nvidia-smi, Managing the GPU RAM n_components, Choosing the Right Number of Dimensions O observation space, Neural Network Policies off-policy algorithm, Temporal Difference Learning and Q-Learning offline learning, Batch learning one-hot encoding, Handling Text and Categorical Attributes one-versus-all (OvA) strategy, Multiclass Classification, Softmax Regression, Exercises one-versus-one (OvO) strategy, Multiclass Classification online learning, Online learning-Online learning online SVMs, Online SVMs-Online SVMs OpenAI Gym, Introduction to OpenAI Gym-Introduction to OpenAI Gym operation_timeout_in_ms, In-Graph Versus Between-Graph Replication Optical Character Recognition (OCR), The Machine Learning Landscape optimal state value, Markov Decision Processes optimizers, Faster Optimizers-Learning Rate SchedulingAdaGrad, AdaGrad-AdaGrad Adam optimization, Faster Optimizers, Adam Optimization-Adam Optimization Gradient Descent (see Gradient Descent optimizer) learning rate scheduling, Learning Rate Scheduling-Learning Rate Scheduling Momentum optimization, Momentum optimization-Momentum optimization Nesterov Accelerated Gradient (NAG), Nesterov Accelerated Gradient-Nesterov Accelerated Gradient RMSProp, RMSProp out-of-bag evaluation, Out-of-Bag Evaluation-Out-of-Bag Evaluation out-of-core learning, Online learning out-of-memory (OOM) errors, Static Unrolling Through Time out-of-sample error, Testing and Validating OutOfRangeError, Reading the training data directly from the graph, Multithreaded readers using a Coordinator and a QueueRunner output gate, LSTM Cell output layer, Multi-Layer Perceptron and Backpropagation OutputProjectionWrapper, Training to Predict Time Series-Training to Predict Time Series output_put_keep_prob, Applying Dropout overcomplete autoencoder, Unsupervised Pretraining Using Stacked Autoencoders overfitting, Overfitting the Training Data-Overfitting the Training Data, Create a Test Set, Soft Margin Classification, Gaussian RBF Kernel, Regularization Hyperparameters, Regression, Number of Neurons per Hidden Layeravoiding through regularization, Avoiding Overfitting Through Regularization-Data Augmentation P p-value, Regularization Hyperparameters PaddingFIFOQueue, PaddingFifoQueue Pandas, Create the Workspace, Download the Datascatter_matrix, Looking for Correlations-Looking for Correlations parallel distributed computing, Distributing TensorFlow Across Devices and Servers-Exercisesdata parallelism, Data Parallelism-TensorFlow implementation in-graph versus between-graph replication, In-Graph Versus Between-Graph Replication-Model Parallelism model parallelism, Model Parallelism-Model Parallelism multiple 
devices across multiple servers, Multiple Devices Across Multiple Servers-Other convenience functionsasynchronous communication using queues, Asynchronous Communication Using TensorFlow Queues-PaddingFifoQueue loading training data, Loading Data Directly from the Graph-Other convenience functions master and worker services, The Master and Worker Services opening a session, Opening a Session pinning operations across tasks, Pinning Operations Across Tasks sharding variables, Sharding Variables Across Multiple Parameter Servers sharing state across sessions, Sharing State Across Sessions Using Resource Containers-Sharing State Across Sessions Using Resource Containers multiple devices on a single machine, Multiple Devices on a Single Machine-Control Dependenciescontrol dependencies, Control Dependencies installation, Installation-Installation managing the GPU RAM, Managing the GPU RAM-Managing the GPU RAM parallel execution, Parallel Execution-Parallel Execution placing operations on devices, Placing Operations on Devices-Soft placement one neural network per device, One Neural Network per Device-One Neural Network per Device parameter efficiency, Number of Hidden Layers parameter matrix, Softmax Regression parameter server (ps), Multiple Devices Across Multiple Servers parameter space, Gradient Descent parameter vector, Linear Regression, Gradient Descent, Training and Cost Function, Softmax Regression parametric models, Regularization Hyperparameters partial derivative, Batch Gradient Descent partial_fit(), Incremental PCA Pearson's r, Looking for Correlations peephole connections, Peephole Connections penalties (see rewards, in RL) percentiles, Take a Quick Look at the Data Structure Perceptron convergence theorem, The Perceptron Perceptrons, The Perceptron-Multi-Layer Perceptron and Backpropagationversus Logistic Regression, The Perceptron training, The Perceptron-The Perceptron performance measures, Select a Performance Measure-Select a Performance Measureconfusion matrix, Confusion Matrix-Confusion Matrix cross-validation, Measuring Accuracy Using Cross-Validation-Measuring Accuracy Using Cross-Validation precision and recall, Precision and Recall-Precision/Recall Tradeoff ROC (receiver operating characteristic) curve, The ROC Curve-The ROC Curve performance scheduling, Learning Rate Scheduling permutation(), Create a Test Set PG algorithms, Policy Gradients photo-hosting services, Semisupervised learning pinning operations, Pinning Operations Across Tasks pip, Create the Workspace Pipeline constructor, Transformation Pipelines-Select and Train a Model pipelines, Frame the Problem placeholder nodes, Feeding Data to the Training Algorithm placers (see simple placer; dynamic placer) policy, Policy Search policy gradients, Policy Search (see PG algorithms) policy space, Policy Search polynomial features, adding, Nonlinear SVM Classification-Polynomial Kernel polynomial kernel, Polynomial Kernel-Polynomial Kernel, Kernelized SVM Polynomial Regression, Training Models, Polynomial Regression-Polynomial Regressionlearning curves in, Learning Curves-Learning Curves pooling kernel, Pooling Layer pooling layer, Pooling Layer-Pooling Layer power scheduling, Learning Rate Scheduling precision, Confusion Matrix precision and recall, Precision and Recall-Precision/Recall TradeoffF-1 score, Precision and Recall-Precision and Recall precision/recall (PR) curve, The ROC Curve precision/recall tradeoff, Precision/Recall Tradeoff-Precision/Recall Tradeoff predetermined piecewise constant learning rate, 
Learning Rate Scheduling predict(), Data Cleaning predicted class, Confusion Matrix predictions, Confusion Matrix-Confusion Matrix, Decision Function and Predictions-Decision Function and Predictions, Making Predictions-Estimating Class Probabilities predictors, Supervised learning, Data Cleaning preloading training data, Preload the data into a variable PReLU (parametric leaky ReLU), Nonsaturating Activation Functions preprocessed attributes, Take a Quick Look at the Data Structure pretrained layers reuse, Reusing Pretrained Layers-Pretraining on an Auxiliary Taskauxiliary task, Pretraining on an Auxiliary Task-Pretraining on an Auxiliary Task caching frozen layers, Caching the Frozen Layers freezing lower layers, Freezing the Lower Layers model zoos, Model Zoos other frameworks, Reusing Models from Other Frameworks TensorFlow model, Reusing a TensorFlow Model-Reusing a TensorFlow Model unsupervised pretraining, Unsupervised Pretraining-Unsupervised Pretraining upper layers, Tweaking, Dropping, or Replacing the Upper Layers Pretty Tensor, Up and Running with TensorFlow primal problem, The Dual Problem principal component, Principal Components Principal Component Analysis (PCA), PCA-Randomized PCAexplained variance ratios, Explained Variance Ratio finding principal components, Principal Components-Principal Components for compression, PCA for Compression-Incremental PCA Incremental PCA, Incremental PCA-Randomized PCA Kernel PCA (kPCA), Kernel PCA-Selecting a Kernel and Tuning Hyperparameters projecting down to d dimensions, Projecting Down to d Dimensions Randomized PCA, Randomized PCA Scikit Learn for, Using Scikit-Learn variance, preserving, Preserving the Variance-Preserving the Variance probabilistic autoencoders, Variational Autoencoders probabilities, estimating, Estimating Probabilities-Estimating Probabilities, Estimating Class Probabilities producer functions, Other convenience functions projection, Projection-Projection propositional logic, From Biological to Artificial Neurons pruning, Regularization Hyperparameters, Symbolic Differentiation Pythonisolated environment in, Create the Workspace-Create the Workspace notebooks in, Create the Workspace-Download the Data pickle, Better Evaluation Using Cross-Validation pip, Create the Workspace Q Q-Learning algorithm, Temporal Difference Learning and Q-Learning-Learning to Play Ms.


pages: 245 words: 12,162

In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation by William J. Cook

complexity theory, computer age, Computer Numeric Control, four colour theorem, index card, John von Neumann, linear programming, NP-complete, P = NP, p-value, RAND corporation, Richard Feynman, traveling salesman, Turing machine

To compute this value, we find the minimum of the four sums

trip({2, 3, 4, 5}, 2) + cost(2,6)
trip({2, 3, 4, 5}, 3) + cost(3,6)
trip({2, 3, 4, 5}, 4) + cost(4,6)
trip({2, 3, 4, 5}, 5) + cost(5,6)

corresponding to the possible choices for the next-to-last city in the subpath from 1 to 6, that is, we optimally travel to the next-to-last city then travel over to city 6. This construction of a five-city trip value from several four-city values is the heart of the Held-Karp method. The algorithm proceeds as follows. We first compute all one-city values: these are easy, for example, trip({2}, 2) is just cost(1, 2). Next, we use the one-city values to compute all two-city values. Then we use the two-city values to compute all three-city values, and on up the line. When we finally get to the (n − 1)-city values, we can read off the cost of an optimal tour: it is the minimum of the sums

trip({2, 3, ..., n}, 2) + cost(2,1)
trip({2, 3, ..., n}, 3) + cost(3,1)
···
trip({2, 3, ..., n}, n) + cost(n,1)

where the cost term accounts for the return trip back to city 1.
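The recurrence described here translates almost line for line into code. Below is a compact, purely illustrative Python version (not the book's implementation) that builds the trip(S, k) values by increasing subset size and reads off the optimal tour cost at the end; cities are numbered from 0 rather than 1, and a small symmetric cost matrix is used as an example.

from itertools import combinations

def held_karp(cost):
    """Held-Karp dynamic program; cost[i][j] is the travel cost between cities i and j.
    Returns the length of an optimal tour that starts and ends at city 0."""
    n = len(cost)
    # trip[(S, k)] = cheapest path that starts at city 0, visits every city in
    # frozenset S exactly once, and ends at city k (k must belong to S).
    trip = {}
    for k in range(1, n):                      # one-city values: trip({k}, k) = cost(0, k)
        trip[(frozenset([k]), k)] = cost[0][k]
    for size in range(2, n):                   # build size-city values from (size-1)-city values
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for k in S:
                trip[(S, k)] = min(trip[(S - {k}, m)] + cost[m][k] for m in S if m != k)
    full = frozenset(range(1, n))              # read off the optimal tour cost
    return min(trip[(full, k)] + cost[k][0] for k in range(1, n))

# Example on a small symmetric instance.
cost = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(cost))   # 23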


pages: 242 words: 68,019

Why Information Grows: The Evolution of Order, From Atoms to Economies by Cesar Hidalgo

"Robert Solow", Ada Lovelace, Albert Einstein, Arthur Eddington, assortative mating, business cycle, Claude Shannon: information theory, David Ricardo: comparative advantage, Douglas Hofstadter, Everything should be made as simple as possible, frictionless, frictionless market, George Akerlof, Gödel, Escher, Bach, income inequality, income per capita, industrial cluster, information asymmetry, invention of the telegraph, invisible hand, Isaac Newton, James Watt: steam engine, Jane Jacobs, job satisfaction, John von Neumann, Joi Ito, New Economic Geography, Norbert Wiener, p-value, Paul Samuelson, phenotype, price mechanism, Richard Florida, Ronald Coase, Rubik’s Cube, Silicon Valley, Simon Kuznets, Skype, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, The Market for Lemons, The Nature of the Firm, The Wealth of Nations by Adam Smith, total factor productivity, transaction costs, working-age population

Here we consider a country to be an exporter of a product if its per capita exports of that product are at least 25 percent of the world’s average per capita exports of that product. This allows us to control for the size of the product’s global market and the size of the country’s population. 5. In the case of Honduras and Argentina the probability of the observed overlap (what is known academically as its p-value) is 4.4 × 10⁻⁴. The same probability is 2 × 10⁻² for the overlap observed between Honduras and the Netherlands and 4 × 10⁻³ for the overlap observed between Argentina and the Netherlands. 6. César A. Hidalgo and Ricardo Hausmann, “The Building Blocks of Economic Complexity,” Proceedings of the National Academy of Sciences 106, no. 26 (2009): 10570–10575. 7. The idea of related varieties is popular in the literature of regional economic development and strategic management.
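The note does not spell out the null model behind those overlap probabilities, but one common way to attach a p-value to "at least this much overlap between two export baskets" is a hypergeometric tail; the sketch below uses made-up counts purely for illustration and is not the book's actual calculation.

# Hypothetical illustration: out of N candidate products, country A exports K,
# country B exports n, and they overlap in at least k products.
from scipy.stats import hypergeom

N, K, n, k = 800, 60, 50, 12
p_value = hypergeom.sf(k - 1, N, K, n)   # P(overlap >= k) under independent random baskets
print(f"{p_value:.2e}")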


pages: 366 words: 76,476

Dataclysm: Who We Are (When We Think No One's Looking) by Christian Rudder

4chan, Affordable Care Act / Obamacare, bitcoin, cloud computing, correlation does not imply causation, crowdsourcing, cuban missile crisis, Donald Trump, Edward Snowden, en.wikipedia.org, Frank Gehry, Howard Zinn, Jaron Lanier, John Markoff, John Snow's cholera map, lifelogging, Mahatma Gandhi, Mikhail Gorbachev, Nate Silver, Nelson Mandela, new economy, obamacare, Occupy movement, p-value, pre–internet, race to the bottom, selection bias, Snapchat, social graph, Solar eclipse in 1919, Steve Jobs, the scientific method

For issues that have to do with sex only indirectly, such as ratings from one race to another, gays and straights also show similar patterns. Male-female relationships allowed for the least repetition and widest resonance per unit of space, so I made the choice to focus on them. My second decision, to leave out statistical esoterica, was made with much less regret. I don’t mention confidence intervals, sample sizes, p values, and similar devices in Dataclysm because the book is above all a popularization of data and data science. Mathematical wonkiness wasn’t what I wanted to get across. But like the spars and crossbeams of a house, the rigor is no less present for being unseen. Many of the findings in the book are drawn from academic, peer-reviewed sources. I applied the same standards to the research I did myself, including a version of peer-review: much of the OkCupid analysis was performed first by me and then verified independently by an employee of the company.


Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurelien Geron

Amazon Mechanical Turk, Bayesian statistics, centre right, combinatorial explosion, constrained optimization, correlation coefficient, crowdsourcing, en.wikipedia.org, iterative process, Netflix Prize, NP-complete, optical character recognition, P = NP, p-value, pattern recognition, performance metric, recommendation engine, self-driving car, SpamAssassin, speech recognition, statistical model

Note Other algorithms work by first training the Decision Tree without restrictions, then pruning (deleting) unnecessary nodes. A node whose children are all leaf nodes is considered unnecessary if the purity improvement it provides is not statistically significant. Standard statistical tests, such as the χ2 test, are used to estimate the probability that the improvement is purely the result of chance (which is called the null hypothesis). If this probability, called the p-value, is higher than a given threshold (typically 5%, controlled by a hyperparameter), then the node is considered unnecessary and its children are deleted. The pruning continues until all unnecessary nodes have been pruned. Figure 6-3 shows two Decision Trees trained on the moons dataset (introduced in Chapter 5). On the left, the Decision Tree is trained with the default hyperparameters (i.e., no restrictions), and on the right the Decision Tree is trained with min_samples_leaf=4.


pages: 263 words: 75,455

Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors by Wesley R. Gray, Tobias E. Carlisle

activist fund / activist shareholder / activist investor, Albert Einstein, Andrei Shleifer, asset allocation, Atul Gawande, backtesting, beat the dealer, Black Swan, business cycle, butter production in bangladesh, buy and hold, capital asset pricing model, Checklist Manifesto, cognitive bias, compound rate of return, corporate governance, correlation coefficient, credit crunch, Daniel Kahneman / Amos Tversky, discounted cash flows, Edward Thorp, Eugene Fama: efficient market hypothesis, forensic accounting, hindsight bias, intangible asset, Louis Bachelier, p-value, passive investing, performance metric, quantitative hedge fund, random walk, Richard Thaler, risk-adjusted returns, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical model, survivorship bias, systematic trading, The Myth of the Rational Market, time value of money, transaction costs

We control for general market risk using the capital asset pricing model;[2] we adjust for market, size, and value exposures with the Fama and French three-factor model;[3] we account for momentum using the four-factor model;[4] and, finally, we account for liquidity by adding the Lubos Pastor and Robert Stambaugh market-wide liquidity factor to create the comprehensive five-factor model.[5] Figures 12.10(a) and (b) confirm that the Quantitative Value strategy consistently generates alpha on rolling 5- and 10-year bases, regardless of the model we choose to inspect. On a rolling 5-year basis there are only a few short instances where the strategy's performance does not add value after controlling for risk. The 10-year rolling chart tells the story vividly: over the long term, Quantitative Value has consistently created value for investors. Table 12.5 shows the full sample coefficient estimates for the four asset-pricing models. We set out P-values below each estimate; they represent the probability of seeing an estimate that large if the true coefficient were zero. MKT-RF represents the excess return on the market-weighted returns of all New York Stock Exchange (NYSE)/American Stock Exchange (AMEX)/Nasdaq stocks. SMB is a long/short factor portfolio that captures exposures to small capitalization stocks. HML is a long/short factor portfolio that controls for exposure to high book value-to-market capitalization stocks.
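For readers who want to see how such coefficient estimates and P-values are produced, here is a hedged sketch using statsmodels on simulated data (the factor names mirror those in the text; the numbers are randomly generated, not the authors' data): regress the strategy's excess returns on the factor returns and read off the intercept, which plays the role of alpha, together with its P-value.

# Illustrative four-factor regression on simulated data (not the book's data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_months = 240
factors = pd.DataFrame(rng.normal(0, 0.03, size=(n_months, 4)),
                       columns=["MKT_RF", "SMB", "HML", "UMD"])
# Simulated strategy excess returns: some factor exposure plus a small monthly alpha.
strategy = 0.003 + factors.values @ np.array([0.9, 0.2, 0.4, 0.0]) \
           + rng.normal(0, 0.02, n_months)

model = sm.OLS(strategy, sm.add_constant(factors)).fit()
print(model.params["const"], model.pvalues["const"])   # estimated alpha and its P-value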


pages: 741 words: 199,502

Human Diversity: The Biology of Gender, Race, and Class by Charles Murray

23andMe, affirmative action, Albert Einstein, Alfred Russel Wallace, Asperger Syndrome, assortative mating, basic income, bioinformatics, Cass Sunstein, correlation coefficient, Daniel Kahneman / Amos Tversky, double helix, Drosophila, epigenetics, equal pay for equal work, European colonialism, feminist movement, glass ceiling, Gunnar Myrdal, income inequality, Kenneth Arrow, labor-force participation, longitudinal study, meta analysis, meta-analysis, out of africa, p-value, phenotype, publication bias, quantitative hedge fund, randomized controlled trial, replication crisis, Richard Thaler, risk tolerance, school vouchers, Scientific racism, selective serotonin reuptake inhibitor (SSRI), Silicon Valley, social intelligence, statistical model, Steven Pinker, The Bell Curve by Richard Herrnstein and Charles Murray, the scientific method, The Wealth of Nations by Adam Smith, theory of mind, Thomas Kuhn: the structure of scientific revolutions, twin studies, universal basic income, working-age population

A more precise description is given in the note.[79] The Johnson study presented the results for all 42 tests, but calculated effect sizes only for those that met a stricter than normal standard of statistical significance (p < .01 instead of p < .05) because of the large number of tests involved. Results for the residual effects on 21 of the subtests that met that statistical standard are shown in the following table. I omit the p values. All but two of the p values for the residual effects were at the .001 level.[80] The effect sizes stripped of g are ordered from the largest for females (positive) to the largest for males (negative).

COGNITIVE SEX DIFFERENCES IN THE MISTRA SAMPLE
(assessment activity: overall effect size / effect size stripped of g)

Coding (ID of symbol-number pairings): +0.56 / +0.83
Perceptual speed (evaluation of symbol pairs): +0.37 / +0.68
Spelling (multiple choice): ns / +0.66
Word fluency (production of anagrams): ns / +0.64
ID of familial relationships within a family tree: ns / +0.63
Rote memorization of meaningful pairings: +0.33 / +0.60
Production of words beginning and ending with specified letters: ns / +0.57
Vocabulary (multiple choice): ns / +0.50
Rote memorization of meaningless pairings: ns / +0.42
Chronological sequencing of pictures: –0.28 / –0.30
Information (recall of factual knowledge): –0.29 / –0.39
Trace of a path through a grid of dots: –0.42 / –0.40
Matching of rotated alternatives to probe: ns / –0.45
Reproduction of 2-D designs of 3-D blocks: –0.34 / –0.48
Outline of cutting instructions to form the target figure: –0.39 / –0.48
Arithmetic (mental calculation of problems presented verbally): –0.36 / –0.53
ID of unfolded version of a folded probe: –0.44 / –0.59
ID of matched figures after rotation: –0.55 / –0.75
ID of parts missing in pictures of common objects: –0.60 / –0.81
ID of rotated versions of 2-D representation of 3-D objects: –0.92 / –1.04
ID of mechanical principles and tools: –1.18 / –1.43

Source: Adapted from Johnson and Bouchard (2007): Table 4.


pages: 301 words: 85,126

AIQ: How People and Machines Are Smarter Together by Nick Polson, James Scott

Air France Flight 447, Albert Einstein, Amazon Web Services, Atul Gawande, autonomous vehicles, availability heuristic, basic income, Bayesian statistics, business cycle, Cepheid variable, Checklist Manifesto, cloud computing, combinatorial explosion, computer age, computer vision, Daniel Kahneman / Amos Tversky, Donald Trump, Douglas Hofstadter, Edward Charles Pickering, Elon Musk, epigenetics, Flash crash, Grace Hopper, Gödel, Escher, Bach, Harvard Computers: women astronomers, index fund, Isaac Newton, John von Neumann, late fees, low earth orbit, Lyft, Magellanic Cloud, mass incarceration, Moneyball by Michael Lewis explains big data, Moravec's paradox, more computing power than Apollo, natural language processing, Netflix Prize, North Sea oil, p-value, pattern recognition, Pierre-Simon Laplace, ransomware, recommendation engine, Ronald Reagan, self-driving car, sentiment analysis, side project, Silicon Valley, Skype, smart cities, speech recognition, statistical model, survivorship bias, the scientific method, Thomas Bayes, Uber for X, uber lyft, universal basic income, Watson beat the top human players on Jeopardy!, young professional

It merely knows that the question is about co-location statistics involving other words that we, as humans, can subsequently interpret as being about computers. *   We also simulated the 24 games prior to game 1 of the 2007 season, so that the 25-game average was well-defined at the beginning of the 176-game stretch in question. This implies that the rolling 25-game winning percentage starting from that first game in 2007 actually went back to mid-2005. †   If you’ve taken a statistics class, you may recognize this number as the p-value (p = 0.23) under the null hypothesis of no cheating. ‡   Two synonyms for anomalies that you may have encountered are “signals in the noise” or “violations of the null hypothesis.” §   This inscription, Decus et Tutamen, remained on English coins into 2017, when it was sadly removed from the latest version of the £1 coin. *   The Trial owes its name to a room in Westminster Abbey, the Chamber of the Pyx.


pages: 271 words: 83,944

The Sellout: A Novel by Paul Beatty

affirmative action, cognitive dissonance, conceptual framework, desegregation, El Camino Real, haute couture, illegal immigration, Lao Tzu, late fees, mass incarceration, p-value, publish or perish, rolodex, Ronald Reagan, Rosa Parks, telemarketer, theory of mind, War on Poverty, white flight, yellow journalism

Back then he was an assistant professor in urban studies, at UC Brentwood, living in Larchmont with the rest of the L.A. intellectual class, and hanging out in Dickens doing field research for his first book, Blacktopolis: The Intransigence of African-American Urban Poverty and Baggy Clothes. “I think an examination of the confluence of independent variables on income could result in some interesting r coefficients. Frankly, I wouldn’t be surprised by p values in the .75 range.” Despite the smug attitude, Pops took a liking to Foy right away. Though Foy was born and raised in Michigan, it wasn’t often Dad found somebody in Dickens who knew the difference between a t-test and an analysis of variance. After debriefing over a box of donut holes, everyone—locals and Foy included—agreed to meet on a regular basis, and the Dum Dum Donut Intellectuals were born.


The Impact of Early Life Trauma on Health and Disease by Lanius, Ruth A.; Vermetten, Eric; Pain, Clare

conceptual framework, correlation coefficient, delayed gratification, epigenetics, false memory syndrome, impulse control, intermodal, longitudinal study, meta analysis, meta-analysis, Nelson Mandela, p-value, phenotype, randomized controlled trial, selective serotonin reuptake inhibitor (SSRI), social intelligence, Socratic dialogue, theory of mind, twin studies, yellow journalism

[Figure 8.1, panels (a)–(d): lifetime history of depression (%) in women and men, attempting suicide (%), psychiatric disorders, and abused alcohol / ever hallucinated (%), each plotted against ACE Score.] The relationship between ACE Score and self-acknowledged chronic depression is illustrated in Fig. 8.1a [5]. Should one doubt the reliability of self-acknowledged chronic depression, there is a similar but stronger relationship between ACE Score and later suicide attempts, as shown in the exponential progression of Fig. 8.1b [6]. The p value of all graphic depictions herein is 0.001 or lower. One continues to see a proportionate relationship between ACE Score and depression by analysis of prescription rates for antidepressant medications after a 10-year prospective follow-up, now approximately 50 to 60 years after the ACEs occurred (Fig. 8.1c) [7]. It would appear that depression, often unrecognized in medical practice, is in fact common and has deep roots, commonly going back to the developmental years of life.

• household dysfunction
  – mother treated violently (13%)
  – household member was alcoholic or drug user (27%)
  – household member was imprisoned (6%)
  – household member was chronically depressed, suicidal, mentally ill or in psychiatric hospital (17%)
  – not raised by both biological parents (23%)
• neglect
  – physical (10%)
  – emotional (15%).

[Path diagram for Fig. 11.1: density of abuse at ages 3–5, 6–8, 9–10, 11–13 and 14–16 years linked to hippocampal volume, area of the rostral body of the corpus callosum, and frontal cortex gray matter volume, with standardized path coefficients ranging from –0.69 (p < 10⁻⁷) to 0.58 (p < 0.0001).] Fig. 11.1. Path analysis indicating relationships between density of abuse during different stages of development and measures of brain size derived from structural equation modeling (Amos Graphics). Path analysis examined two main components. The first was that child sexual abuse (CSA) (or absence of CSA) during one period would predict CSA (or absence of CSA) during the subsequent period. The second component examined the association between density of CSA during each stage and all morphometric measures. Numerical values represent standardized beta weights and their associated p values. The dotted lines were evaluated in the model but were not significantly predictive of any relationship between the variables. (From Andersen et al. [23] with permission.)

These individuals had significantly reduced occipital GMV. However, it appeared that loss of GMV was a consequence of exposure to childhood abuse and not a result of intimate-partner violence or development of PTSD [44].

Sensitive periods

Based on differential rates of maturation, we have hypothesized that specific brain regions should have differing periods of sensitivity to the effects of abuse [23].


Monte Carlo Simulation and Finance by Don L. McLeish

Black-Scholes formula, Brownian motion, capital asset pricing model, compound rate of return, discrete time, distributed generation, finite state, frictionless, frictionless market, implied volatility, incomplete markets, invention of the printing press, martingale, p-value, random walk, Sharpe ratio, short selling, stochastic process, stochastic volatility, survivorship bias, the market place, transaction costs, value at risk, Wiener process, zero-coupon bond, zero-sum game

Evaluate the Chi-squared statistic χ²_obs for a test that these points are independent uniform on the cube, where we divide the cube into 8 subcubes, each having sides of length 1/2. Carry out the test by finding P[χ² > χ²_obs], where χ² is a random chi-squared variate with the appropriate number of degrees of freedom. This quantity P[χ² > χ²_obs] is usually referred to as the “significance probability” or “p-value” for the test. If we suspected too much uniformity to be consistent with the assumption of independent uniforms, we might use the other tail of the test, i.e. evaluate P[χ² < χ²_obs]. Do so and comment on your results.
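A minimal sketch of that test in Python, assuming simulated stand-in points (the exercise's own points would be substituted): classify each point into one of the 8 subcubes by comparing each coordinate with 1/2, then compare the octant counts with the uniform expectation and evaluate both tails.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
pts = rng.random((25, 3))                        # stand-in sample of points in the unit cube

octant = (pts > 0.5).astype(int) @ np.array([1, 2, 4])   # subcube index 0..7 for each point
observed = np.bincount(octant, minlength=8)
expected = np.full(8, len(pts) / 8)

chi2_obs = ((observed - expected) ** 2 / expected).sum()
print(chi2_obs)
print(chi2.sf(chi2_obs, df=7))    # P[chi-squared > chi2_obs], the usual p-value
print(chi2.cdf(chi2_obs, df=7))   # P[chi-squared < chi2_obs], the "too uniform" tail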


pages: 340 words: 94,464

Randomistas: How Radical Researchers Changed Our World by Andrew Leigh

Albert Einstein, Amazon Mechanical Turk, Anton Chekhov, Atul Gawande, basic income, Black Swan, correlation does not imply causation, crowdsourcing, David Brooks, Donald Trump, ending welfare as we know it, Estimating the Reproducibility of Psychological Science, experimental economics, Flynn Effect, germ theory of disease, Ignaz Semmelweis: hand washing, Indoor air pollution, Isaac Newton, Kickstarter, longitudinal study, loss aversion, Lyft, Marshall McLuhan, meta analysis, meta-analysis, microcredit, Netflix Prize, nudge unit, offshore financial centre, p-value, placebo effect, price mechanism, publication bias, RAND corporation, randomized controlled trial, recommendation engine, Richard Feynman, ride hailing / ride sharing, Robert Metcalfe, Ronald Reagan, statistical model, Steven Pinker, uber lyft, universal basic income, War on Poverty

A meta-analytic review of choice overload’, Journal of Consumer Research, vol. 37, no. 3, 2010, pp. 409–25. 45Alan Gerber & Neil Malhotra, ‘Publication bias in empirical sociological research’, Sociological Methods & Research, vol. 37, no. 1, 2008, pp. 3–30; Alan Gerber & Neil Malhotra, ‘Do statistical reporting standards affect what is published? Publication bias in two leading political science journals’, Quarterly Journal of Political Science. vol. 3, no. 3, 2008, pp. 313–26; E.J. Masicampo & Daniel R. Lalande, ‘A peculiar prevalence of p values just below .05’, Quarterly Journal of Experimental Psychology, vol. 65, no. 11, 2012, pp. 2271–9; Kewei Hou, Chen Xue & Lu Zhang, ‘Replicating anomalies’, NBER Working Paper 23394, Cambridge, MA: National Bureau of Economic Research, 2017. 46Alexander A. Aarts, Joanna E. Anderson, Christopher J. Anderson, et al., ‘Estimating the reproducibility of psychological science’, Science, vol. 349, no. 6251, 2015. 47This represented two out of eighteen papers: John P.A.


pages: 305 words: 89,103

Scarcity: The True Cost of Not Having Enough by Sendhil Mullainathan

American Society of Civil Engineers: Report Card, Andrei Shleifer, Cass Sunstein, clean water, computer vision, delayed gratification, double entry bookkeeping, Exxon Valdez, fault tolerance, happiness index / gross national happiness, impulse control, indoor plumbing, inventory management, knowledge worker, late fees, linear programming, mental accounting, microcredit, p-value, payday loans, purchasing power parity, randomized controlled trial, Report Card for America’s Infrastructure, Richard Thaler, Saturday Night Live, Walter Mischel, Yogi Berra

R. Flynn, “Massive IQ Gains in 14 Nations: What IQ Tests Really Measure,” Psychological Bulletin 101 (1987): 171–91. A forceful case for environmental and cultural influences on IQ is Richard Nisbett’s Intelligence and How to Get It: Why Schools and Cultures Count (New York: W. W. Norton, 2010). people in a New Jersey mall: These experiments are summarized along with details on sample sizes and p-values in Anandi Mani, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao, “Poverty Impedes Cognitive Function” (working paper, 2012). unable to come up with $2,000 in thirty days: A. Lusardi, D. J. Schneider, and P. Tufano, Financially Fragile Households: Evidence and Implications (National Bureau of Economic Research, Working Paper No. 17072, May 2011). the effects were equally big: For those interested in the magnitude, the effect size ranged between Cohen’s d of 0.88 and 0.94.


pages: 1,088 words: 228,743

Expected Returns: An Investor's Guide to Harvesting Market Rewards by Antti Ilmanen

Andrei Shleifer, asset allocation, asset-backed security, availability heuristic, backtesting, balance sheet recession, bank run, banking crisis, barriers to entry, Bernie Madoff, Black Swan, Bretton Woods, business cycle, buy and hold, buy low sell high, capital asset pricing model, capital controls, Carmen Reinhart, central bank independence, collateralized debt obligation, commoditize, commodity trading advisor, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, debt deflation, deglobalization, delta neutral, demand response, discounted cash flows, disintermediation, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, fiat currency, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, framing effect, frictionless, frictionless market, G4S, George Akerlof, global reserve currency, Google Earth, high net worth, hindsight bias, Hyman Minsky, implied volatility, income inequality, incomplete markets, index fund, inflation targeting, information asymmetry, interest rate swap, invisible hand, Kenneth Rogoff, laissez-faire capitalism, law of one price, London Interbank Offered Rate, Long Term Capital Management, loss aversion, margin call, market bubble, market clearing, market friction, market fundamentalism, market microstructure, mental accounting, merger arbitrage, mittelstand, moral hazard, Myron Scholes, negative equity, New Journalism, oil shock, p-value, passive investing, Paul Samuelson, performance metric, Ponzi scheme, prediction markets, price anchoring, price stability, principal–agent problem, private sector deleveraging, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Robert Shiller, Robert Shiller, savings glut, selection bias, Sharpe ratio, short selling, sovereign wealth fund, statistical arbitrage, statistical model, stochastic volatility, stocks for the long run, survivorship bias, systematic trading, The Great Moderation, The Myth of the Rational Market, too big to fail, transaction costs, tulip mania, value at risk, volatility arbitrage, volatility smile, working-age population, Y2K, yield curve, zero-coupon bond, zero-sum game

Given that long-term Treasury yields are below 4%, few observers would extrapolate the realized 4.7% average bond returns into the future. Similar considerations suggest that we might reduce the CPI and D/P components for equities. The fourth column shows that using 2.3% CPI (consensus forecast for long-term inflation) and 2.0% D/P, a forward-looking measure predicts only 5.6% nominal equity returns for the long term. Admittedly the D/P value could be raised if we use a broader carry measure including net share buybacks, so I add 0.75% to the estimate (and call it “D/P+”). Even more bullish return forecasts than 6.4% would have to rely on growth optimism (beyond the historical 1.3% rate of real earnings-per-share growth) or expected further P/E expansion in the coming decades (my analysis assumes none). More generally, these building blocks give us a useful framework for debating the key components of future equity returns.
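The arithmetic is just the sum of the quoted building blocks, which can be checked directly (1.3% real earnings-per-share growth and no P/E change are the assumptions stated above):

# Building-block estimate of long-run nominal equity returns (percent per year),
# using the figures quoted in the text.
inflation = 2.3   # consensus long-term CPI forecast
carry     = 2.0   # dividend yield D/P
growth    = 1.3   # historical real earnings-per-share growth
pe_change = 0.0   # no assumed P/E expansion
buybacks  = 0.75  # add-on for net share buybacks ("D/P+")

print(round(inflation + carry + growth + pe_change, 2))             # 5.6
print(round(inflation + carry + growth + pe_change + buybacks, 2))  # 6.35, quoted as ~6.4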

Should long and short portfolios have simply equal nominal amounts, equal return volatilities, or equal betas? One crucial question is whether persistent industry sector biases should be allowed or whether sector neutrality should be pursued. Sector neutrality. Practitioner studies highlight the empirical benefits of sector-neutral approaches. Yet, academic studies and many popular investment products (FF and LSV, MSCI-Barra and S&P value/growth indices, and the RAFI fundamental index) do nothing to impose sector neutrality. Without any such adjustments, persistent industry concentrations are possible in the long–short portfolio. For example, in early 2008, the long (value) portfolio heavily overweighted finance stocks while the short (growth) portfolio overweighted energy stocks. Such sector biases may or may not boost average returns but they pretty clearly impair value portfolio diversification and thus raise its volatility.


pages: 354 words: 26,550

High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems by Irene Aldridge

algorithmic trading, asset allocation, asset-backed security, automated trading system, backtesting, Black Swan, Brownian motion, business cycle, business process, buy and hold, capital asset pricing model, centralized clearinghouse, collapse of Lehman Brothers, collateralized debt obligation, collective bargaining, computerized trading, diversification, equity premium, fault tolerance, financial intermediation, fixed income, high net worth, implied volatility, index arbitrage, information asymmetry, interest rate swap, inventory management, law of one price, Long Term Capital Management, Louis Bachelier, margin call, market friction, market microstructure, martingale, Myron Scholes, New Journalism, p-value, paper trading, performance metric, profit motive, purchasing power parity, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, Small Order Execution System, statistical arbitrage, statistical model, stochastic process, stochastic volatility, systematic trading, trade route, transaction costs, value at risk, yield curve, zero-sum game

Table 4.5 reports summary statistics for EUR/USD order flows observed by Citibank and sampled at the weekly frequency between January 1993 and July 1999: A) statistics for weekly EUR/USD order flow aggregated across Citibank’s corporate, trading, and investing customers; and B) order flows from end-user segments cumulated over a week. The last four columns on the right report autocorrelations ρ_i at lag i and p-values for the null that ρ_i = 0. The summary statistics on the order flow data are from Evans and Lyons (2007), who define order flow as the total value of EUR/USD purchases (in USD millions) initiated against Citibank’s quotes.

TABLE 4.4 Daily Dollar Volume in Most Active Foreign Exchange Products on CME Electronic Trading (Globex) on 6/12/2009, Computed as Average Price Times Total Contract Volume Reported by CME

Currency (futures daily volume / mini-futures daily volume, in USD thousands):
Australian Dollar: 5,389.8 / N/A
British Pound: 17,575.6 / N/A
Canadian Dollar: 6,988.1 / N/A
Euro: 32,037.9 / 525.3
Japanese Yen: 8,371.5 / 396.2
New Zealand Dollar: 426.5 / N/A
Swiss Franc: 4,180.6 / N/A

Table 4.5 values:
3.722 −3.715; 549.302 −529.055; 634.918 −692.419; 1710.163 −2024.28; 972.106 −629.139; 535.32 −874.15; 1881.284 −718.895
−0.043 1.234; −16.774 108.685; −59.784 196.089; −4.119 346.296; 11.187 183.36; 19.442 146.627; 15.85 273.406 (Maximum / Minimum)
−0.696 9.246; −0.005 3.908; 0.026 8.337; 0.392 5.86; −1.079 11.226; 0.931 9.253; 0.105 3.204 (Skewness or Kurtosis*)

Autocorrelations at lag i (p-values in parentheses):
Lag 1: −0.037 (0.434), 0.072 (0.223), −0.021 (0.735), −0.098 (0.072), 0.096 (0.085), 0.061 (0.182), −0.061 (0.287)
Lag 2: −0.04 (0.608), 0.089 (0.124), 0.024 (0.602), 0.024 (0.660), −0.024 (0.568), 0.107 (0.041), 0.027 (0.603)
Lag 4: 0.028 (0.569), −0.038 (0.513), 0.126 (0.101), 0.015 (0.747), −0.03 (0.536), −0.03 (0.550), 0.025 (0.643)
Lag 8: −0.028 (0.562), 0.103 (0.091), −0.009 (0.897), 0.083 (0.140), −0.016 (0.690), −0.014 (0.825), −0.015 (0.789)

*Skewness of order flows measures whether the flows skew toward either the positive or the negative side of their mean, and kurtosis indicates the likelihood of extremely large or small order flows.
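The p-values attached to autocorrelations in a table like this are commonly computed from the large-sample result that, under the null of zero autocorrelation, a sample autocorrelation has standard error roughly 1/sqrt(T); here is a hedged sketch of that calculation on simulated stand-in data (not the Citibank flows).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
flow = rng.normal(0, 500, 340)   # stand-in for roughly 340 weekly order-flow observations

def acf_pvalues(x, lags):
    x = x - x.mean()
    T = len(x)
    out = {}
    for lag in lags:
        r = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)   # sample autocorrelation at this lag
        p = 2 * norm.sf(abs(r) * np.sqrt(T))           # two-sided p-value for H0: rho_lag = 0
        out[lag] = (round(r, 3), round(p, 3))
    return out

print(acf_pvalues(flow, lags=[1, 2, 4, 8]))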


pages: 628 words: 107,927

Node.js in Action by Mike Cantelon, Marc Harter, Tj Holowaychuk, Nathan Rajlich

Amazon Web Services, Chris Wanstrath, create, read, update, delete, Debian, en.wikipedia.org, Firefox, Google Chrome, MITM: man-in-the-middle, MVC pattern, node package manager, p-value, pull request, Ruby on Rails, web application, WebSocket

</p>

Jade also supports a non-JavaScript form of iteration: the each statement. each statements allow you to cycle through arrays and object properties with ease. The following is equivalent to the previous example, but using each instead:

each message in messages
  p= message

You can cycle through object properties using a slight variation, like this:

each value, key in post
  div
    strong #{key}
    p value

Conditionally rendering template code

Sometimes templates need to make decisions about how data is displayed depending on the value of the data. The next example illustrates a conditional in which, roughly half the time, the script tag is outputted as HTML:

- var n = Math.round(Math.random() * 1) + 1
- if (n == 1) {
  script alert('You win!');
- }

Conditionals can also be written in Jade using a cleaner, alternative form:

- var n = Math.round(Math.random() * 1) + 1
if n == 1
  script alert('You win!')


pages: 430 words: 107,765

The Quantum Magician by Derek Künsken

commoditize, epigenetics, industrial robot, iterative process, microbiome, orbital mechanics / astrodynamics, p-value, pattern recognition, Schrödinger's Cat, Turing test

That’s like a reporter gene in a virus, so we can know where it has penetrated.” “It has penetrated habitat and communications, but not fortifications,” the major said. “Yes, and its infection of habitat and comms is very selective. The distribution suggests to me that it has infected support systems.” “That’s not random,” Cassandra said. “No.” Cassandra had a brief urge to recalculate the p-value to verify the non-randomness, but Iekanjika wouldn’t care and Bel would already have calculated it. “The infection pattern doesn’t follow the systems architecture, but this pattern could have been made by selectively shielding critical systems prior to infection,” Bel said. “So the Puppets know something is up,” Cassandra said. “Definitely.” “Then the mission is off?” Saint Matthew said. “Turn the ship around.”


pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All by Robert Elliott Smith

Ada Lovelace, affirmative action, AI winter, Alfred Russel Wallace, Amazon Mechanical Turk, animal electricity, autonomous vehicles, Black Swan, British Empire, cellular automata, citizen journalism, Claude Shannon: information theory, combinatorial explosion, corporate personhood, correlation coefficient, crowdsourcing, Daniel Kahneman / Amos Tversky, desegregation, discovery of DNA, Douglas Hofstadter, Elon Musk, Fellow of the Royal Society, feminist movement, Filter Bubble, Flash crash, Gerolamo Cardano, gig economy, Gödel, Escher, Bach, invention of the wheel, invisible hand, Jacquard loom, Jacques de Vaucanson, John Harrison: Longitude, John von Neumann, Kenneth Arrow, low skilled workers, Mark Zuckerberg, mass immigration, meta analysis, meta-analysis, mutually assured destruction, natural language processing, new economy, On the Economy of Machinery and Manufactures, p-value, pattern recognition, Paul Samuelson, performance metric, Pierre-Simon Laplace, precariat, profit maximization, profit motive, Silicon Valley, social intelligence, statistical model, Stephen Hawking, stochastic process, telemarketer, The Bell Curve by Richard Herrnstein and Charles Murray, The Future of Employment, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, Thomas Bayes, Thomas Malthus, traveling salesman, Turing machine, Turing test, twin studies, Vilfredo Pareto, Von Neumann architecture, women in the workforce

That does not mean with certainty that the person doing the test is in fact pregnant, because there are unreliable elements in the test: it could be a false positive. If you have a look at the test instructions, it will report (in some form, probably a table) four IF/THEN rules, with uncertainty factors:

    If + then pregnant with P(pregnant|+)
    If – then pregnant with P(pregnant|−)
    If + then not pregnant with P(not pregnant|+)
    If – then not pregnant with P(not pregnant|−)

where the four P values are probabilities. This situation reflects what Lane and Maxfield call truth uncertainty, where there is a clear true or false outcome in the case of a well-understood question or a well-posed problem. In this case, it is entirely appropriate to use the statistic heuristic, and that is in fact precisely what the probabilities in the rules above reflect. In the formulation of those rules and probabilities, a large population of women were tested, and Cardano’s ratios were computed from the statistics derived for those women, with their pluses and minuses, pregnancies and not pregnancies.
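
As a purely illustrative aside (not from the book), here is how such a table of conditional probabilities combines with a base rate via Bayes' rule; the sensitivity, specificity, and base rate below are invented numbers:

    # Illustrative only: sensitivity, specificity, and base rate are assumptions.
    sensitivity = 0.99          # P(+ | pregnant)
    specificity = 0.95          # P(- | not pregnant)
    base_rate   = 0.20          # P(pregnant) in the tested population

    p_plus = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    p_pregnant_given_plus = sensitivity * base_rate / p_plus

    print(f"P(+)                = {p_plus:.3f}")
    print(f"P(pregnant | +)     = {p_pregnant_given_plus:.3f}")
    print(f"P(not pregnant | +) = {1 - p_pregnant_given_plus:.3f}")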


pages: 385 words: 118,901

Black Edge: Inside Information, Dirty Money, and the Quest to Bring Down the Most Wanted Man on Wall Street by Sheelah Kolhatkar

Bernie Madoff, Donald Trump, family office, fear of failure, financial deregulation, hiring and firing, income inequality, light touch regulation, locking in a profit, margin call, medical residency, mortgage debt, p-value, pets.com, Ponzi scheme, rent control, Ronald Reagan, short selling, Silicon Valley, Skype, The Predators' Ball

He was sure that some of his patients had benefited from the drug, and he hoped they’d be able to continue taking it. He told Martoma that in spite of the negative results, he was still hopeful that bapi might work, because he had observed some improvements in his own patients who were taking it. “I don’t know how you can say that when the statistical evidence shows otherwise,” Martoma said. He cited the exact p-values, a number that indicated whether a result was statistically significant or not, and a handful of other specific figures that had just been included in the presentation to the investigators. The results still hadn’t been publicly released. Ross was flabbergasted. How could Martoma possibly know about those details? It was as if Martoma had seen the presentation he had just seen. But he knew that was impossible.


pages: 384 words: 112,971

What’s Your Type? by Merve Emre

Albert Einstein, anti-communist, card file, correlation does not imply causation, Frederick Winslow Taylor, God and Mammon, Golden Gate Park, hiring and firing, index card, Isaac Newton, job satisfaction, late capitalism, means of production, Menlo Park, mutually assured destruction, Norman Mailer, p-value, Panopticon Jeremy Bentham, Ralph Waldo Emerson, Socratic dialogue, Stanford prison experiment, traveling salesman, upwardly mobile, uranium enrichment, women in the workforce

“Neither of these authors has had formal training in psychology, and consequently little of the very extensive evidence they have developed on the instrument is in a form for immediate assimilation by psychologists generally,” Chauncey warned his staff as he prepared them to start work on the indicator. “Indeed, many of the ideas employed are so different from what psychologists are accustomed to that it has sometimes been difficult to keep from rejecting the whole approach without first examining it closely enough.” Those who worshipped at the altar of facts and figures, of t-tests and p-values, had little patience for Isabel’s kitchen table experiments, the imprecise, if enthusiastic, attempts at validation that had accompanied Forms A, B, C, and D. (“I sometimes kind of shook in my shoes with the old [versions] because the scores would be coming out on the basis of so few questions,” she later recalled.) Although Isabel had taught herself some basic statistics when she was working for Hay during the war, the ETS staff dismissed her autodidacticism with quiet contempt.


pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander

asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, fixed income, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, Thomas Bayes, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-sum game

The probability value of the t statistics is also given for convenience, and this shows that whilst the constant term is not significant, the log return on the S&P 500 is a very highly significant determinant of the Amex log returns.

Table I.4.7 Coefficient estimates for the Amex and S&P 500 model

                 Coefficients    Standard error    t stat      p value
Intercept           −0.0002          0.0003       −0.6665      0.5053
S&P 500 rtn          1.2885          0.0427       30.1698      0.0000

Following the results in Table I.4.7, we may write the estimated model, with t ratios in parentheses, as

    Ŷ = −0.0002 + 1.2885 X
       (−0.6665)  (30.1698)

where X and Y are the daily log returns on the S&P 500 and on Amex, respectively. The Excel output automatically tests whether the explanatory variable should be included in the model, and with a t ratio of 30.1698 this is certainly the case.
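
These t ratios and probability values can be checked by hand from the coefficient and standard error columns; here is a minimal sketch, assuming the sample is large enough that the normal approximation to the t distribution is adequate, using the rounded figures quoted above:

    from scipy.stats import norm

    # Rounded figures from the regression output quoted above.
    coefficients = {"Intercept": (-0.0002, 0.0003), "S&P 500 rtn": (1.2885, 0.0427)}

    for name, (coef, se) in coefficients.items():
        t_ratio = coef / se
        p_value = 2 * norm.sf(abs(t_ratio))   # two-sided, normal approximation
        print(f"{name:12s} t = {t_ratio:8.4f}  p-value = {p_value:.4f}")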


pages: 351 words: 123,876

Beautiful Testing: Leading Professionals Reveal How They Improve Software (Theory in Practice) by Adam Goucher, Tim Riley

Albert Einstein, barriers to entry, Black Swan, call centre, continuous integration, Debian, Donald Knuth, en.wikipedia.org, Firefox, Grace Hopper, index card, Isaac Newton, natural language processing, p-value, performance metric, revision control, six sigma, software as a service, software patent, the scientific method, Therac-25, Valgrind, web application

However, some bugs are more subtle, and so more sophisticated tests may be necessary. The recommendation is to start with the simplest tests and work up to more advanced tests. The simplest tests, besides being easiest to implement, are also the easiest to understand. A software developer is more likely to respond well to being told, “Looks like the average of your generator is 7 when it should be 8,” than to being told, “I’m getting a small p-value from my Kolmogorov-Smirnov test.”

Range Tests

If a probability distribution has a limited range, the simplest thing to test is whether the output values fall in that range. For example, an exponential distribution produces only positive values. If your test detects a single negative value, you’ve found a bug. However, for other distributions, such as the normal, there are no theoretical bounds on the outputs; all output values are possible, though some values are exceptionally unlikely. There is one aspect of output ranges that cannot be tested effectively by black-box testing: boundary values.
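
As an illustrative aside (not the book's code), a range test for an exponential generator can be as simple as the following sketch; generate_exponential is a hypothetical stand-in for the generator under test:

    import random

    def generate_exponential(rate=1.0):
        # Hypothetical generator under test; replace with the real one.
        return random.expovariate(rate)

    def range_test(generator, n=100_000):
        """Range test: an exponential generator must never return a negative value."""
        for i in range(n):
            value = generator()
            if value < 0:
                raise AssertionError(f"bug: negative value {value} on draw {i}")
        return True

    if range_test(generate_exponential):
        print("range test passed: no negative values in 100,000 draws")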


pages: 1,829 words: 135,521

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython by Wes McKinney

business process, Debian, Firefox, general-purpose programming language, Google Chrome, Guido van Rossum, index card, p-value, quantitative trading / quantitative finance, random walk, recommendation engine, sentiment analysis, side project, sorting algorithm, statistical model, type inference

This includes such submodules as:

    Regression models: Linear regression, generalized linear models, robust linear models, linear mixed effects models, etc.
    Analysis of variance (ANOVA)
    Time series analysis: AR, ARMA, ARIMA, VAR, and other models
    Nonparametric methods: Kernel density estimation, kernel regression
    Visualization of statistical model results

statsmodels is more focused on statistical inference, providing uncertainty estimates and p-values for parameters. scikit-learn, by contrast, is more prediction-focused. As with scikit-learn, I will give a brief introduction to statsmodels and how to use it with NumPy and pandas.

1.4 Installation and Setup

Since everyone uses Python for different applications, there is no single solution for setting up Python and required add-on packages. Many readers will not have a complete Python development environment suitable for following along with this book, so here I will give detailed instructions to get set up on each operating system.
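
A small sketch of the statsmodels-versus-scikit-learn contrast mentioned above (synthetic data; not an example from the book): statsmodels reports p-values for the fitted parameters, while scikit-learn's estimator exposes coefficients and predictions:

    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 1))
    y = 1.5 * x[:, 0] + rng.normal(scale=0.5, size=200)

    # statsmodels: inference (standard errors, p-values) for the parameters
    ols = sm.OLS(y, sm.add_constant(x)).fit()
    print(ols.pvalues)                 # p-values for the intercept and slope

    # scikit-learn: prediction-oriented interface
    lr = LinearRegression().fit(x, y)
    print(lr.coef_, lr.predict(x[:3]))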


Braiding Sweetgrass by Robin Kimmerer

back-to-the-land, clean water, commoditize, double helix, invisible hand, music of the spheres, oil shale / tar sands, p-value, Pepto Bismol, Potemkin village, scientific worldview, the built environment, the scientific method

We run experiments on the effects of salinity on the growth of invasive grasses. So we can fix it. We measure and record and analyze in ways that might seem lifeless but to us are the conduits to understanding the inscrutable lives of species not our own. Doing science with awe and humility is a powerful act of reciprocity with the more-than-human world. I’ve never met an ecologist who came to the field for the love of data or for the wonder of a p-value. These are just ways we have of crossing the species boundary, of slipping off our human skin and wearing fins or feathers or foliage, trying to know others as fully as we can. Science can be a way of forming intimacy and respect with other species that is rivaled only by the observations of traditional knowledge holders. It can be a path to kinship. These too are my people. Heart-driven scientists whose notebooks, smudged with salt marsh mud and filled with columns of numbers, are love letters to salmon.


pages: 336 words: 163,867

How to Diagnose and Fix Everything Electronic by Michael Geier

p-value, popular electronics, remote working

If you find no voltage at all, there could be a little sub-regulator on the board to power the micro, and it might be bad. Chapter 11 A-Hunting We Will Go: Signal Tracing and Diagnosis 201 If you see voltage there (typically 5 volts, but possibly less and very occasionally more) but no oscillation, the crystal may be dead. Without a clock to drive it, the micro will sit there like a rock. If you do see oscillation, check that its peak-to-peak (p-p) value is fairly close to the total power supply voltage running the micro. If it’s a 5-volt micro and the oscillation is 1 volt p-p, the micro won’t get clocked. If you have power and a running micro, you should see some life someplace. Lots of products include small backup batteries on their boards. See Figure 11-1. These batteries keep the clock running and preserve user preferences. Loss of battery power causes resetting of data to the default states but doesn’t prevent the product from working.


pages: 660 words: 141,595

Data Science for Business: What You Need to Know About Data Mining and Data-Analytic Thinking by Foster Provost, Tom Fawcett

Albert Einstein, Amazon Mechanical Turk, big data - Walmart - Pop Tarts, bioinformatics, business process, call centre, chief data officer, Claude Shannon: information theory, computer vision, conceptual framework, correlation does not imply causation, crowdsourcing, data acquisition, David Brooks, en.wikipedia.org, Erik Brynjolfsson, Gini coefficient, information retrieval, intangible asset, iterative process, Johann Wolfgang von Goethe, Louis Pasteur, Menlo Park, Nate Silver, Netflix Prize, new economy, p-value, pattern recognition, placebo effect, price discrimination, recommendation engine, Ronald Coase, selection bias, Silicon Valley, Skype, speech recognition, Steve Jobs, supply-chain management, text mining, The Signal and the Noise by Nate Silver, Thomas Bayes, transaction costs, WikiLeaks

There is no fixed number, although practitioners tend to have their own preferences based on experience. However, researchers have developed techniques to decide the stopping point statistically. Statistics provides the notion of a “hypothesis test,” which you might recall from a basic statistics class. Roughly, a hypothesis test tries to assess whether a difference in some statistic is not due simply to chance. In most cases, the hypothesis test is based on a “p-value,” which gives a limit on the probability that the difference in statistic is due to chance. If this value is below a threshold (often 5%, but problem specific), then the hypothesis test concludes that the difference is likely not due to chance. So, for stopping tree growth, an alternative to setting a fixed size for the leaves is to conduct a hypothesis test at every leaf to determine whether the observed difference in (say) information gain could have been due to chance.
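
A rough sketch of such a stopping rule (not the book's code; the class counts below are invented): test a candidate split's contingency table of child nodes versus classes, and stop growing if the p-value exceeds the chosen threshold:

    from scipy.stats import chi2_contingency

    # Hypothetical class counts after a candidate split:
    # rows = child nodes (left, right), columns = classes (positive, negative).
    counts = [[30, 10],
              [12, 28]]

    chi2, p_value, dof, _ = chi2_contingency(counts)
    threshold = 0.05            # problem-specific in practice

    if p_value < threshold:
        print(f"p = {p_value:.4f}: difference unlikely to be chance -> keep splitting")
    else:
        print(f"p = {p_value:.4f}: could be due to chance -> stop growing at this node")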


Not Working by Blanchflower, David G.

active measures, affirmative action, Affordable Care Act / Obamacare, Albert Einstein, bank run, banking crisis, basic income, Berlin Wall, Bernie Madoff, Bernie Sanders, Black Swan, Boris Johnson, business cycle, Capital in the Twenty-First Century by Thomas Piketty, Carmen Reinhart, Clapham omnibus, collective bargaining, correlation does not imply causation, credit crunch, declining real wages, deindustrialization, Donald Trump, estate planning, Fall of the Berlin Wall, full employment, George Akerlof, gig economy, Gini coefficient, Growth in a Time of Debt, illegal immigration, income inequality, indoor plumbing, inflation targeting, job satisfaction, John Bercow, Kenneth Rogoff, labor-force participation, liquidationism / Banker’s doctrine / the Treasury view, longitudinal study, low skilled workers, manufacturing employment, Mark Zuckerberg, market clearing, Martin Wolf, mass incarceration, meta analysis, meta-analysis, moral hazard, Nate Silver, negative equity, new economy, Northern Rock, obamacare, oil shock, open borders, Own Your Own Home, p-value, Panamax, pension reform, plutocrats, Plutocrats, post-materialism, price stability, prisoner's dilemma, quantitative easing, rent control, Richard Thaler, Robert Shiller, Robert Shiller, Ronald Coase, selection bias, selective serotonin reuptake inhibitor (SSRI), Silicon Valley, South Sea Bubble, Thorstein Veblen, trade liberalization, universal basic income, University of East Anglia, urban planning, working poor, working-age population, yield curve

He says, colorfully, the IYI has been wrong, historically, on “Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, Freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, election forecasting models, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.” He doesn’t mean me of course! 24. Letter to the Queen from the British Academy signed by 33 economists including 9 members, ex- and future, of the MPC and civil servants: http://www.feed-charity.org/user/image/besley-hennessy2009a.pdf. 25. Charlie Bean, “Measuring Recession and Recovery: An Economic Perspective” (Speech at RSS Statistics User Forum Conference, October 27, 2010). 26.


pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity by Toby Ord

3D printing, agricultural Revolution, Albert Einstein, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, availability heuristic, Columbian Exchange, computer vision, cosmological constant, cuban missile crisis, decarbonisation, defense in depth, delayed gratification, demographic transition, Doomsday Clock, Drosophila, effective altruism, Elon Musk, Ernest Rutherford, global pandemic, Intergovernmental Panel on Climate Change (IPCC), Isaac Newton, James Watt: steam engine, Mark Zuckerberg, mass immigration, meta analysis, meta-analysis, Mikhail Gorbachev, mutually assured destruction, Nash equilibrium, Norbert Wiener, nuclear winter, p-value, Peter Singer: altruism, planetary scale, race to the bottom, RAND corporation, Ronald Reagan, self-driving car, Stanislav Petrov, Stephen Hawking, Steven Pinker, Stewart Brand, supervolcano, survivorship bias, the scientific method, uranium enrichment

We can also use our survival so far to make an upper bound for the total natural extinction risk. For example, if the risk were above 0.34 percent per century there would have been a 99.9 percent chance of going extinct before now.60 We thus say that risk above 0.34 percent per century is ruled out at the 99.9 percent confidence level—a conclusion that is highly significant by the usual scientific standards (equivalent to a p-value of 0.001).61 So our 2,000 centuries of Homo sapiens suggests a “best-guess” risk estimate between 0 percent and 0.05 percent, with an upper bound of 0.34 percent. But what if Homo sapiens is not the relevant category? We are interested in the survival of humanity, and we may well see this as something broader than our species. For instance, Neanderthals were very similar to Homo sapiens and while the extent of interbreeding between the two is still debated, it is possible that they are best considered as a subspecies.
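
As a back-of-envelope check of the 0.34 percent figure (this is simple arithmetic, not the book's own derivation): if the extinction risk per century were r, the chance of surviving the roughly 2,000 centuries of Homo sapiens would be (1 − r)^2000, so the risk ruled out at the 99.9 percent confidence level solves

    \[
    (1 - r)^{2000} = 0.001
    \quad\Longrightarrow\quad
    r = 1 - 0.001^{1/2000} \approx 0.0034,
    \]

i.e. about 0.34 percent per century; and one extinction-level event in 2,000 centuries corresponds to 1/2000 = 0.05 percent per century, the top of the "best-guess" range.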


God Created the Integers: The Mathematical Breakthroughs That Changed History by Stephen Hawking

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Antoine Gombaud: Chevalier de Méré, Augustin-Louis Cauchy, British Empire, Edmond Halley, Eratosthenes, Fellow of the Royal Society, G4S, Georg Cantor, Henri Poincaré, Isaac Newton, Johannes Kepler, John von Neumann, p-value, Pierre-Simon Laplace, Richard Feynman, Stephen Hawking, Turing machine

If the total number of letters is not exhausted, we will take a third index such that a_{m·n·0}, a_{m·n·1}, a_{m·n·2}, a_{m·n·3}, …, a_{m·n·(P−1)} are, in general, a system of conjoined letters; and in this way we will reach the conclusion that N = P^α, α being a certain number equal to that of the different indices about which we are concerned. The general form of the letters will be being the indices which can take each of the P values 0, 1, 2, 3, …, P−1. Through the way in which we have proceeded we can also see that, all the substitutions in the group H will be of the form because each index corresponds to a system of conjoined letters. If P is not a prime number, we will reason about the group of permutations of any one of the systems of conjoined letters, as we reasoned about the group G, when we replaced each index by a certain number of new indices, and we will find P = R^α, and so forth; whence, finally, N = p^ν, p being a prime number.

Now I say that unless this group of linear substitutions does not always belong [appartienne], as we will see, to the equations which are solvable by radicals, it will always enjoy this property that if in any one of its substitutions there are n letters which are fixed, then n will divide the number of the letters. And, in fact, whatever the number of letters is which remains fixed, we will be able to express this circumstance by linear equations which give all the indices of one of the fixed letters, by means of a certain number of those among them. Giving p values to each of these indices as they remain arbitrary, we will have p^m systems of values, m being a certain number. In the case with which we are concerned, m is necessarily < 2, and, consequently, is found to be between 0 and 1. Therefore the number of substitutions is known to be no greater than p^2(p^2 − 1)(p^2 − p). Now let us consider only the linear substitutions where the letter a_{0·0} does not vary; if, in this case, we find the total number of permutations of a group which contains all the possible linear substitutions, it will be sufficient to multiply this number by p^2.

As a lemma in the theory of primitive equations solvable by radicals, I made in June 1830, in the Bulletin de Férussac an analysis of imaginaries in the theory of numbers. There will be found herewith1 the proof of the following theorems: 1°. In order that a primitive equation be solvable by radicals its degree must be p^ν, p being a prime. 2°. All the permutations of such an equation have the form x_{k, l, m, …} | x_{ak + bl + cm + … + h, a′k + b′l + c′m + … + h′, a″k + …}, k, l, m, … being ν indices, which, taking p values each, denote all the roots. The indices are taken with respect to a modulus p; that is to say, the root will be the same if we add a multiple of p to one of the indices. The group which is obtained on applying all the substitutions of this linear form contains in all p^ν(p^ν − 1)(p^ν − p) … (p^ν − p^{ν−1}) permutations. It happens that in general the equations to which they belong are not solvable by radicals.
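
As a quick editorial illustration (not part of the memoir), the smallest nontrivial case p = 2, ν = 2 gives

    \[
    p^{\nu}(p^{\nu}-1)(p^{\nu}-p) = 4 \cdot 3 \cdot 2 = 24
    \]

permutations: the 24 substitutions k, l ↦ ak + bl + h, a′k + b′l + h′ (taken modulo 2) acting on the p^ν = 4 roots.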


pages: 1,606 words: 168,061

Python Cookbook by David Beazley, Brian K. Jones

don't repeat yourself, Firefox, Guido van Rossum, iterative process, p-value, web application

Solution The ctypes module can be used to create Python callables that wrap around arbitrary memory addresses. The following example shows how to obtain the raw, low-level address of a C function and how to turn it back into a callable object: >>> import ctypes >>> lib = ctypes.cdll.LoadLibrary(None) >>> # Get the address of sin() from the C math library >>> addr = ctypes.cast(lib.sin, ctypes.c_void_p).value >>> addr 140735505915760 >>> # Turn the address into a callable function >>> functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double) >>> func = functype(addr) >>> func <CFunctionType object at 0x1006816d0> >>> # Call the resulting function >>> func(2) 0.9092974268256817 >>> func(0) 0.0 >>> Discussion To make a callable, you must first create a CFUNCTYPE instance. The first argument to CFUNCTYPE() is the return type.


pages: 733 words: 179,391

Adaptive Markets: Financial Evolution at the Speed of Thought by Andrew W. Lo

"Robert Solow", Albert Einstein, Alfred Russel Wallace, algorithmic trading, Andrei Shleifer, Arthur Eddington, Asian financial crisis, asset allocation, asset-backed security, backtesting, bank run, barriers to entry, Berlin Wall, Bernie Madoff, bitcoin, Bonfire of the Vanities, bonus culture, break the buck, Brownian motion, business cycle, business process, butterfly effect, buy and hold, capital asset pricing model, Captain Sullenberger Hudson, Carmen Reinhart, collapse of Lehman Brothers, collateralized debt obligation, commoditize, computerized trading, corporate governance, creative destruction, Credit Default Swap, credit default swaps / collateralized debt obligations, cryptocurrency, Daniel Kahneman / Amos Tversky, delayed gratification, Diane Coyle, diversification, diversified portfolio, double helix, easy for humans, difficult for computers, Ernest Rutherford, Eugene Fama: efficient market hypothesis, experimental economics, experimental subject, Fall of the Berlin Wall, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, Fractional reserve banking, framing effect, Gordon Gekko, greed is good, Hans Rosling, Henri Poincaré, high net worth, housing crisis, incomplete markets, index fund, interest rate derivative, invention of the telegraph, Isaac Newton, James Watt: steam engine, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Meriwether, Joseph Schumpeter, Kenneth Rogoff, London Interbank Offered Rate, Long Term Capital Management, longitudinal study, loss aversion, Louis Pasteur, mandelbrot fractal, margin call, Mark Zuckerberg, market fundamentalism, martingale, merger arbitrage, meta analysis, meta-analysis, Milgram experiment, money market fund, moral hazard, Myron Scholes, Nick Leeson, old-boy network, out of africa, p-value, paper trading, passive investing, Paul Lévy, Paul Samuelson, Ponzi scheme, predatory finance, prediction markets, price discovery process, profit maximization, profit motive, quantitative hedge fund, quantitative trading / quantitative finance, RAND corporation, random walk, randomized controlled trial, Renaissance Technologies, Richard Feynman, Richard Feynman: Challenger O-ring, risk tolerance, Robert Shiller, Robert Shiller, Sam Peltzman, Shai Danziger, short selling, sovereign wealth fund, Stanford marshmallow experiment, Stanford prison experiment, statistical arbitrage, Steven Pinker, stochastic process, stocks for the long run, survivorship bias, Thales and the olive presses, The Great Moderation, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, Thomas Malthus, Thorstein Veblen, Tobin tax, too big to fail, transaction costs, Triangle Shirtwaist Factory, ultimatum game, Upton Sinclair, US Airways Flight 1549, Walter Mischel, Watson beat the top human players on Jeopardy!, WikiLeaks, Yogi Berra, zero-sum game

More precisely, the variance of random walk increments is linear in the time interval of the increment. See Lo and MacKinlay (1988) for details. 4. Of course, the expected payoff of most investments also increases with the investment horizon, enough to entice many to be long-term investors. We’ll come back to this issue later in chapter 8 when we explore the strange world of hedge funds, but for now let’s focus on the variance. 5. The p-value of a z-score of 7.51 is 2.9564 × 10⁻¹⁴. This result was based on an equally weighted index of all stocks traded on the New York, American, and NASDAQ stock exchanges during our sample. When we applied our test to a value-weighted version of that stock index—one where larger stocks received proportionally greater weight—the rejection was less dramatic but still compelling: the odds of the Random Walk Hypothesis in this case were slightly less than 1 out of 100. 6.
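
That tail probability is easy to verify (a quick check, not part of the book's notes): the upper-tail p-value of a standard normal z-score of 7.51 is

    from scipy.stats import norm

    z = 7.51
    print(norm.sf(z))   # one-sided upper-tail p-value, roughly 2.96e-14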


pages: 968 words: 224,513

The Art of Assembly Language by Randall Hyde

Donald Knuth, P = NP, p-value, sorting algorithm, Von Neumann architecture, Y2K

For example: static p:procedure( i:int32; c:char ) := &SomeProcedure; Note that SomeProcedure must be a procedure whose parameter list exactly matches p's parameter list (i.e., two value parameters, the first is an int32 parameter and the second is a char parameter). To indirectly call this procedure, you could use either of the following sequences: push( Value_for_i ); push( Value_for_c ); call( p ); or p( Value_for_i, Value_for_c ); The high-level language syntax has the same features and restrictions as the high-level syntax for a direct procedure call. The only difference is the actual call instruction HLA emits at the end of the calling sequence. Although all the examples in this section use static variable declarations, don't get the idea that you can declare simple procedure pointers only in the static or other variable declaration sections.


pages: 892 words: 91,000

Valuation: Measuring and Managing the Value of Companies by Tim Koller, McKinsey, Company Inc., Marc Goedhart, David Wessels, Barbara Schwimmer, Franziska Manoury

activist fund / activist shareholder / activist investor, air freight, barriers to entry, Basel III, BRICs, business climate, business cycle, business process, capital asset pricing model, capital controls, Chuck Templeton: OpenTable:, cloud computing, commoditize, compound rate of return, conceptual framework, corporate governance, corporate social responsibility, creative destruction, credit crunch, Credit Default Swap, discounted cash flows, distributed generation, diversified portfolio, energy security, equity premium, fixed income, index fund, intangible asset, iterative process, Long Term Capital Management, market bubble, market friction, Myron Scholes, negative equity, new economy, p-value, performance metric, Ponzi scheme, price anchoring, purchasing power parity, quantitative easing, risk/return, Robert Shiller, Robert Shiller, shareholder value, six sigma, sovereign wealth fund, speech recognition, stocks for the long run, survivorship bias, technology bubble, time value of money, too big to fail, transaction costs, transfer pricing, value at risk, yield curve, zero-coupon bond

At the end of the research phase, there are three possible outcomes: success combined with an increase in the value of a marketable drug to $5,594 million, success combined with a decrease in the value of a marketable drug to $3,327 million, and failure leading to a drug value of $0.

27. The formula for estimating the upward probability is

    [(1 + k)^T − d] / (u − d) = (1.073 − 0.77) / (1.30 − 0.77) = 0.86,

where k is the expected return on the asset.

EXHIBIT 35.18 Decision Tree: R&D Option with Technological and Commercial Risk ($ million)
[The decision tree itself (research, testing, and marketing phases, with technological risk events resolving into success or failure with probability p, and commercial risk events moving the drug value up with risk-neutral probability q* = 74% or down with probability 26%) does not reproduce cleanly in this extract.]
Note: NPV = net present value of project; q* = binomial (risk-neutral) probability of an increase in marketable drug value; p = probability of technological success.


The Art of Computer Programming: Sorting and Searching by Donald Ervin Knuth

card file, Claude Shannon: information theory, complexity theory, correlation coefficient, Donald Knuth, double entry bookkeeping, Eratosthenes, Fermat's Last Theorem, G4S, information retrieval, iterative process, John von Neumann, linked data, locality of reference, Menlo Park, Norbert Wiener, NP-complete, p-value, Paul Erdős, RAND corporation, refrigerator car, sorting algorithm, Vilfredo Pareto, Yogi Berra, Zipf's Law

In fact, it is easy to see from (1) that

    e_n = a_{n−1},
    d_n = a_{n−1} + e_{n−1} = a_{n−1} + a_{n−2},
    c_n = a_{n−1} + d_{n−1} = a_{n−1} + a_{n−2} + a_{n−3},                       (3)
    b_n = a_{n−1} + c_{n−1} = a_{n−1} + a_{n−2} + a_{n−3} + a_{n−4},
    a_n = a_{n−1} + b_{n−1} = a_{n−1} + a_{n−2} + a_{n−3} + a_{n−4} + a_{n−5},

where a_0 = 1 and where we let a_n = 0 for n = −1, −2, −3, −4. The pth-order Fibonacci numbers F_n^{(p)} are defined by the rules

    F_n^{(p)} = F_{n−1}^{(p)} + F_{n−2}^{(p)} + · · · + F_{n−p}^{(p)},  for n ≥ p;
    F_n^{(p)} = 0,  for 0 ≤ n ≤ p − 2;
    F_{p−1}^{(p)} = 1.

In other words, we start with p − 1 0s, then 1, and then each number is the sum of the preceding p values. When p = 2, this is the usual Fibonacci sequence; for larger values of p the sequence was apparently first studied by V. Schlegel in El Progreso Matematico 4 (1894), 173–174. Schlegel derived the generating function

    ∑_{n≥0} F_n^{(p)} z^n = z^{p−1} / (1 − z − z^2 − · · · − z^p) = (z^{p−1} − z^p) / (1 − 2z + z^{p+1}).

The last equation of (3) shows that the number of runs on T1 during a six-tape polyphase merge is a fifth-order Fibonacci number: a_n = F_{n+4}^{(5)}. In general, if we set P = T − 1, the polyphase merge distributions for T tapes will correspond to Pth-order Fibonacci numbers in the same way.
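
As an editorial illustration (not Knuth's code), the pth-order Fibonacci numbers defined above can be generated directly from the rule "p − 1 zeros, then 1, then each term is the sum of the preceding p terms":

    def pth_order_fibonacci(p, count):
        """First `count` pth-order Fibonacci numbers."""
        fib = [0] * (p - 1) + [1]
        while len(fib) < count:
            fib.append(sum(fib[-p:]))
        return fib[:count]

    # p = 2 gives the usual Fibonacci numbers; p = 5 gives the fifth-order numbers,
    # which the text identifies with the tape-T1 run counts via a_n = F_{n+4}^{(5)}.
    print(pth_order_fibonacci(2, 10))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    print(pth_order_fibonacci(5, 10))   # [0, 0, 0, 0, 1, 1, 2, 4, 8, 16]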


UNIX® Network Programming, Volume 1: The Sockets Networking API, 3rd Edition by W. Richard Stevens, Bill Fenner, Andrew M. Rudoff

failed state, fudge factor, information retrieval, p-value, RFC: Request For Comment, Richard Stallman, web application

As we mentioned earlier, group addresses are recognized and handled specially by receiving interfaces.

[Figure 21.2 Format of IPv6 multicast addresses. Two layouts are shown: the standard format (ff, 4-bit flags 0 0 P T, here 0 0 0 T, 4-bit scope, 80 bits of zero, 32-bit group ID) and the unicast-based format (ff, 4-bit flags 0 0 1 1, 4-bit scope, a plen field and a 64-bit prefix, then a 32-bit group ID).]

Two formats are defined for IPv6 multicast addresses, as shown in Figure 21.2. When the P flag is 0, the T flag differentiates between a well-known multicast group (a value of 0) and a transient multicast group (a value of 1). A P value of 1 designates a multicast address that is assigned based on a unicast prefix (defined in RFC 3306 [Haberman and Thaler 2002]). If the P flag is 1, the T flag also must be 1 (i.e., unicast-based multicast addresses are always transient), and the plen and prefix fields are set to the prefix length and value of the unicast prefix, respectively. The upper two bits of this field are reserved.
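
As a rough illustration of the layout just described (not the book's code), the flags and scope nibbles of an IPv6 multicast address live in the second byte and can be decoded as follows; the sample addresses are arbitrary:

    import ipaddress

    def multicast_flags(addr):
        """Decode the 4-bit flags (0 0 P T) and 4-bit scope of an IPv6
        multicast address (byte 0 is 0xff, byte 1 holds flags|scope)."""
        packed = ipaddress.IPv6Address(addr).packed
        assert packed[0] == 0xff, "not an IPv6 multicast address"
        flags, scope = packed[1] >> 4, packed[1] & 0x0F
        return {"P": bool(flags & 0x2), "T": bool(flags & 0x1), "scope": scope}

    print(multicast_flags("ff02::1"))   # well-known, link-local: P=0, T=0, scope=2
    print(multicast_flags("ff3e::1"))   # unicast-prefix-based style flags: P=1, T=1, global scope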