p-value

55 results


pages: 719 words: 104,316

R Cookbook by Paul Teetor

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Debian, en.wikipedia.org, p-value, quantitative trading / quantitative finance, statistical model

However, it produces an annoying warning message, shown here at the bottom of the output, when the p-value is below 0.01:

> library(tseries)
> adf.test(x)

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -4.3188, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(x) : p-value smaller than printed p-value

Fortunately, I can muzzle the function by calling it inside suppressWarnings(...):

> suppressWarnings(adf.test(x))

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -4.3188, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Notice that the warning message disappeared. The message is not entirely lost, because R retains it internally. I can retrieve the message at my leisure by using the warnings function:

> warnings()
Warning message:
In adf.test(x) : p-value smaller than printed p-value

Some functions also produce "messages" (in R terminology), which are even more benign than warnings.
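For readers working in Python rather than R, a rough parallel, assuming the statsmodels package is installed (the series x below is a made-up stand-in): adfuller returns the p-value directly, and the standard library warnings module plays the role of suppressWarnings for any warnings a call happens to emit.

import warnings
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
x = rng.normal(size=200)                 # hypothetical stationary series standing in for x

with warnings.catch_warnings():
    warnings.simplefilter("ignore")      # silence any warnings, much like suppressWarnings(...)
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(x, autolag="AIC")

print(f"ADF statistic = {stat:.4f}, p-value = {pvalue:.4g}")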

Solution

Use the table function to produce a contingency table from the two factors. Then use the summary function to perform a chi-squared test of the contingency table:

> summary(table(fac1,fac2))

The output includes a p-value. Conventionally, a p-value of less than 0.05 indicates that the variables are likely not independent, whereas a p-value exceeding 0.05 fails to provide any such evidence.

Discussion

This example performs a chi-squared test on the contingency table of Recipe 9.3 and yields a p-value of 0.01255:

> summary(table(initial,outcome))
Number of cases in table: 100
Number of factors: 2
Test for independence of all factors:
        Chisq = 8.757, df = 2, p-value = 0.01255

The small p-value indicates that the two factors, initial and outcome, are probably not independent. Practically speaking, we conclude there is some connection between the variables.
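A sketch of the same test in Python, assuming SciPy is available; the 3 × 2 table of counts below is made up, not the book's initial/outcome data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table of counts (rows: one factor, columns: the other).
table = np.array([[20, 15],
                  [10, 25],
                  [18, 12]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"Chisq = {chi2:.3f}, df = {dof}, p-value = {p:.5f}")

As in the R output, a small p-value argues against independence of the two factors.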

Do you notice the extreme righthand column containing double asterisks (**), a single asterisk (*), and a period (.)? That column highlights the significant variables. The line labeled "Signif. codes" at the bottom gives a cryptic guide to the flags' meanings:

***       p-value between 0 and 0.001
**        p-value between 0.001 and 0.01
*         p-value between 0.01 and 0.05
.         p-value between 0.05 and 0.1
(blank)   p-value between 0.1 and 1.0

The column labeled Std. Error is the standard error of the estimated coefficient. The column labeled t value is the t statistic from which the p-value was calculated.

Residual standard error

Residual standard error: 1.625 on 26 degrees of freedom

This reports the standard error of the residuals (σ)—that is, the sample standard deviation of ε.

R2 (coefficient of determination)

Multiple R-squared: 0.4981,     Adjusted R-squared: 0.4402

R2 is a measure of the model's quality.
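Those thresholds are easy to encode; a small sketch (the function name is ours, not R's) that reproduces the Signif. codes mapping:

def significance_flag(p):
    """Return the R-style significance flag for a p-value."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    if p < 0.1:
        return "."
    return ""

for p in (0.0004, 0.008, 0.03, 0.07, 0.4):
    print(p, significance_flag(p) or "(blank)")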

Beginning R: The Statistical Programming Language by Mark Gardener

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

correlation coefficient, distributed generation, natural language processing, New Urbanism, p-value, statistical model

You might prefer to display the values as whole numbers, and you can adjust the output "on the fly" by using the round() command to choose how many decimal places to display, like so:

> round(bird.cs$exp, 0)
              Garden Hedgerow Parkland Pasture Woodland
Blackbird         60       11       24       4        2
Chaffinch         17        3        7       1        1
Great Tit         40        7       16       3        1
House Sparrow     44        8       17       3        2
Robin              8        2        3       1        0
Song Thrush        6        1        2       0        0

In this instance you chose to use no decimals at all, and so use 0 as an instruction in the round() command.

Monte Carlo Simulation

You can decide to determine the p-value by a slightly different method and can use a Monte Carlo simulation to do this. You add an extra instruction to the chisq.test() command, simulate.p.value = TRUE, like so:

> chisq.test(bird.df, simulate.p.value = TRUE, B = 2500)

        Pearson's Chi-squared test with simulated p-value (based on 2500 replicates)

data:  bird.df
X-squared = 78.2736, df = NA, p-value = 0.0003998

The default is that simulate.p.value = FALSE and that B = 2000. The latter is the number of replicates to use in the Monte Carlo test, which is set to 2500 for this example.

Yates' Correction for 2 × 2 Tables

When you have a 2 × 2 contingency table it is common to apply the Yates' correction.
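SciPy's chi-squared test has no simulate.p.value argument, but the idea can be sketched by hand: compute the observed statistic, generate many tables from a null model, and report the fraction of simulated statistics at least as large as the observed one. The sketch below resamples cell counts from a multinomial with the independence-expected cell probabilities, which is a simplification of R's fixed-margin resampling, and the table is made up rather than the book's bird data.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)

observed = np.array([[30, 10, 5],
                     [12, 22, 9]])          # hypothetical counts

chi2_obs, _, dof, expected = chi2_contingency(observed, correction=False)

n = observed.sum()
probs = (expected / n).ravel()              # cell probabilities under independence
B = 2000                                    # number of Monte Carlo replicates

count = 0
for _ in range(B):
    sim = rng.multinomial(n, probs).reshape(observed.shape)
    count += chi2_contingency(sim, correction=False)[0] >= chi2_obs

p_sim = (count + 1) / (B + 1)               # same "+1" convention R uses for simulated p-values
print(f"observed X-squared = {chi2_obs:.3f}, simulated p-value = {p_sim:.4f}")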

Now run the chi-squared test again, but this time use a Monte Carlo simulation with 3000 replicates to determine the p-value:

> (bees.cs = chisq.test(bees, simulate.p.value = TRUE, B = 3000))

        Pearson's Chi-squared test with simulated p-value (based on 3000 replicates)

data:  bees
X-squared = 120.6531, df = NA, p-value = 0.0003332

4. Look at a portion of the data as a 2 × 2 contingency table. Examine the effect of Yates' correction on this subset:

> bees[1:2, 4:5]
               Honey.bee Carder.bee
Thistle               12          8
Vipers.bugloss        13         27

> chisq.test(bees[1:2, 4:5], correct = FALSE)

        Pearson's Chi-squared test

data:  bees[1:2, 4:5]
X-squared = 4.1486, df = 1, p-value = 0.04167

> chisq.test(bees[1:2, 4:5], correct = TRUE)

        Pearson's Chi-squared test with Yates' continuity correction

data:  bees[1:2, 4:5]
X-squared = 3.0943, df = 1, p-value = 0.07857

5. Look at the last two columns, representing two bee species. Carry out a goodness of fit test to determine if the proportions of visits are the same:

> with(bees, chisq.test(Honey.bee, p = Carder.bee, rescale = T))

        Chi-squared test for given probabilities

data:  Honey.bee
X-squared = 58.088, df = 4, p-value = 7.313e-12

Warning message:
In chisq.test(Honey.bee, p = Carder.bee, rescale = T) :
  Chi-squared approximation may be incorrect
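The Yates comparison in step 4 is easy to reproduce in Python with SciPy, whose correction argument plays the role of R's correct =; with the 2 × 2 subset quoted above it gives the same two statistics (4.149 uncorrected, 3.094 corrected).

import numpy as np
from scipy.stats import chi2_contingency

# The 2 x 2 subset from the excerpt: rows Thistle / Vipers.bugloss,
# columns Honey.bee / Carder.bee.
subset = np.array([[12,  8],
                   [13, 27]])

chi2_raw, p_raw, _, _ = chi2_contingency(subset, correction=False)
chi2_yates, p_yates, _, _ = chi2_contingency(subset, correction=True)

print(f"without correction: X-squared = {chi2_raw:.4f}, p-value = {p_raw:.5f}")
print(f"with Yates:         X-squared = {chi2_yates:.4f}, p-value = {p_yates:.5f}")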

6. Carry out the same goodness of fit test, but use a simulation to determine the p-value (you can abbreviate the command):

> with(bees, chisq.test(Honey.bee, p = Carder.bee, rescale = T, sim = T))

        Chi-squared test for given probabilities with simulated p-value (based on 2000 replicates)

data:  Honey.bee
X-squared = 58.088, df = NA, p-value = 0.0004998

7. Now look at a single column and carry out a goodness of fit test. This time omit the p = instruction to test the fit to equal probabilities:

> chisq.test(bees$Honey.bee)

        Chi-squared test for given probabilities

data:  bees$Honey.bee
X-squared = 2.5, df = 4, p-value = 0.6446

How It Works

The basic form of the chisq.test() command will operate on a matrix or data frame.
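In Python, scipy.stats.chisquare plays the role of chisq.test for given probabilities. A minimal sketch with made-up visit counts (not the book's bees data); rescaling the expected counts to the observed total mirrors R's rescale.p = TRUE.

import numpy as np
from scipy.stats import chisquare

honey  = np.array([10, 8, 22, 12, 13])      # hypothetical visits by one species
carder = np.array([ 5, 9, 15,  8, 27])      # hypothetical visits by the other

# Rescale the "expected" pattern so it sums to the observed total (like rescale.p = TRUE).
expected = carder / carder.sum() * honey.sum()
stat, p = chisquare(honey, f_exp=expected)
print(f"X-squared = {stat:.3f}, df = {len(honey) - 1}, p-value = {p:.4g}")

# Omitting f_exp tests the fit to equal probabilities, as in chisq.test(bees$Honey.bee).
stat_eq, p_eq = chisquare(honey)
print(f"equal-probability test: X-squared = {stat_eq:.3f}, p-value = {p_eq:.4f}")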


pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

However, we could clearly see them in the figures obtained using the DFA method.

[FIGURE 6.5 Analysis results for the EEM index using the entire period available (2003–2009): cumulative distributions of the normalized returns for T = 1, 4, 8, and 16 (tail exponents α ≈ 1.50–1.60), the DFA analysis (α = 0.74338), and the Hurst analysis (H = 0.57794).]

[FIGURE 6.6 Analysis results for the S&P 500 index using the entire period available (2001–2009): cumulative distributions of the normalized returns for T = 1, 4, 8, and 16 (tail exponents α ≈ 1.40–1.55), the DFA analysis (α = 0.67073), and the Hurst analysis (H = 0.56657).]

[FIGURE 6.7 Several normality tests for 2003 using the three indices. iShares MSCI EAFE Index (EFA), 1/3/03 to 1/2/04: mean 0.0005256, standard deviation 0.004476, N = 252, Anderson–Darling 0.444 (p-value 0.283), Kolmogorov–Smirnov 0.055 (p-value 0.064). S&P 500, 1/2/03 to 12/31/03: mean 0.0004035, standard deviation 0.004663, N = 252, Anderson–Darling 0.418 (p-value 0.327), Ryan–Joiner 0.995 (p-value 0.094), Kolmogorov–Smirnov 0.039 (p-value >0.150). MSCI Emerging Markets (EEM), 4/15/03 to 12/31/03: mean 0.001153, standard deviation 0.004840, N = 180, Anderson–Darling 0.272 (p-value 0.668).]

TABLE 6.10 Dow Jones Index and its components: p-value of the ADF and PP tests of unit root, and the H and α exponents calculated using R/S and DFA analysis for all components and the index. The excerpt lists the ticker symbols DJI, AA, AIG, AXP, BA, C, CAT, DD, DIS, GE, GM, HD, HON, HPQ, IBM, INTC, JNJ, JPM, KO, MCD, MMM, MO, MRK, MSFT, PFE, PG, T, UTX, VZ, WMT, and XOM with their company names, but the numerical columns are not included in this excerpt.

It is worth mentioning that while the stationarity tests reject the presence of the unit root in the characteristic polynomial, that does not necessarily mean that the data is stationary, only that the particular type of nonstationarity indicated by a unit root is not present.

[FIGURE 6.1 Plot of the empirical CDF of the returns for Stock 1. (a) The original CDF. (b) The same empirical CDF rescaled so that the discontinuities are clearly seen.]

TABLE 6.1 DFA and Hurst analysis. For each of the 26 stocks the table reports the p-values of the ADF, PP, and KPSS tests together with the DFA and Hurst exponent estimates. The ADF and PP p-values are below 0.01 for every stock; the KPSS p-values are above 0.1 for all stocks except Stock 10 (0.07686), Stock 18 (0.076), and Stock 25 (0.02718); the DFA and Hurst estimates lie roughly between 0.3 and 0.8.

Abbreviations: ADF, augmented Dickey–Fuller unit-root test; PP, Phillips–Perron unit-root test; KPSS, Kwiatkowski–Phillips–Schmidt–Shin test for stationarity; DFA, detrended fluctuation analysis; Hurst, rescaled range analysis.
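A minimal Python sketch of the same style of check, assuming statsmodels is installed (the Phillips–Perron test is not in statsmodels' core; third-party packages such as arch provide one), on a simulated series rather than the stocks in the table:

import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(7)
returns = rng.normal(0, 0.01, size=1000)     # a stationary stand-in for daily returns

adf_stat, adf_p, *_ = adfuller(returns, autolag="AIC")
# kpss may warn that its p-value lies outside the tabulated range; that is expected here.
kpss_stat, kpss_p, *_ = kpss(returns, regression="c", nlags="auto")

# ADF: null = unit root, so a small p-value argues against a unit root.
# KPSS: null = stationarity, so a large p-value is consistent with stationarity.
print(f"ADF  p-value: {adf_p:.4f}")
print(f"KPSS p-value: {kpss_p:.4f}")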



pages: 589 words: 69,193

Mastering Pandas by Femi Anthony

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Amazon Web Services, Bayesian statistics, correlation coefficient, correlation does not imply causation, Debian, en.wikipedia.org, Internet of things, natural language processing, p-value, random walk, side project, statistical model, Thomas Bayes

In more formal terms, we would normally define a threshold or alpha value and reject the null hypothesis if the p-value ≤ α, or fail to reject it otherwise. The typical values for α are 0.05 or 0.01. The following list explains the different ranges of the p-value:

p-value < 0.01: there is VERY strong evidence against H0
0.01 < p-value < 0.05: there is strong evidence against H0
0.05 < p-value < 0.1: there is weak evidence against H0
p-value > 0.1: there is little or no evidence against H0

Therefore, in this case, we would reject the null hypothesis, give credence to Intelligenza's claim, and state that their claim is highly significant. The evidence against the null hypothesis in this case is significant. There are two methods that we use to determine whether to reject the null hypothesis: the p-value approach and the rejection region approach. The approach that we used in the preceding example was the latter one.
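A tiny sketch of that rule of thumb (the function name and wording are ours, not the book's):

def evidence_against_h0(p):
    """Rough verbal interpretation of a p-value, following the thresholds above."""
    if p < 0.01:
        return "very strong evidence against H0"
    if p < 0.05:
        return "strong evidence against H0"
    if p < 0.1:
        return "weak evidence against H0"
    return "little or no evidence against H0"

for p in (0.003, 0.03, 0.08, 0.4):
    print(p, "->", evidence_against_h0(p))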

The alpha and p-values

In order to conduct an experiment to decide for or against our null hypothesis, we need to come up with an approach that will enable us to make the decision in a concrete and measurable way. To do this test of significance, we have to consider two numbers—the p-value of the test statistic and the threshold level of significance, which is also known as alpha. The p-value is the probability that the result we observe would have occurred by chance alone, assuming that the null hypothesis is true. The p-value can also be thought of as the probability of obtaining a test statistic as extreme as or more extreme than the actual obtained test statistic, given that the null hypothesis is true. The alpha value is the threshold against which we compare p-values. This gives us a cut-off point at which to accept or reject the null hypothesis.

In general, the rule is as follows: if the p-value is less than or equal to alpha (p ≤ .05), then we reject the null hypothesis and state that the result is statistically significant. If the p-value is greater than alpha (p > .05), then we have failed to reject the null hypothesis, and we say that the result is not statistically significant. The seemingly arbitrary values of alpha in use are one of the shortcomings of the frequentist methodology, and there are many questions concerning this approach. The following article in the journal Nature highlights some of the problems: http://www.nature.com/news/scientific-method-statistical-errors-1.14700. For more details on this topic, refer to:

http://statistics.about.com/od/Inferential-Statistics/a/What-Is-The-Difference-Between-Alpha-And-P-Values.htm
http://bit.ly/1GzYX1P
http://en.wikipedia.org/wiki/P-value

Type I and Type II errors

There are two types of errors, as explained here: Type I error: in this type of error, we reject H0 when in fact H0 is true.
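A small simulation makes the Type I error concrete. Assuming SciPy is available, run many experiments in which H0 is actually true and count how often p ≤ α; the long-run rate should come out near 5 percent.

import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10000

false_rejections = 0
for _ in range(n_experiments):
    sample = rng.normal(0, 1, size=30)        # H0 (true mean = 0) really does hold
    _, p = ttest_1samp(sample, 0)
    false_rejections += p <= alpha            # a rejection here is a Type I error

print(f"Type I error rate ~ {false_rejections / n_experiments:.3f} (alpha = {alpha})")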


pages: 579 words: 76,657

Data Science from Scratch: First Principles with Python by Joel Grus

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

correlation does not imply causation, natural language processing, Netflix Prize, p-value, Paul Graham, recommendation engine, SpamAssassin, statistical model

One way to convince yourself that this is a sensible estimate is with a simulation:

extreme_value_count = 0
for _ in range(100000):
    num_heads = sum(1 if random.random() < 0.5 else 0    # count # of heads
                    for _ in range(1000))                 # in 1000 flips
    if num_heads >= 530 or num_heads <= 470:              # and count how often
        extreme_value_count += 1                          # the # is 'extreme'

print extreme_value_count / 100000                        # 0.062

Since the p-value is greater than our 5% significance, we don't reject the null. If we instead saw 532 heads, the p-value would be:

two_sided_p_value(531.5, mu_0, sigma_0)   # 0.0463

which is smaller than the 5% significance, which means we would reject the null. It's the exact same test as before. It's just a different way of approaching the statistics. Similarly, we would have:

upper_p_value = normal_probability_above
lower_p_value = normal_probability_below

For our one-sided test, if we saw 525 heads we would compute:

upper_p_value(524.5, mu_0, sigma_0)   # 0.061

which means we wouldn't reject the null. If we saw 527 heads, the computation would be:

upper_p_value(526.5, mu_0, sigma_0)   # 0.047

and we would reject the null.

Warning: Make sure your data is roughly normally distributed before using normal_probability_above to compute p-values.

In a situation like this, where n is much larger than k, we can use normal_cdf and still feel good about ourselves:

def p_value(beta_hat_j, sigma_hat_j):
    if beta_hat_j > 0:
        # if the coefficient is positive, we need to compute twice the
        # probability of seeing an even *larger* value
        return 2 * (1 - normal_cdf(beta_hat_j / sigma_hat_j))
    else:
        # otherwise twice the probability of seeing a *smaller* value
        return 2 * normal_cdf(beta_hat_j / sigma_hat_j)

p_value(30.63, 1.174)    # ~0   (constant term)
p_value(0.972, 0.079)    # ~0   (num_friends)
p_value(-1.868, 0.131)   # ~0   (work_hours)
p_value(0.911, 0.990)    # 0.36 (phd)

(In a situation not like this, we would probably be using statistical software that knows how to compute the t-distribution, as well as how to compute the exact standard errors.)

While most of the coefficients have very small p-values (suggesting that they are indeed nonzero), the coefficient for "PhD" is not "significantly" different from zero, which makes it likely that the coefficient for "PhD" is random rather than meaningful. In more elaborate regression scenarios, you sometimes want to test more elaborate hypotheses about the data, such as "at least one of the βj is non-zero" or "β1 equals β2 and β3 equals β4," which you can do with an F-test, which, alas, falls outside the scope of this book.

So a 5%-significance test involves using normal_probability_below to find the cutoff below which 95% of the probability lies:

hi = normal_upper_bound(0.95, mu_0, sigma_0)
# is 526 (< 531, since we need more probability in the upper tail)
type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability    # 0.936

This is a more powerful test, since it no longer rejects when X is below 469 (which is very unlikely to happen if H1 is true) and instead rejects when X is between 526 and 531 (which is somewhat likely to happen if H1 is true).

p-values

An alternative way of thinking about the preceding test involves p-values. Instead of choosing bounds based on some probability cutoff, we compute the probability—assuming H0 is true—that we would see a value at least as extreme as the one we actually observed. For our two-sided test of whether the coin is fair, we compute:

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        # if x is greater than the mean, the tail is what's greater than x
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        # if x is less than the mean, the tail is what's less than x
        return 2 * normal_probability_below(x, mu, sigma)

If we were to see 530 heads, we would compute:

two_sided_p_value(529.5, mu_0, sigma_0)   # 0.062

Note: Why did we use 529.5 instead of 530?

Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals by David Aronson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Andrew Wiles, asset allocation, availability heuristic, backtesting, Black Swan, capital asset pricing model, cognitive dissonance, compound rate of return, computerized trading, Daniel Kahneman / Amos Tversky, distributed generation, Elliott wave, en.wikipedia.org, feminist movement, hindsight bias, index fund, invention of the telescope, invisible hand, Long Term Capital Management, mental accounting, meta analysis, meta-analysis, p-value, pattern recognition, Paul Samuelson, Ponzi scheme, price anchoring, price stability, quantitative trading / quantitative finance, Ralph Nelson Elliott, random walk, retrograde motion, revision control, risk tolerance, risk-adjusted returns, riskless arbitrage, Robert Shiller, Robert Shiller, Sharpe ratio, short selling, source of truth, statistical model, systematic trading, the scientific method, transfer pricing, unbiased observer, yield curve, Yogi Berra

The value 0.10 is the sample statistic's p-value. This fact is equivalent to saying that if the rule's true return were zero, there is a 0.10 probability that its return in a back test would attain a value as high as +3.5 percent or higher due to sampling variability (chance). This is illustrated in Figure 5.9.

p-value, Statistical Significance, and Rejecting the Null Hypothesis

A second name for the p-value of the test statistic is the statistical significance of the test. The smaller the p-value, the more statistically significant the test result. A statistically significant result is one for which the p-value is low enough to warrant a rejection of H0. The smaller the p-value of a test statistic, the more confident we can be that a rejection of the null hypothesis is a correct decision. The p-value can be looked upon as the degree to which the observed value of the test statistic conforms to the null hypothesis (H0).

Said differently, a conditional probability is a probability that is conditional upon some other fact being true. In a hypothesis test, this conditional probability is given the special name p-value. Specifically, it is the probability that the observed value of the test statistic could have occurred conditioned upon (given that) the hypothesis being tested (H0) is true. The smaller the p-value, the greater is our justification for calling into question the truth of H0. If the p-value is less than a threshold, which must be defined before the test is carried out, H0 is rejected and HA accepted. The p-value can also be interpreted as the probability H0 will be erroneously rejected when H0 is in fact true. P-value also has a graphical interpretation. It is equal to the fraction of the sampling distribution’s total area that lies at values equal to and greater than the observed value of the test statistic.

The p-value can be looked upon as the degree to which the observed value of the test statistic conforms to the null hypothesis (H0). Larger p-values mean greater conformity, and smaller values mean less conformity. This is simply another way of saying that the more surprising (improbable) an observation is in relation to a given view of the world (the hypothesis), the more likely it is that that world view is false. How small does the p-value need to be to justify a rejection of H0? This is problem specific and relates to the cost that would be incurred by an erroneous rejection. We will deal with the matter of errors and their costs in a moment. However, there are some standards that are commonly used.

[FIGURE 5.9 P-Value: the fractional area of the sampling distribution greater than +3.5 percent, that is, the conditional probability of a mean return of +3.5 percent or more given that H0 is true; the area beyond the test statistic is 0.10 of the total sampling distribution.]
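The graphical interpretation is easy to reproduce numerically. A minimal sketch, assuming the null sampling distribution of the mean return is approximated by bootstrapping zero-centered returns (the numbers are made up and will not reproduce the book's 0.10):

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical back-tested rule returns and their observed mean.
returns = rng.normal(0.002, 0.04, size=250)
observed_mean = returns.mean()

# Null sampling distribution: center the returns at zero (H0: true mean = 0)
# and bootstrap the sample mean many times.
centered = returns - returns.mean()
boot_means = np.array([rng.choice(centered, size=centered.size, replace=True).mean()
                       for _ in range(10000)])

# One-sided p-value: fraction of the null distribution at or beyond the observed mean.
p_value = np.mean(boot_means >= observed_mean)
print(f"observed mean = {observed_mean:.4%}, one-sided p-value = {p_value:.3f}")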

Everydata: The Misinformation Hidden in the Little Data You Consume Every Day by John H. Johnson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Affordable Care Act / Obamacare, Black Swan, business intelligence, Carmen Reinhart, cognitive bias, correlation does not imply causation, Daniel Kahneman / Amos Tversky, Donald Trump, en.wikipedia.org, Kenneth Rogoff, labor-force participation, lake wobegon effect, Long Term Capital Management, Mercator projection, Mercator projection distort size, especially Greenland and Africa, meta analysis, meta-analysis, Nate Silver, obamacare, p-value, PageRank, pattern recognition, publication bias, QR code, randomized controlled trial, risk-adjusted returns, Ronald Reagan, selection bias, statistical model, The Signal and the Noise by Nate Silver, Thomas Bayes, Tim Cook: Apple, wikimedia commons, Yogi Berra

It's a measure of how probable it is that the effect we're seeing is real (rather than due to chance occurrence), which is why it's typically measured with a p-value. P, in this case, stands for probability. If you accept p-values as a measure of statistical significance, then the lower your p-value is, the less likely it is that the results you're seeing are due to chance alone.17 One oft-accepted measure of statistical significance is a p-value of less than .05 (which equates to 5 percent probability). The widespread use of this threshold goes back to the 1920s, when it was popularized by Ronald Fisher, a mathematician who studied the effect of fertilizer on crops, among other things.18 Now, we're not here to debate whether a p-value of .05 is an appropriate standard for statistical significance, or even whether p-values themselves are the right way to determine statistical significance.19 Instead, we're here to tell you that p-values—including the .05 threshold—are the standard in many applications.

And that's why they matter to you. Because when you see an article about the latest scientific discovery, it's quite likely that it has only been accepted by the scientific community—and reported by the media—because it has a p-value below .05. It may seem somewhat arbitrary, but, as Derek Daniels, PhD (an associate professor at the University at Buffalo) told us, "having a line allows us to stay objective. If there's no line, then we make a big deal out of a p-value of 0.06 when it helps us, and we ignore a p-value of 0.04 when it hurts us."20

Take a Deep Breath

Now let's go back to the secondhand smoke study, and see what the research actually said—that passive smoking "did not statistically significantly increase lung cancer risk."

[…] a horse's statistical odds of winning a race might be 1/3, which means it is probable that the horse will win one out of every three races; in betting jargon, the odds are typically the reverse, so this same horse would have 2–1 odds against, which means it has a 2/3 chance of losing)

Omitted variable—A variable that plays a role in a relationship, but may be overlooked or otherwise not included; omitted variables are one of the primary reasons why correlation doesn't equal causation

Outlier—A particular observation that doesn't fit; it may be much higher (or lower) than all the other data, or perhaps it just doesn't fall into the pattern of everything else that you're seeing

P-hacking—Named after p-values, p-hacking is a term for the practice of repeatedly analyzing data, trying to find ways to make nonsignificant results significant

P-value—A way to measure statistical significance; the lower your p-value is, the less likely it is that the results you're seeing are due to chance

Population—The entire set of data or observations that you want to study and draw inferences about; statisticians rarely have the ability to look at the entire population in a study, although it could be possible with a small, well-defined group (e.g., the voting habits of all 100 U.S. senators)

Prediction—See forecast

Prediction error—A way to measure uncertainty in the future, essentially by comparing the predicted results to the actual outcomes, once they occur

Prediction interval—The range in which we expect to see the next data point

Probabilistic forecast—A forecast where you determine the probability of an outcome (e.g., there is a 30 percent chance of thunderstorms tomorrow)

Probability—The likelihood (typically expressed as a percentage, fraction, or decimal) that an outcome will occur

Proxy—A factor that you believe is closely related (but not identical) to another difficult-to-measure factor (e.g., IQ is a proxy for innate ability)

Random—When an observed pattern is due to chance, rather than some observable process or event

Risk—A term that can mean different things to different people; in general, risk takes into account not only the probability of an event, but also the consequences

Sample—Part of the full population (e.g., the set of Challenger launches with O-ring failures)

Sample selection—A potential statistical problem that arises when the way a sample has been chosen is directly related to the outcomes one is studying; also, sometimes used to describe the process of determining a sample from a population

Sampling error—The uncertainty of not knowing if a sample represents the true value in the population or not

Selection bias—A potential concern when a sample is comprised of those who chose to participate, a factor which may bias the results

Spurious correlation—A statistical relationship between two factors that has no practical or economic meaning, or one that is driven by an omitted variable (e.g., the relationship between murder rates and ice cream consumption)

Statistic—A numeric measure that describes an aspect of the data (e.g., a mean, a median, a mode)

Statistical impact—Having a statistically significant effect of some undetermined size

Statistical significance—A probability-based method to determine whether an observed effect is truly present in the data, or just due to random chance

Summary statistic—Metric that provides information about one or more aspects of the data; averages and aggregated data are two examples of summary statistics

Weighted average—An average calculated by assigning each value a weight (based on the value's relative importance)


pages: 408 words: 85,118

Python for Finance by Yuxing Yan

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset-backed security, business intelligence, capital asset pricing model, constrained optimization, correlation coefficient, distributed generation, diversified portfolio, implied volatility, market microstructure, P = NP, p-value, quantitative trading / quantitative finance, Sharpe ratio, time value of money, value at risk, volatility smile, zero-sum game

Then, we conduct two tests: test whether the mean is 5.0, and test whether the mean is zero:

>>> from scipy import stats
>>> np.random.seed(1235)
>>> x = stats.norm.rvs(size=10000)
>>> print("T-value P-value (two-tail)")
>>> print(stats.ttest_1samp(x,5.0))
>>> print(stats.ttest_1samp(x,0))
T-value P-value (two-tail)
(array(-495.266783341032), 0.0)
(array(-0.26310321925083124), 0.79247644375164772)
>>>

For the first test, in which we test whether the time series has a mean of 5.0, we reject the null hypothesis, since the T-value is -495.3 and the P-value is 0. For the second test, we accept the null hypothesis, since the T-value is close to -0.26 and the P-value is 0.79. In the following program, we test whether the mean daily return of IBM in 2013 is zero:

from scipy import stats
from matplotlib.finance import quotes_historical_yahoo

ticker='ibm'
begdate=(2013,1,1)
enddate=(2013,11,9)
p=quotes_historical_yahoo(ticker,begdate,enddate,asobject=True,adjusted=True)
ret=(p.aclose[1:] - p.aclose[:-1])/p.aclose[:-1]
print('   Mean   T-value  P-value ')
print(round(mean(ret),5), stats.ttest_1samp(ret,0))

   Mean   T-value  P-value
(-0.00024, (array(-0.296271094280657), 0.76730904089713181))

From the previous results, we know that the average daily return for IBM is about -0.024 percent.

The T-value is -0.29 while the P-value is 0.77. Thus, the mean is statistically not different from zero.

Tests of equal means and equal variances

Next, we test whether the variances of IBM and DELL returns in 2013 are equal or not. The function called sp.stats.bartlett performs Bartlett's test for equal variances, with a null hypothesis that all input samples are from populations with equal variances. The outputs are the T-value and the P-value:

import scipy as sp
from matplotlib.finance import quotes_historical_yahoo

begdate=(2013,1,1)
enddate=(2013,11,9)

def ret_f(ticker,begdate,enddate):
    p = quotes_historical_yahoo(ticker,begdate,enddate,asobject=True,adjusted=True)
    return((p.open[1:] - p.open[:-1])/p.open[:-1])

y=ret_f('IBM',begdate,enddate)
x=ret_f('DELL',begdate,enddate)
print(sp.stats.bartlett(x,y))

(5.1377132006045105, 0.023411467035559311)

With a T-value of 5.13 and a P-value of 2.3 percent, we conclude that these two stocks have different variances for their daily stock returns in 2013 if we choose a significance level of 5 percent.

The following is the related Python code:

import numpy as np
import statsmodels.api as sm
import scipy as sp
import scipy.stats

def breusch_pagan_test(y,x):
    results=sm.OLS(y,x).fit()
    resid=results.resid
    n=len(resid)
    sigma2 = sum(resid**2)/n
    f = resid**2/sigma2 - 1
    results2=sm.OLS(f,x).fit()
    fv=results2.fittedvalues
    bp=0.5 * sum(fv**2)
    df=results2.df_model
    # p-value from the chi-squared distribution with df degrees of freedom
    p_value=1-sp.stats.chi2.cdf(bp,df)
    return round(bp,6), df, round(p_value,7)

np.random.seed(12345)
n=100
x=[]
error1=np.random.normal(0,1,n)
error2=np.random.normal(0,2,n)
for i in range(n):
    if i%2==1:
        x.append(1)
    else:
        x.append(-1)

y1=x+np.array(x)+error1
y2=np.zeros(n)
for i in range(n):
    if i%2==1:
        y2[i]=x[i]+error1[i]
    else:
        y2[i]=x[i]+error2[i]

print('y1 vs. x (we expect to accept the null hypothesis)')
bp=breusch_pagan_test(y1,x)
print('BP value, df, p-value')
print('bp =', bp)

bp=breusch_pagan_test(y2,x)
print('y2 vs. x (we expect to reject the null hypothesis)')
print('BP value, df, p-value')
print('bp =', bp)

For the result of running the regression of y1 against x, we know that its residual value would be homogeneous; that is, the variance or standard deviation is a constant.

Analysis of Financial Time Series by Ruey S. Tsay

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Asian financial crisis, asset allocation, Bayesian statistics, Black-Scholes formula, Brownian motion, capital asset pricing model, compound rate of return, correlation coefficient, data acquisition, discrete time, frictionless, frictionless market, implied volatility, index arbitrage, Long Term Capital Management, market microstructure, martingale, p-value, pattern recognition, random walk, risk tolerance, short selling, statistical model, stochastic process, stochastic volatility, telemarketer, transaction costs, value at risk, volatility smile, Wiener process, yield curve

The Ljung–Box statistics of the standardized shocks give Q(10) = 13.66 with p-value 0.19, confirming that the mean equation is adequate. However, the Ljung–Box statistics for the squared standardized shocks show Q(10) = 23.83 with p-value 0.008. The volatility equation is inadequate at the 5% level. We refine the model by considering an ARCH(2) model and obtain

rt = 0.0225 + at,
σ²t = 0.0113 + 0.226a²t−1 + 0.108a²t−2,      (3.12)

where the standard errors of the parameters are 0.006, 0.002, 0.135, and 0.094, respectively. The coefficient of a²t−1 is marginally significant at the 10% level, but that of a²t−2 is only slightly greater than its standard error. The Ljung–Box statistics for the squared standardized shocks give Q(10) = 8.82 with p-value 0.55. Consequently, the fitted ARCH(2) model appears to be adequate.
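A sketch of the same diagnostic in Python, assuming statsmodels is installed; acorr_ljungbox is applied to hypothetical standardized residuals and to their squares, mirroring the two Q(10) checks above.

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
std_resid = rng.standard_normal(500)      # stand-in for standardized shocks from a fitted model

# Ljung-Box test on the shocks (mean equation check) and on their squares (volatility check).
print(acorr_ljungbox(std_resid, lags=[10], return_df=True))
print(acorr_ljungbox(std_resid ** 2, lags=[10], return_df=True))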

The two sample ACFs are very close to each other, and they suggest that the serial correlations of monthly IBM stock returns are very small, if any. The sample ACFs are all within their two standard-error limits, indicating that they are not significant at the 5% level. In addition, for the simple returns, the Ljung–Box statistics give Q(5) = 5.4 and Q(10) = 14.1, which correspond to p-values of 0.37 and 0.17, respectively, based on chi-squared distributions with 5 and 10 degrees of freedom. For the log returns, we have Q(5) = 5.8 and Q(10) = 13.7 with p-values 0.33 and 0.19, respectively. The joint tests confirm that monthly IBM stock returns have no significant serial correlations. Figure 2.2 shows the same for the monthly returns of the value-weighted index from the Center for Research in Security Prices (CRSP), University of Chicago. There are some significant serial correlations at the 5% level for both return series.

If a fitted model is found to be inadequate, it must be refined. Consider the residual series of the fitted AR(3) model for the monthly value-weighted simple returns. We have Q(10) = 15.8 with p-value 0.027 based on its asymptotic chi-squared distribution with 7 degrees of freedom. Thus, the null hypothesis of no residual serial correlation in the first 10 lags is rejected at the 5% level, but not at the 1% level. If the model is refined to an AR(5) model, then we have

rt = 0.0092 + 0.107rt−1 − 0.001rt−2 − 0.123rt−3 + 0.028rt−4 + 0.069rt−5 + ât,

with σ̂a = 0.054. The AR coefficients at lags 1, 3, and 5 are significant at the 5% level. The Ljung–Box statistics give Q(10) = 11.2 with p-value 0.048. This model shows some improvements and appears to be marginally adequate at the 5% significance level. The mean of rt based on the refined model is also very close to 0.01, showing that the two models have similar long-term implications.

2.4.3 Forecasting

Forecasting is an important application of time series analysis.

Commodity Trading Advisors: Risk, Performance Analysis, and Selection by Greg N. Gregoriou, Vassilios Karavas, François-Serge Lhabitant, Fabrice Douglas Rouah

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Asian financial crisis, asset allocation, backtesting, capital asset pricing model, collateralized debt obligation, commodity trading advisor, compound rate of return, constrained optimization, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, distributed generation, diversification, diversified portfolio, dividend-yielding stocks, fixed income, high net worth, implied volatility, index arbitrage, index fund, interest rate swap, iterative process, linear programming, London Interbank Offered Rate, Long Term Capital Management, market fundamentalism, merger arbitrage, Mexican peso crisis / tequila crisis, p-value, Pareto efficiency, Ponzi scheme, quantitative trading / quantitative finance, random walk, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, stochastic process, survivorship bias, systematic trading, technology bubble, transaction costs, value at risk, zero-sum game

[TABLE 21.4 ARMA models for CTA excess returns (CTA Exc1 through CTA Exc10), reporting coefficient estimates with their p-values, ADF test statistics (ranging from about −6.77 to −3.99), and R² values.]

[TABLE 21.5 ARMA models for CTA returns, 2000 to 2003, for CTA #3, #4, and #8, reporting the constant, AR(1), AR(2), MA(1), and MA(2) estimates with their p-values and R² values of 0.16, 0.04, and 0.09, respectively.]

As shown in Table 21.5, there is a significant improvement for CTA #3 and #8 (evidenced by the increased R²).
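A rough sketch of fitting an ARMA model and reading off coefficient p-values in Python, assuming statsmodels is available; the series is simulated, not a CTA return series.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)

# Simulate an ARMA(1,1)-like series as a stand-in for monthly CTA returns.
n = 240
eps = rng.normal(0, 0.02, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.01 - 0.5 * y[t - 1] + eps[t] + 0.9 * eps[t - 1]

# Fit ARMA(1,1) (ARIMA with d = 0); the summary's coefficient table reports
# each estimate with its standard error, z statistic, and p-value.
res = ARIMA(y, order=(1, 0, 1)).fit()
print(res.summary())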

[TABLE 21.3 ARMA models for the returns of ten CTAs (CTA1 through CTA10), reporting the constant and AR/MA coefficient estimates with their p-values, ADF test statistics (between about −5.79 and −3.43), R² values, and Chow F-statistics with their p-values. All ADF tests are at the 99 percent confidence level; CTA3 rejects the hypothesis of a unit root at 90 percent.]

[A companion table reports the same style of ARMA estimates and p-values for ten CTA series; all ADF tests are at the 99 percent confidence level.]

The Spearman correlation coefficients show some ability to detect persistence when large differences are found in CTA data.

[TABLE 3.4 EGR performance persistence results from Monte Carlo generated data sets with no persistence present (obtained by restricting a = 1). For three data generation methods, the table reports mean returns for the top, middle, and bottom thirds and for the top and bottom three funds, together with the proportions of rejections (positive and negative z) and failures to reject for the p-value test and for the test of two means; rejection rates lie in the 2-4 percent range and the fail-to-reject proportions exceed 0.92.]

[TABLE 3.5 EGR performance persistence results from Monte Carlo generated data sets with persistence present (obtained by allowing a to vary). For four data generation methods, mean returns separate clearly across the top, middle, and bottom thirds, and the rejection rates for the p-value test and the test of two means rise substantially for the first three methods (reaching 100 percent for the first) while remaining low for the fourth.]


pages: 284 words: 79,265

The Half-Life of Facts: Why Everything We Know Has an Expiration Date by Samuel Arbesman

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Alfred Russel Wallace, Amazon Mechanical Turk, Andrew Wiles, bioinformatics, British Empire, Cesare Marchetti: Marchetti’s constant, Chelsea Manning, Clayton Christensen, cognitive bias, cognitive dissonance, conceptual framework, David Brooks, demographic transition, double entry bookkeeping, double helix, Galaxy Zoo, guest worker program, Gödel, Escher, Bach, Ignaz Semmelweis: hand washing, index fund, invention of movable type, Isaac Newton, John Harrison: Longitude, Kevin Kelly, life extension, Marc Andreessen, meta analysis, meta-analysis, Milgram experiment, Nicholas Carr, p-value, Paul Erdős, Pluto: dwarf planet, publication bias, randomized controlled trial, Richard Feynman, Richard Feynman, Rodney Brooks, social graph, social web, text mining, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Thomas Malthus, Tyler Cowen: Great Stagnation

On the other hand, imagine if we had gathered a much larger group and still had the same fractions: out of 500 left-handers, 300 carried L, while out of 500 right-handers, only 220 were carriers for L. If we ran the exact same test, we would get a much lower p-value. Now it's less than 0.0001. This means that there is less than a one-hundredth of 1 percent chance that the differences are due to chance alone. The larger the sample we get, the better we can test our questions. The smaller the p-value, the more robust our findings. But to publish a result in a scientific journal, you don't need a minuscule p-value. In general, you need a p-value less than 0.05 or, sometimes, 0.01. For 0.05, this means that there is a one in twenty probability that the result being reported is in fact not real! Comic strip writer Randall Munroe illustrated some of the failings of this threshold for scientific publication: The comic shows some scientists testing whether jelly beans cause acne.
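Going back to the larger sample described above (300 of 500 left-handers versus 220 of 500 right-handers carrying L), a quick sketch with SciPy confirms that the p-value is indeed far below 0.0001:

import numpy as np
from scipy.stats import chi2_contingency

#                carrier  non-carrier
table = np.array([[300, 200],    # left-handers
                  [220, 280]])   # right-handers

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p-value = {p:.2e}")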

If you ever delve a bit below the surface when reading about a scientific result, you will often bump into the term p-value. P-values are an integral part of determining how new knowledge is created. More important, they give us a way of estimating the possibility of error. Anytime a scientist tries to discover something new or validate an exciting and novel hypothesis, she tests it against something else. Specifically, our scientist tests it against a version of the world where the hypothesis would not be true. This state of the world, where our intriguing hypothesis is not true and all that we see is exactly just as boring as we pessimistically expect, is known as the null hypothesis. Whether the world conforms to our exciting hypothesis or not can be determined by p-values. Let's use an example. Imagine we think that a certain form of a gene—let's call it L—is more often found in left-handed people than in right-handed people, and is therefore associated with left-handedness.

The science of statistics is designed to answer this question by asking it in a more precise fashion: What is the chance that there actually is an equal frequency of left-handers with L and right-handers with L, but we simply happened to get an uneven batch? We know that when flipping a coin ten times, we don’t necessarily get exactly five heads and five tails. The same is true in the null hypothesis scenario for our L experiment. Enter p-values. Using sophisticated statistical analyses, we can reduce this complicated question to a single number: the p-value. This provides us with the probability that our result, which appears to support our hypothesis, is simply due to chance. For example, using certain assumptions, we can calculate what the p-value is for the above results: 0.16, or 16 percent. What this means is that there is about a one in six chance that this result is simply due to sampling variation (getting a few more L left-handers and a few less L right-handed carriers than we expected, if they are of equal frequency).


pages: 205 words: 20,452

Data Mining in Time Series Databases by Mark Last, Abraham Kandel, Horst Bunke

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

4chan, call centre, computer vision, discrete time, information retrieval, iterative process, NP-complete, p-value, pattern recognition, random walk, sensor fusion, speech recognition, web application

Results of the CD hypothesis testing on the 'Manufacturing' database:

Month   e_MK−1,K   e_MK−1,K−1     d      H(95%)   CD: 1 − p-value   XP: 1 − p-value
1          —           —          —        —            —                 —
2       14.10%      12.10%      2.00%    4.80%        58.50%            78.30%
3       11.70%      10.40%      1.30%    3.40%        54.40%            98.80%
4       10.60%       9.10%      1.50%    2.90%        68.60%            76.50%
5       11.90%      10.10%      1.80%    2.80%        78.90%           100.00%
6        6.60%       8.90%      2.30%    2.30%        95.00%            63.10%

[Fig. 2. Summary of implementing the change detection methodology on the 'Manufacturing' database (1 − p-value), showing the CD and XP confidence levels by month.]

Table 10. XP confidence level of all independent and dependent variables in the 'Manufacturing' database (1 − p-value). For the domains CAT GRP, MRKT Code, Duration, Time to operate, Quantity, and Customer GRP, the confidence levels in months 2 through 6 are at or very near 100% (the lowest entry is 99.8%).

According to the change detection methodology, during all six consecutive months there was no significant change in the rules describing the relationships between the candidate and the target variables (which is our main interest).

Table 7 (results for the artificially generated time series database; see Fig. 1):

Period   Change introduced   e_MK−1,K   e_MK−1,K−1     d      H(95%)   CD: 1 − p-value   XP: 1 − p-value
1              No                —          —          —        —            —                 —
2              No             20.30%      24.00%      3.70%    4.30%        91.90%            92.50%
3              Yes            32.80%      19.80%     13.00%    3.20%       100.00%           100.00%
4              No             26.60%      24.60%      2.00%    2.80%        88.20%           100.00%
5              No             26.30%      26.40%      0.10%    2.50%        52.60%            99.90%
6              Yes            18.80%      27.20%      8.40%    2.20%       100.00%           100.00%
7              No             22.10%      22.00%      0.10%    2.00%        53.40%            52.80%

[Fig. 1. Summary of implementing the change detection methodology on an artificially generated time series database (1 − p-value), showing the CD and XP confidence levels for periods 2 through 7.]

Table 8. Influence of discarding the detected change (illustration).

Outcomes of XP by validating the sixth month on the fifth and the first month in the 'Manufacturing' database (p-value): for the domains CAT GRP, MRKT Code, Duration, Time to Operate, Quantity, and Customer GRP the XP confidence level (1 − p-value) is 100% in every validation scenario, while for the Target variable the reported levels are 98.4%, 100%, and 63.1%. The three scenarios compared are month 5 validated by month 6, month 1 validated by month 6, and months 1 to 5 validated by month 6.

[Fig. 3. CD confidence level (1 − p-value) outcomes of validating the sixth month on the fifth and the first month in the 'Manufacturing' database.]


pages: 204 words: 58,565

Keeping Up With the Quants: Your Guide to Understanding and Using Analytics by Thomas H. Davenport, Jinho Kim

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Black-Scholes formula, business intelligence, business process, call centre, computer age, correlation coefficient, correlation does not imply causation, Credit Default Swap, en.wikipedia.org, feminist movement, Florence Nightingale: pie chart, forensic accounting, global supply chain, Hans Rosling, hypertext link, invention of the telescope, inventory management, Jeff Bezos, margin call, Moneyball by Michael Lewis explains big data, Myron Scholes, Netflix Prize, p-value, performance metric, publish or perish, quantitative hedge fund, random walk, Renaissance Technologies, Robert Shiller, Robert Shiller, self-driving car, sentiment analysis, six sigma, Skype, statistical model, supply-chain management, text mining, the scientific method

Rare or unusual data (often represented by a p-value below a specified threshold) is an indication that H0 is false, which constitutes a statistically significant result and support of the alternative hypothesis. Independent variable: A variable whose value is known and used to help predict or explain a dependent variable. For example, if you wish to predict the quality of a vintage wine using various predictors (average growing season temperature, harvest rainfall, winter rainfall, and the age of the vintage), the various predictors would serve as independent variables. Alternative names are explanatory variable, predictor variable, and regressor. p-value: When performing a hypothesis test, the p-value gives the probability of data occurrence under the assumption that H0 is true. Small p-values are an indication of rare or unusual data from H0, which in turn provides support that H0 is actually false (and thus support of the alternative hypothesis).

A value of 5 percent signifies that we need data that occurs less than 5 percent of the time from H0 (if H0 were indeed true) for us to doubt H0 and reject it as being true. In practice, this is often assessed by calculating a p-value; p-values less than alpha are an indication that H0 is rejected and the alternative supported. t-test or Student’s t-test: A test statistic that tests whether the means of two groups are equal, or whether the mean of one group has a specified value. Type I error or α error: This error occurs when the null hypothesis is true, but it is rejected. In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level α. So, the probability of incorrectly rejecting a true null hypothesis equals α and thus this error is also called α error. a. For the descriptions in this section, we’ve referred to the pertinent definitions in Wikipedia, Heinz Kohler’s Statistics for Business and Economics (2002), and Dell’s Analytics Cheat Sheet (2012, Tables 6 and 8).
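As a concrete illustration of comparing a p-value against a chosen alpha (a sketch, not from the book; the two samples below are simulated), a two-sample Student's t-test in Python might look like this:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)   # simulated measurements for group A
group_b = rng.normal(loc=11.0, scale=2.0, size=30)   # simulated measurements for group B

alpha = 0.05
t_stat, p_value = stats.ttest_ind(group_a, group_b)  # tests whether the two means are equal

print("t =", round(t_stat, 3), " p =", round(p_value, 4))
if p_value < alpha:
    print("Reject H0: the group means appear to differ.")
else:
    print("Fail to reject H0 at the", alpha, "level.")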

This response would not only have been reassuring to the wife but persuasive to her husband as well. In statistical hypothesis testing, the probability of 0.003 calculated above is called the p-value—the probability of obtaining a test statistic (e.g., Z-value of 2.75 in this case) at least as extreme as the one that was actually observed (a pregnancy that would last at least ten months and five days), assuming that the null hypothesis is true. In this example the null hypothesis (H0) is “This baby is my husband’s.” In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level. In this case a p-value of 0.003 would result in the rejection of the null hypothesis even at the 1 percent significance level—typically the lowest level anyone uses. Normally, then, we reject the null hypothesis that this baby is the San Diego Reader’s husband’s baby.
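The 0.003 figure can be reproduced directly from the Z-value with a one-line tail calculation for the normal distribution; a minimal sketch in Python (not from the book):

from scipy import stats

z = 2.75
# One-sided p-value: the probability of a Z at least this extreme if H0 is true.
p_value = stats.norm.sf(z)    # survival function, i.e. 1 - CDF
print(round(p_value, 4))      # about 0.003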


pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy by Sharon Bertsch McGrayne

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Bayesian statistics, bioinformatics, British Empire, Claude Shannon: information theory, Daniel Kahneman / Amos Tversky, double helix, Edmond Halley, Fellow of the Royal Society, full text search, Henri Poincaré, Isaac Newton, John Markoff, John Nash: game theory, John von Neumann, linear programming, meta analysis, meta-analysis, Nate Silver, p-value, Pierre-Simon Laplace, placebo effect, prediction markets, RAND corporation, recommendation engine, Renaissance Technologies, Richard Feynman, Richard Feynman, Richard Feynman: Challenger O-ring, Ronald Reagan, speech recognition, statistical model, stochastic process, Thomas Bayes, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, Turing machine, Turing test, uranium enrichment, Yom Kippur War

Newton, as Jeffreys pointed out, derived his law of gravity 100 years before Laplace proved it by discovering Jupiter’s and Saturn’s 877-year cycle: “There has not been a single date in the history of the law of gravitation when a modern significance test would not have rejected all laws [about gravitation] and left us with no law.”50 Bayes, on the other hand, “makes it possible to modify a law that has stood criticism for centuries without the need to suppose that its originator and his followers were useless blunderers.”51 Jeffreys concluded that p-values fundamentally distorted science. Frequentists, he complained, “appear to regard observations as a basis for possibly rejecting hypotheses, but in no case for supporting them.”52 But odds are that at least some of the hypotheses Fisher rejected were worth investigating or were actually true. A frequentist who tests a precise hypothesis and obtains a p-value of .04, for example, can consider that significant evidence against the hypothesis. But Bayesians say that even with a .01 p-value (which many frequentists would see as extremely strong evidence against a hypothesis) the odds in its favor are still 1 to 9 or 10—“not earth-shaking,” says Jim Berger, a Bayesian theorist at Duke University. P-values still irritate Bayesians. Steven N. Goodman, a distinguished Bayesian biostatistician at Johns Hopkins Medical School, complained in 1999, “The p-value is almost nothing sensible you can think of.
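One standard way to put numbers on Berger's point (the formula is not given in this excerpt, so treat it as an illustrative assumption) is the Sellke-Bayarri-Berger calibration, which says that for p < 1/e the Bayes factor in favor of a point null can never be smaller than -e·p·ln(p); a short sketch:

import math

def min_bayes_factor(p):
    # Sellke-Bayarri-Berger lower bound on the Bayes factor for H0, valid for p < 1/e.
    return -math.e * p * math.log(p)

for p in (0.05, 0.01):
    bf0 = min_bayes_factor(p)
    # Starting from 50:50 prior odds, the posterior odds against H0 are at most 1/bf0.
    print(f"p = {p}: odds against H0 at most about {1 / bf0:.1f} to 1")

For p = 0.01 the bound works out to odds of roughly 8 to 1 against the null, which is the order of magnitude Berger has in mind when he calls such evidence "not earth-shaking".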

As the statistician Dennis Lindley wrote, Jeffreys “would admit a probability for the existence of the greenhouse effect, whereas most [frequentist] statisticians would not and would confine their probabilities to the data on CO2, ozone, heights of the oceans, etc.”49 Jeffreys was particularly annoyed by Fisher’s measures of uncertainty, his “p-values” and significance levels. The p-value was a probability statement about data, given the hypothesis under consideration. Fisher had developed them for dealing with masses of agricultural data; he needed some way to determine which should be trashed, which filed away, and which followed up on immediately. Comparing two hypotheses, he could reject the chaff and save the wheat. Technically, p-values let laboratory workers state that their experimental outcome offered statistically significant evidence against a hypothesis if the outcome (or a more extreme outcome) had only a small probability (under the hypothesis) of having occurred by chance alone.

Jahn reported that the random event generator produced 18,471 more examples (0.018%) of human influence on his sensitive microelectronic equipment than could be expected with chance alone. Even with a p-value as small as 0.00015, the frequentist would reject the hypothesis (and conclude in favor of psychokinetic powers) while the same evidence convinces a Bayesian that the hypothesis against spiritualism is almost certainly true. Six years later, Jimmie Savage, Harold Lindman, and Ward Edwards at the University of Michigan showed that results using Bayes and the frequentist’s p-values could differ by significant amounts even with everyday-sized data samples; for instance, a Bayesian with any sensible prior and a sample of only 20 would get an answer ten times or more larger than the p-value. Lindley ran afoul of Fisher’s temper when he reviewed Fisher’s third book and found “what I thought was a very basic, serious error in it: Namely, that [Fisher’s] fiducial probability doesn’t obey the rules of probability.


pages: 321 words: 97,661

How to Read a Paper: The Basics of Evidence-Based Medicine by Trisha Greenhalgh

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

call centre, complexity theory, conceptual framework, correlation coefficient, correlation does not imply causation, deskilling, knowledge worker, meta analysis, meta-analysis, microbiome, New Journalism, p-value, personalized medicine, placebo effect, publication bias, randomized controlled trial, selection bias, the scientific method

In order to demonstrate that A has caused B (rather than B causing A, or A and B both being caused by C), you need more than a correlation coefficient. Box 5.1 gives some criteria, originally developed by Sir Austin Bradford Hill [14], which should be met before assuming causality. Probability and confidence Have ‘p-values’ been calculated and interpreted appropriately? One of the first values a student of statistics learns to calculate is the p-value—that is the probability that any particular outcome would have arisen by chance. Standard scientific practice, which is essentially arbitrary, usually deems a p-value of less than one in twenty (expressed as p < 0.05, and equivalent to a betting odds of twenty to one) as ‘statistically significant’, and a p-value of less than one in a hundred (p < 0.01) as ‘statistically highly significant’. By definition, then, one chance association in twenty (this must be around one major published result per journal issue) will appear to be significant when it isn't, and one in a hundred will appear highly significant when it is really what my children call a ‘fluke’.
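The "one chance association in twenty" point is easy to see in a quick simulation (a sketch, not from the book): run many comparisons in which nothing is really going on and count how many nevertheless clear p < 0.05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_per_group = 1000, 50
false_positives = 0

for _ in range(n_tests):
    # Both groups are drawn from the same distribution, so the null hypothesis is true.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# Expect roughly 50 'significant' results out of 1000 purely by chance.
print(false_positives, "of", n_tests, "null comparisons came out significant")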

A result in the statistically significant range (p < 0.05 or p < 0.01 depending on what you have chosen as the cutoff) suggests that the authors should reject the null hypothesis (i.e. the hypothesis that there is no real difference between two groups). But as I have argued earlier (see section ‘Were preliminary statistical questions addressed?’), a p-value in the non-significant range tells you that either there is no difference between the groups or there were too few participants to demonstrate such a difference if it existed. It does not tell you which. The p-value has a further limitation. Guyatt and colleagues conclude thus, in the first article of their ‘Basic Statistics for Clinicians’ series on hypothesis testing using p-values. Why use a single cut-off point [for statistical significance] when the choice of such a point is arbitrary? Why make the question of whether a treatment is effective a dichotomy (a yes-no decision) when it would be more appropriate to view it as a continuum?

If they are not, a paired t or other paired test should be used instead. 3. Only a single pair of measurements should be made on each participant, as the measurements made on successive participants need to be statistically independent of each other if we are to end up with unbiased estimates of the population parameters of interest. 4. Every r-value should be accompanied by a p-value, which expresses how likely an association of this strength would be to have arisen by chance (see section ‘Have ‘p-values’ been calculated and interpreted appropriately?’), or a confidence interval, which expresses the range within which the ‘true’ R-value is likely to lie (see section ‘Have confidence intervals been calculated, and do the authors' conclusions reflect them?’). (Note that lower case ‘r’ represents the correlation coefficient of the sample, whereas upper case ‘R’ represents the correlation coefficient of the entire population.)


pages: 339 words: 112,979

Unweaving the Rainbow by Richard Dawkins

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Any sufficiently advanced technology is indistinguishable from magic, Arthur Eddington, complexity theory, correlation coefficient, David Attenborough, discovery of DNA, double helix, Douglas Engelbart, Douglas Engelbart, I think there is a world market for maybe five computers, Isaac Newton, Jaron Lanier, Mahatma Gandhi, music of the spheres, Necker cube, p-value, phenotype, Ralph Waldo Emerson, Richard Feynman, Richard Feynman, Ronald Reagan, Solar eclipse in 1919, Steven Pinker, Zipf's Law

When we say that an effect is statistically significant, we must always specify a so-called p-value. This is the probability that a purely random process would have generated a result at least as impressive as the actual result. A p-value of 2 in 10,000 is pretty impressive, but it is still possible that there is no genuine pattern there. The beauty of doing a proper statistical test is that we know how probable it is that there is no genuine pattern there. Conventionally, scientists allow themselves to be swayed by p-values of 1 in 100, or even as high as 1 in 20: far less impressive than 2 in 10,000. What p-value you accept depends upon how important the result is, and upon what decisions might follow from it. If all you are trying to decide is whether it is worth repeating the experiment with a larger sample, a p-value of 0.05, or 1 in 20, is quite acceptable.

Even though there is a 1 in 20 chance that your interesting result would have happened anyway by chance, not much is at stake: the error is not a costly one. If the decision is a life and death matter, as in some medical research, a much lower p-value than 1 in 20 should be sought. The same is true of experiments that purport to show highly controversial results, such as telepathy or 'paranormal' effects. As we briefly saw in connection with DNA fingerprinting, statisticians distinguish false positive from false negative errors, sometimes called type 1 and type 2 errors respectively. A type 2 error, or false negative, is a failure to detect an effect when there really is one. A type 1 error, or false positive, is the opposite: concluding that there really is something going on when actually there is nothing but randomness. The p-value is the measure of the probability that you have made a type 1 error. Statistical judgement means steering a middle course between the two kinds of error.

Birds may be programmed to learn to adjust their policy as a result of their statistical experience. Whether they learn or not, successfully hunting animals must usually behave as if they are good statisticians. (I hope it is not necessary, by the way, to plod through the usual disclaimer: No, no, the birds aren't consciously working it out with calculator and probability tables. They are behaving as if they were calculating p-values. They are no more aware of what a p-value means than you are aware of the equation for a parabolic trajectory when you catch a cricket ball or baseball in the outfield.) Angler fish take advantage of the gullibility of little fish such as gobies. But that is an unfairly value-laden way of putting it. It would be better not to speak of gullibility and say that they exploit the inevitable difficulty the little fish have in steering between type 1 and type 2 errors.


pages: 227 words: 62,177

Numbers Rule Your World: The Hidden Influence of Probability and Statistics on Everything You Do by Kaiser Fung

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

American Society of Civil Engineers: Report Card, Andrew Wiles, Bernie Madoff, Black Swan, call centre, correlation does not imply causation, cross-subsidies, Daniel Kahneman / Amos Tversky, edge city, Emanuel Derman, facts on the ground, fixed income, Gary Taubes, John Snow's cholera map, moral hazard, p-value, pattern recognition, profit motive, Report Card for America’s Infrastructure, statistical model, the scientific method, traveling salesman

The minute probability he computed, one in a quindecillion, is technically known as the p-value and signifies how unlikely the situation was. The smaller the p-value, the more impossible the situation, and the greater its power to refute the no-fraud scenario. Then, statisticians say, the result has statistical significance. Note that this is a matter of magnitude, rather than direction. If the p-value were 20 percent, then there would be a one-in-five chance of seeing at least 200 insider wins in seven years despite absence of fraud, and then Rosenthal would not have sufficient evidence to overturn the fair-lottery hypothesis. Statisticians set a minimum acceptable standard of evidence, which is a p-value of 1 percent or 5 percent. This practice originated with Sir Ronald Fisher, one of the giants of statistical thinking. For a more formal treatment of p-values and statistical significance, look up the topics of hypothesis testing and confidence intervals in a statistics textbook.

. ~###~ In Minnesota, an ambitious experiment was organized to measure how turning off ramp meters on the highway entrances would affect the state of congestion. From the viewpoint of statistical testing, the doubters led by Senator Day wanted to know, if ramp metering was useless, what was the likelihood that the average trip time would rise by 22 percent (the improvement claimed by engineers who run the program) after the meters were shut off? Because this likelihood, or p-value, was small, the consultants who analyzed the experiment concluded that the favorite tool of the traffic engineers was indeed effective at reducing congestion. Since statisticians do not believe in miracles, they avoided the alternative path, which would assert that a rare event—rather than the shutting off of ramp meters—could have produced the deterioration in travel time during the experiment.

See also False negatives; False positives confessions elicited by, 118, 120–21, 125–27, 130 countermeasures, 114, 122 examiner characteristics and role, 113–14 the legal system on, 117–18 major problems with, 129–30 in national-security screening, 96–97, 118, 121–24, 127–30, 175–76 PCASS, 118, 121–24, 127–30, 131, 132, 175 popularity of, 116–18 screening vs. targeted investigation, 123–24 Pre–post analysis, 158–59 Predictably Irrational (Ariely), 158 Prediction of rare events, 124 PulseNet, 31, 41 P-value, 179, 180 Quetelet, Adolphe, 2–3, 4 Queuing theory, 157–58 Quindecillion, 137, 144, 177 Racial/minority groups credit scores and, 52, 54 test fairness and, 64, 65, 70, 72–82, 94, 168–70, 180 Ramp metering, 13–15, 16, 19, 20–24, 157, 158–59, 180–81 Randomization, 170 Rauch, Ernst, 87 Red State, Blue State, Rich State, Poor State (Gelman), 168 Reliability, 10, 12, 14, 19 Riddick, Steve, 105 Riis, Bjarne, 103, 105, 110 Risk Management Solutions, 87 Risk pools, 86–87, 89–94, 168, 171 Rodriguez, Alex, 114 Rodriguez, Ivan, 114 Rolfs, Robert, 36 Rooney, J.


pages: 322 words: 107,576

Bad Science by Ben Goldacre

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Asperger Syndrome, correlation does not imply causation, experimental subject, hygiene hypothesis, Ignaz Semmelweis: hand washing, John Snow's cholera map, Louis Pasteur, meta analysis, meta-analysis, offshore financial centre, p-value, placebo effect, publication bias, Richard Feynman, Richard Feynman, risk tolerance, Ronald Reagan, selection bias, selective serotonin reuptake inhibitor (SSRI), the scientific method, urban planning

In generating his obligatory, spurious, Meadowesque figure—which this time was ‘one in 342 million’—the prosecution’s statistician made a simple, rudimentary mathematical error. He combined individual statistical tests by multiplying p-values, the mathematical description of chance, or statistical significance. This bit’s for the hardcore science nerds, and will be edited out by the publisher, but I intend to write it anyway: you do not just multiply p-values together, you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’. If you multiply p-values together, then harmless and probable incidents rapidly appear vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a harmless incident pattern: say p=0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of 0.5 to the power of twenty, which is p < 0.000001, which is extremely, very, highly statistically significant.
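Goldacre's point is easy to check numerically. Below is a sketch (mine, not his) contrasting naive multiplication of twenty harmless p-values of 0.5 with Fisher's method, which scipy exposes as combine_pvalues:

import numpy as np
from scipy import stats

p_values = np.full(20, 0.5)            # twenty hospitals, each entirely unremarkable

naive = np.prod(p_values)              # wrong: simply multiplying the p-values
stat, combined = stats.combine_pvalues(p_values, method='fisher')

print("multiplied:", naive)                    # about 1e-06, looks 'highly significant'
print("Fisher's method:", round(combined, 2))  # about 0.93, i.e. nothing going on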

Presented with a small increase like this, you have to think: is it statistically significant? I did the maths, and the answer is yes, it is, in that you get a p-value of less than 0.05. What does ‘statistically significant’ mean? It’s just a way of expressing the likelihood that the result you got was attributable merely to chance. Sometimes you might throw ‘heads’ five times in a row, with a completely normal coin, especially if you kept tossing it for long enough. Imagine a jar of 980 blue marbles, and twenty red ones, all mixed up: every now and then—albeit rarely—picking blindfolded, you might pull out three red ones in a row, just by chance. The standard cut-off point for statistical significance is a p-value of 0.05, which is just another way of saying, ‘If I did this experiment a hundred times, I’d expect a spurious positive result on five occasions, just by chance.’

Will our increase in cocaine use, already down from ‘doubled’ to ‘35.7 per cent’, even survive? No. Because there is a final problem with this data: there is so much of it to choose from. There are dozens of data points in the report: on solvents, cigarettes, ketamine, cannabis, and so on. It is standard practice in research that we only accept a finding as significant if it has a p-value of 0.05 or less. But as we said, a p-value of 0.05 means that for every hundred comparisons you do, five will be positive by chance alone. From this report you could have done dozens of comparisons, and some of them would indeed have shown increases in usage—but by chance alone, and the cocaine figure could be one of those. If you roll a pair of dice often enough, you will get a double six three times in a row on many occasions.
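A short calculation (not from the book) shows how quickly chance findings accumulate when a report offers dozens of data points to compare: with k independent tests at the 0.05 level and no real effects anywhere, the chance of at least one 'significant' result is 1 - 0.95^k.

# Probability of at least one spurious 'significant' finding among k independent
# comparisons when every null hypothesis is actually true.
for k in (1, 10, 20, 50):
    p_any = 1 - 0.95 ** k
    print(f"{k:2d} comparisons: {p_any:.0%} chance of at least one false positive")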


pages: 62 words: 14,996

SciPy and NumPy by Eli Bressert

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Debian, Guido van Rossum, p-value

import numpy as np
from scipy import stats

# Generating a normal distribution sample
# with 100 elements
sample = np.random.randn(100)

# normaltest tests the null hypothesis.
out = stats.normaltest(sample)
print('normaltest output')
print('Z-score = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# kstest is the Kolmogorov-Smirnov test for goodness of fit.
# Here its sample is being tested against the normal distribution.
# D is the KS statistic and the closer it is to 0 the better.
out = stats.kstest(sample, 'norm')
print('\nkstest output for the Normal distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# Similarly, this can be easily tested against other distributions,
# like the Wald distribution.
out = stats.kstest(sample, 'wald')
print('\nkstest output for the Wald distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

Researchers commonly use descriptive functions for statistics. Some descriptive functions that are available in the stats package include the geometric mean (gmean), the skewness of a sample (skew), and the frequency of values in a sample (itemfreq).


pages: 119 words: 10,356

Topics in Market Microstructure by Ilija I. Zovko

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Brownian motion, computerized trading, continuous double auction, correlation coefficient, financial intermediation, Gini coefficient, information asymmetry, market design, market friction, market microstructure, Murray Gell-Mann, p-value, quantitative trading / quantitative finance, random walk, stochastic process, stochastic volatility, transaction costs

Significant slope coefficients show that if two institutions’ strategies were correlated in one month, they are likely to be correlated in the next one as well. The table does not contain the off-book market because we cannot reconstruct institution codes for the off-book market in the same way as we can for the on-book market. The ± values are the standard error of the coefficient estimate and the values in the parenthesis are the standard p-values.

On-book market
Stock   Intercept               Slope                R2
AAL     -0.010 ± 0.004 (0.02)   0.25 ± 0.04 (0.00)   0.061
AZN     -0.01 ± 0.003 (0.00)    0.14 ± 0.03 (0.00)   0.019
LLOY    0.003 ± 0.003 (0.28)    0.23 ± 0.02 (0.00)   0.053
VOD     0.008 ± 0.001 (0.00)    0.17 ± 0.01 (0.00)   0.029

does not work for institutions that do not trade frequently5. Therefore, the results reported in this section concern only the on-book market and are based mostly on more active institutions.

Some of the explanatory variables, such as signed trades and signed volume, are strongly correlated. This may lead to instabilities in coefficient estimates for those variables and we need to keep this in mind when interpreting results. The results for the on- and off-book markets, as well as for the daily and hourly returns are collected in table II. Apart from the value of the coefficient, its error and p-value, we list also Rs2 and Rp2. Rs2 is the value of R-square of a regression with only the selected variable, and no others, included. It is equal to the square root of the absolute value of the correlation between the variable and the

[Table II: coefficient, error, p-value, Rs2, and Rp2 for δV (signed volume), δE (entropy), δN (no. firms), and δT (no. signed trades), reported separately for the on-book and off-book markets and for daily and hourly returns; the overall R2 values are 0.28 and 0.32 for the on-book market and 0.00 and 0.07 for the off-book market.]

Table 5.2: Regression results showing the significance of the market imbalance variables on price returns. Columns from left to right are estimated coefficient, its error and in the parenthesis the p-value of the test that the coefficient is zero assuming normal statistics; Rs2 is the value of R2 in a regression where only the selected variable is present in the regression. It expresses how much the variable on its own (solo) explains price returns. Final column Rp2 is the partial R2 of the selected variable. It expresses how much the variable explains price returns above the other three variables.

The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk by William J. Bernstein

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset allocation, backtesting, capital asset pricing model, commoditize, computer age, correlation coefficient, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, fixed income, index arbitrage, index fund, intangible asset, Long Term Capital Management, p-value, passive investing, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, South Sea Bubble, survivorship bias, the rule of 72, the scientific method, time value of money, transaction costs, Vanguard fund, Yogi Berra, zero-coupon bond

In which case he is probably not skilled, since it would not be unusual for 1 of 30 individuals to experience a 1.1% random event. On the other hand, if his performance measured is out of sample—that is, we had picked him alone among his teammates—then he probably is skilled, since we would have only one chance at a 1.1% occurrence in a random batting world. An only slightly more complex formulation is used to evaluate money managers. One has to be extremely careful to distinguish out-of-sample from in-sample performance. One should not be surprised if one picks out the best-performing manager out of 500 and finds that his p value is .001. However, if one identifies him ahead of time, and then his performance p value is .001 after the fact, then he probably is skilled.

Table 6-1. Subsequent Performance of Top Performing Funds, 1970–1998

                          Return 1970–1974   Return 1975–1998
Top 30 funds 1970–1974         0.78%             16.05%
All funds                     −6.12%             16.38%
S&P 500                       −2.35%             17.04%

                          Return 1975–1979   Return 1980–1998
Top 30 funds 1975–1979        35.70%             15.78%
All funds                     20.44%             15.28%
S&P 500                       14.76%             17.67%

                          Return 1980–1984   Return 1985–1998
Top 30 funds 1980–1984        22.51%             16.01%
All funds                     14.83%             15.59%
S&P 500                       14.76%             18.76%

                          Return 1985–1989   Return 1990–1998
Top 30 funds 1985–1989        22.08%             16.24%
All funds                     16.40%             15.28%
S&P 500                       20.41%             17.81%

                          Return 1990–1994   Return 1995–1998
Top 30 funds 1990–1994        18.94%             21.28%
All funds                      9.39%             24.60%
S&P 500                        8.69%             32.18%

SOURCE: DFA/Micropal/Standard and Poor’s.

In other words, in a random world an annual SD of 20 points translates into an SD of 6.3 points over 10 years (0.020/√10 = .0063). The difference between the batter’s performance and the mean is .020, and dividing that by the SE of .0063 gives a “z value” of 3.17. Since we are considering 10 years’ performance, there are 9 “degrees of freedom.” The z value and degrees of freedom are fed into a “t distribution function” on our spreadsheet, and out pops a p value of .011. In other words, in a “random batting” world, there is a 1.1% chance of a given batter averaging .280 over 10 seasons. Whether or not we consider such a batter skilled also depends on whether we are observing him “in sample” or “out of sample.” In sample means that we picked him out of a large number of batters—say, all of his teammates—after the fact. In which case he is probably not skilled, since it would not be unusual for 1 of 30 individuals to experience a 1.1% random event. On the other hand, if his performance measured is out of sample—that is, we had picked him alone among his teammates—then he probably is skilled, since we would have only one chance at a 1.1% occurrence in a random batting world.
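The spreadsheet arithmetic above can be checked in a few lines (a sketch assuming, as the text does, a .020 edge over the league mean, an annual SD of .020, and 10 seasons):

import math
from scipy import stats

edge = 0.020                # batter's average minus the league mean (20 points)
annual_sd = 0.020           # SD of a single season's average
seasons = 10

se = round(annual_sd / math.sqrt(seasons), 4)   # 0.0063, as in the text
t = edge / se                                   # about 3.17
p = 2 * stats.t.sf(t, df=seasons - 1)           # two-sided p-value with 9 degrees of freedom

print(se, round(t, 2), round(p, 3))             # 0.0063 3.17 0.011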

The alpha is the difference between the fund’s performance and that of the regression-determined benchmark and a measure of how well the manager has performed. It is expressed the same way as return, in percent per year, and can be positive or negative. For example, if a manager has an alpha of ⫺4% per year this means that the manager has underperformed the regression-determined benchmark by 4% 90 The Intelligent Asset Allocator annually. Oakmark’s alpha for the first 29 months is truly spectacular, and quite statistically significant, with a p value of .0004. This means that there was less than a 1-in-2000 possibility that the fund’s superb performance in the first 29 months could have been due to chance. Unfortunately, its performance in the last 29-month period was equally impressive, but in the wrong direction. My interpretation of the above data is that Mr. Sanborn is modestly skilled. “Modestly skilled” is not at all derogatory in this context, since 99% of fund managers demonstrate no evidence of skill whatsoever.


pages: 197 words: 35,256

NumPy Cookbook by Ivan Idris

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

business intelligence, cloud computing, computer vision, Debian, en.wikipedia.org, Eratosthenes, mandelbrot fractal, p-value, sorting algorithm, statistical model, transaction costs, web application

Again, we will calculate the log returns of the close price of this stock, and use that as an input for the normality test function. This function returns a tuple containing a second element—a p-value between zero and one. The complete code for this tutorial is as follows:

import datetime
import numpy
from matplotlib import finance
from statsmodels.stats.adnorm import normal_ad
import sys

#1. Download price data
# 2011 to 2012
start = datetime.datetime(2011, 01, 01)
end = datetime.datetime(2012, 01, 01)

print "Retrieving data for", sys.argv[1]
quotes = finance.quotes_historical_yahoo(sys.argv[1], start, end, asobject=True)

close = numpy.array(quotes.close).astype(numpy.float)
print close.shape

print normal_ad(numpy.diff(numpy.log(close)))

The following shows the output of the script with a p-value of 0.13:

Retrieving data for AAPL
(252,)
(0.57103805516803163, 0.13725944999430437)

How it works...

This recipe demonstrated the Anderson-Darling statistical test for normality, as found in scikits-statsmodels. We used the log returns of the stock price data as input. For the data, we got a p-value of 0.13; since this is well above the usual 0.05 cutoff, the test does not reject the hypothesis that the log returns are normally distributed.

Installing scikits-image
scikits-image is a toolkit for image processing, which requires PIL, SciPy, Cython, and NumPy. There are Windows installers available for it. It is part of Enthought Python Distribution, as well as the Python(x, y) distribution.

How to do it...
As usual, we can install using either of the following two commands:

pip install -U scikits-image
easy_install -U scikits-image

Again, you might need to run these commands as root.


pages: 755 words: 121,290

Statistics hacks by Bruce Frey

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Bayesian statistics, Berlin Wall, correlation coefficient, Daniel Kahneman / Amos Tversky, distributed generation, en.wikipedia.org, feminist movement, game design, Hacker Ethic, index card, Milgram experiment, p-value, place-making, RFID, Search for Extraterrestrial Intelligence, SETI@home, Silicon Valley, statistical model, Thomas Bayes

Power In social science research, a statistical analysis frequently determines whether a certain value observed in a sample is likely to have occurred by chance. This process is called a test of significance. Tests of significance produce a p-value (probability value), which is the probability that the sample value could have been drawn from a particular population of interest. The lower the p-value, the more confident we are in our beliefs that we have achieved statistical significance and that our data reveals a relationship that exists not only in our sample but also in the whole population represented by that sample. Usually, a predetermined level of significance is chosen as a standard for what counts. If the eventual p-value is equal to or lower than that predetermined level of significance, then the researcher has achieved a level of significance. Statistical analyses and tests of significance are not limited to identifying relationships among variables, but the most common analyses (t tests, F tests, chi-squares, correlation coefficients, regression equations, etc.) usually serve this purpose.

The power of a statistical test is the probability that, given that there is a relationship among variables in the population, the statistical analysis will result in the decision that a level of significance has been achieved. Notice this is a conditional probability. There must be a relationship in the population to find; otherwise, power has no meaning. Power is not the chance of finding a significant result; it is the chance of finding that relationship if it is there to find. The formula for power contains three components:

Sample size
The predetermined level of significance (p-value) to beat (be less than)
The effect size (the size of the relationship in the population)

Conducting a Power Analysis

Let's say we want to compare two different sample groups and see whether they are different enough that there is likely a real difference in the populations they represent. For example, suppose you want to know whether men or women sleep more. The design is fairly straightforward.
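A quick way to see those three components interact (a sketch, not from the book; the "medium" effect size of 0.5 is just an assumed value) is statsmodels' power calculator for a two-group t test:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Given an assumed effect size and significance level, how many people per group
# are needed to reach 80% power?
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print("participants per group:", round(n_needed))    # about 64

# Conversely, with 30 people per group, what power do we actually have?
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print("power with 30 per group:", round(power, 2))   # about 0.48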

Using this option on the Tools menu, you can test the significance of the regression coefficient using an F test, a statistical test similar to a t test [Hack #17]. The results (a.k.a. the output) are shown in Tables 5-11 and 5-12. Let’s see which of the variables best assist us in predicting whether a team will win the Super Bowl.

Table 5-11. Regression statistics

Multiple R      0.8483
R square        0.7196
Observations    30

Table 5-12. Regression equation

Variable        Coefficient    t Stat    P-value
Intercept       -0.784         -1.010    0.323
Easy wins       0.119          4.274     0.000
Attendance      0.000          -0.822    0.416
Hot dogs sold   0.000          1.043     0.308
Gatorade        0.013          2.457     0.022
Weight          0.001          0.580     0.567

Table 5-12 shows a coefficient (a weight) for each of the five variables that were entered into the equation to test how well each one predicts Super Bowl wins. For example, the coefficient associated with "Easy wins" is .119.


pages: 836 words: 158,284

The 4-Hour Body: An Uncommon Guide to Rapid Fat-Loss, Incredible Sex, and Becoming Superhuman by Timothy Ferriss

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

23andMe, airport security, Albert Einstein, Black Swan, Buckminster Fuller, carbon footprint, cognitive dissonance, Columbine, correlation does not imply causation, Dean Kamen, game design, Gary Taubes, index card, Kevin Kelly, knowledge economy, life extension, lifelogging, Mahatma Gandhi, microbiome, p-value, Parkinson's law, Paul Buchheit, placebo effect, Productivity paradox, publish or perish, Ralph Waldo Emerson, Ray Kurzweil, Richard Feynman, Richard Feynman, selective serotonin reuptake inhibitor (SSRI), Silicon Valley, Silicon Valley startup, Skype, stem cell, Steve Jobs, survivorship bias, Thorstein Veblen, Vilfredo Pareto, wage slave, William of Occam

Let the journals catch up later—you don’t have to wait.

P-Value: One Number to Understand

Statistical thinking will one day be as necessary for effective citizenship as the ability to read and write.
—H. G. Wells, whose science fiction novel The War of the Worlds caused national hysteria when Orson Welles adapted it for radio

British MD and quack buster Ben Goldacre, contributor of the next chapter, is well known for illustrating how people can be fooled by randomness. He uses the following example: If you go to a cocktail party, what’s the likelihood that two people in a group of 23 will share the same birthday? One in 100? One in 50? In fact, it’s one in two. Fifty percent. To become better at spotting randomness for what it is, it’s important to understand the concept of “p-value,” which you’ll see in all good research studies.
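The birthday figure is easy to verify with a few lines (a sketch, not from the book): multiply together the chances that each successive guest misses every birthday already taken, then subtract from one.

def p_shared_birthday(n, days=365):
    # Probability that at least two of n people share a birthday.
    p_all_different = 1.0
    for i in range(n):
        p_all_different *= (days - i) / days
    return 1 - p_all_different

print(round(p_shared_birthday(23), 3))   # about 0.507, i.e. roughly one in two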

To become better at spotting randomness for what it is, it’s important to understand the concept of “p-value,” which you’ll see in all good research studies. It answers the question: how confident are we that this result wasn’t due to random chance? To demonstrate (or imply) cause-and-effect, the gold standard for studies is a p-value of less than 0.05 (p < 0.05), which means a less than 5% likelihood that the result can be attributed to chance. A p-value of less than 0.05 is also what most scientists mean when they say something is “statistically significant.” An example makes this easy to understand. Let’s say you are a professional coin flipper, but you’re unethical. In hopes of dominating the coin-flipping gambling circuit, you’ve engineered a quarter that should come up heads more often than a normal quarter. To test it, you flip it and a normal quarter 100 times, and the results seem clear: the “normal” quarter came up heads 50 times, and your designer quarter came up heads 60 times!
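Running the numbers for that scenario (a sketch; the book does not name a specific test, so a plain chi-square comparison of the two observed proportions is used here) shows why 10 extra heads in 100 flips is not yet convincing:

from scipy import stats

# Designer quarter: 60 heads in 100 flips; normal quarter: 50 heads in 100 flips.
table = [[60, 40],
         [50, 50]]

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3))   # chi2 about 2.02, p about 0.155: well above 0.05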

In other words, you better make sure that 20% holds up with at least 453 flips with each coin. In this case, 10 extra flips out of 100 doesn’t prove cause-and-effect at all. Three points to remember about p-values and “statistical significance”:

• Just because something seems miraculous doesn’t mean it is. People are fooled by randomness all the time, as in the birthday example.
• The larger the difference between groups, the smaller the groups can be. Critics of small trials or self-experimentation often miss this. If something appears to produce a 300% change, you don’t need that many people to show significance, assuming you’re controlling variables.
• It is not kosher to combine p-values from multiple experiments to make something more or less believable. That’s another trick of bad scientists and mistake of uninformed journalists.

TOOLS AND TRICKS

The Black Swan by Nassim Taleb (www.fourhourbody.com/blackswan)
Taleb, also author of the bestseller Fooled by Randomness, is the reigning king when it comes to explaining how we fool ourselves and how we can limit the damage.


pages: 1,065 words: 229,099

Real World Haskell by Bryan O'Sullivan, John Goerzen, Donald Stewart, Donald Bruce Stewart

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bash_history, database schema, Debian, digital map, distributed revision control, domain-specific language, en.wikipedia.org, Firefox, general-purpose programming language, Guido van Rossum, job automation, Larry Wall, p-value, Plutocrats, plutocrats, revision control, sorting algorithm, transfer pricing, type inference, web application, Y Combinator


With this p_series function, parsing an array is simple:

-- file: ch16/JSONParsec.hs
p_array :: CharParser () (JAry JValue)
p_array = JAry <$> p_series '[' p_value ']'

Dealing with a JSON object is hardly more complicated, requiring just a little additional effort to produce a name/value pair for each of the object’s fields:

-- file: ch16/JSONParsec.hs
p_object :: CharParser () (JObj JValue)
p_object = JObj <$> p_series '{' p_field '}'
    where p_field = (,) <$> (p_string <* char ':' <* spaces) <*> p_value

Parsing an individual value is a matter of calling an existing parser, and then wrapping its result with the appropriate JValue constructor:

-- file: ch16/JSONParsec.hs
p_value :: CharParser () JValue
p_value = value <* spaces
  where value = JString <$> p_string
            <|> JNumber <$> p_number
            <|> JObject <$> p_object
            <|> JArray <$> p_array
            <|> JBool <$> p_bool
            <|> JNull <$ string "null"
            <?> "JSON value"

p_bool :: CharParser () Bool
p_bool = True <$ string "true"
     <|> False <$ string "false"

The choice combinator allows us to represent this kind of ladder-of-alternatives as a list. It returns the result of the first parser to succeed:

-- file: ch16/JSONParsec.hs
p_value_choice = value <* spaces
  where value = choice [ JString <$> p_string
                       , JNumber <$> p_number
                       , JObject <$> p_object
                       , JArray <$> p_array
                       , JBool <$> p_bool
                       , JNull <$ string "null"
                       ]
                <?> "JSON value"

This leads us to the two most interesting parsers, for numbers and strings. We’ll deal with numbers first, since they’re simpler:

-- file: ch16/JSONParsec.hs
p_number :: CharParser () Double
p_number = do s <- getInput
              case readSigned readFloat s of
                [(n, s')] -> n <$ setInput s'
                _         -> empty

Our trick here is to take advantage of Haskell’s standard number parsing library functions, which are defined in the Numeric module.

Debtor Nation: The History of America in Red Ink (Politics and Society in Modern America) by Louis Hyman

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset-backed security, bank run, barriers to entry, Bretton Woods, card file, central bank independence, computer age, corporate governance, credit crunch, declining real wages, deindustrialization, diversified portfolio, financial independence, financial innovation, fixed income, Gini coefficient, Home mortgage interest deduction, housing crisis, income inequality, invisible hand, late fees, London Interbank Offered Rate, market fundamentalism, means of production, mortgage debt, mortgage tax deduction, p-value, pattern recognition, profit maximization, profit motive, risk/return, Ronald Reagan, Silicon Valley, statistical model, technology bubble, the built environment, transaction costs, union organizing, white flight, women in the workforce, working poor, zero-sum game

., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA, 6. 46. Kenneth Wells to Guy T.O. Holladay, June 24, 1953, folder “Minority Group Housing – Printed Material, Speeches, Field Letters, Etc., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA. 47. P-value = 0.0001. 48. Once again, race had a p-value of > 0.586. The racial co-efficient, moreover, dropped to only a little over $500. 49. Linear regression with mortgage-having subpopulation for mortgage amount, race (P > 0.586) was not significant, and location (P > 0.006) was. NOTES TO CHAPTER 5 335 50. Pearson test for suburban dummy variable was (P > 0.42). 51. Linear regression of mortgage controlling for race (P > 0.269), location (P > 0.019), federal loan status (P > 0.003), and income (P > 0.000).

The most important statistical advances made since the late 1950s, for the purposes of this analysis, are the ability to adjust for the internal correlation of primary sampling units, logistic regression, and censored normal regressions—all of which are used in this chapter, especially the first two mentioned. In terms of questions, this chapter pays far greater attention to the intersections of race, class, and location than the original published survey, which was mostly a collection of bar graphs and averages. For the less technically inclined reader, explanations of NOTES TO CHAPTER 5 331 some of the statistical methods will be in the notes. For the more technically inclined reader, p-values of relevant tests and regressions have generally been put in the notes. 3. William H. Whyte, “Budgetism: Opiate of the Middle Class,” Fortune (May 1956), 133, 136–37. 4. John Lebor, “Requirements for Profitable Credit Selling,” Credit Management Year Book 1959–1960 (New York: National Retail Dry Goods Association, 1959), 12. 5. Malcolm McNair, “Changing Retail Scene and What Lies Ahead,” National Retail Merchants Association Convention Speech, January 8, 1962, Historical Collections, BAK, 12. 6.

See Melvin Oliver’s Black Wealth, White Wealth: A New Perspective on Racial Inequality (New York, Routledge, 1995), for more on the importance of wealth inequality compared to income inequality today. As discussed later in the chapter, at the same income levels, African Americans always borrowed more frequently than whites and had lower wealth levels. 22. This was determined by running a series of regressions on debt and liquid assets, while controlling for location, mortgage status, marital status, and income. P-values for liquid assets in all models (P > 0.00). For whites, the model had R2 = 0.12 and for whites R2 = 0.41. 23. Odds ratio 5.42 with (P > 0.01) [1.44, 20.41]. 24. A linear regression with a suburban debtor subpopulation shows race (P > 0.248) and liquid assets (P > 0.241) to have no relationship to the amount borrowed unlike mortgage status (P > 0.000) and income (P > 0.013). 25. Suburban dummy variable for black households with (P > 0.02).

Exploring Everyday Things with R and Ruby by Sau Sheong Chang

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Alfred Russel Wallace, bioinformatics, business process, butterfly effect, cloud computing, Craig Reynolds: boids flock, Debian, Edward Lorenz: Chaos theory, Gini coefficient, income inequality, invisible hand, p-value, price stability, Ruby on Rails, Skype, statistical model, stem cell, Stephen Hawking, text mining, The Wealth of Nations by Adam Smith, We are the 99%, web application, wikimedia commons

Without going in depth into the mathematics of this test (which would probably fill up a whole section, if not an entire chapter, on its own), let’s examine the initial population by assuming that the population is normally distributed and running the Shapiro-Wilk test on it:

> data <- read.table("money.csv", header=F, sep=",")
> row <- as.vector(as.matrix(data[1,]))
> row
[1] 56 79 66 74 96 54 91 59 70 95 65 82 64 80 63 68 69 69 72 89 64 53 87 49
[47] 68 66 80 89 57 73 72 82 76 58 57 78 94 73 83 52 75 71 52 57 76 59 63
...
> shapiro.test(row)

        Shapiro-Wilk normality test

data:  row
W = 0.9755, p-value = 0.3806

>

As you can see, the p-value is 0.3806, which (on a scale of 0.0 to 1.0) is not small, and therefore the null hypothesis is not rejected. The null hypothesis is that of no change (i.e., the assumption that the distribution is normal). Strictly speaking, this doesn’t really prove that the distribution is normal, but a visual inspection of the first histogram chart in Figure 8-3 tells us that the likelihood of a normal distribution is high.


pages: 416 words: 39,022

Asset and Risk Management: Risk Oriented Finance by Louis Esch, Robert Kieffer, Thierry Lopez

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset allocation, Brownian motion, business continuity plan, business process, capital asset pricing model, computer age, corporate governance, discrete time, diversified portfolio, fixed income, implied volatility, index fund, interest rate derivative, iterative process, P = NP, p-value, random walk, risk/return, shareholder value, statistical model, stochastic process, transaction costs, value at risk, Wiener process, yield curve, zero-coupon bond

In the same way, the parameter VaR* is calculated simply, for a normal distribution, as VaR*_q = −z_q · σ(p_t). The values of z_q are found in the normal distribution tables.7 A few examples of these values are given in Table 6.2.

Table 6.2 Normal distribution quantiles

q       z_q
0.500   0.0000
0.600   0.2533
0.700   0.5244
0.800   0.8416
0.850   1.0364
0.900   1.2816
0.950   1.6449
0.960   1.7507
0.970   1.8808
0.975   1.9600
0.980   2.0537
0.985   2.1701
0.990   2.3263
0.995   2.5758

6 Jorion P., Value At Risk, McGraw-Hill, 2001.
7 Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 118.

Example
If a security gives an average profit of 100 over the reference period with a standard deviation of 80, we have E(p_t) = 100 and σ(p_t) = 80, which allows us to write:

VaR_0.95 = 100 − (1.6449 × 80) = −31.6
VaR_0.975 = 100 − (1.9600 × 80) = −56.8
VaR_0.99 = 100 − (2.3263 × 80) = −86.1

The loss incurred by this security will only therefore exceed 31.6 (56.8 and 86.1 respectively) five times (2.5 times and once respectively) in 100 times.
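The z_q column and the example's arithmetic can be reproduced with the normal quantile function; a minimal sketch in Python (not from the book):

from scipy import stats

mean_pnl, sd_pnl = 100, 80      # E(p_t) and sigma(p_t) from the example

for q in (0.95, 0.975, 0.99):
    z_q = stats.norm.ppf(q)              # 1.6449, 1.9600, 2.3263
    var_q = mean_pnl - z_q * sd_pnl      # 100 - z_q * 80
    print(f"q = {q}: z_q = {z_q:.4f}, VaR = {var_q:.1f}")   # -31.6, -56.8, -86.1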

[Figure 11.5 Independent allocation]
[Figure 11.6 Joint allocation]

11.4.2 Joint allocation: ‘value’ and ‘growth’ example

As the systematic risk of the portfolio is expressed by its APT factor-sensitivity vector, it can be broken down into the explicative variables ‘growth’ and ‘value’, representing the S&P Value and the S&P Growth (Figure 11.6). One cannot, however, be content with projecting the portfolio risk vector onto each of the variables. In fact, the ‘growth’ and ‘value’ variables are not necessarily independent statistically. They cannot therefore be represented by geometrically orthogonal variables. It is in fact essential to project the portfolio risk vector perpendicularly onto the space of the vectors of the variables.

., Mathematics of Physics and Modern Engineering, McGraw-Hill, 1966.

CHAPTER 6
Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80.
Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105.
Johnson N. L. and Kotz S., Continuous Univariate Distribution, John Wiley & Sons, Inc, 1970.
Jorion P., Value at Risk, McGraw-Hill, 2001.
Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976.

CHAPTER 7
Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972.
Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1995.
Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA, undated.


pages: 506 words: 152,049

The Extended Phenotype: The Long Reach of the Gene by Richard Dawkins

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Alfred Russel Wallace, assortative mating, Douglas Hofstadter, Drosophila, epigenetics, Gödel, Escher, Bach, impulse control, Menlo Park, Necker cube, p-value, phenotype, quantitative trading / quantitative finance, selection bias, stem cell

There is a whole family of ‘mixed strategies’ of the form ‘Dig with probability p, enter with probability 1 – p’, and only one of these is the ESS. I said that the two extremes were joined by a continuum. I meant that the stable population frequency of digging, p* (70 per cent or whatever it is), could be achieved by any of a large number of combinations of pure and mixed individual strategies. There might be a wide distribution of p values in individual nervous systems in the population, including some pure diggers and pure enterers. But, provided the total frequency of digging in the population is equal to the critical value p*, it would still be true that digging and entering were equally successful, and natural selection would not act to change the relative frequency of the two subroutines in the next generation. The population would be in an evolutionarily stable state.
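A tiny numerical sketch (with entirely hypothetical payoffs, not Dawkins's figures) shows what "evolutionarily stable" means here: if each subroutine's payoff depends on how common digging is, p* is the digging frequency at which the two payoffs are equal, so neither digging nor entering gains by becoming more common.

# Hypothetical frequency-dependent payoffs: digging yields a fixed return, while the
# payoff to entering rises with the frequency of diggers (more burrows to take over).
def payoff_dig(p_dig):
    return 5.0                       # assumed constant payoff to digging

def payoff_enter(p_dig):
    return 2.0 + 4.3 * p_dig         # assumed to increase with the frequency of diggers

# Scan digging frequencies from 0 to 1 and find where the two payoffs balance.
best = min((abs(payoff_dig(p / 100) - payoff_enter(p / 100)), p / 100)
           for p in range(101))
print("payoffs are equal at a digging frequency of about", best[1])   # about 0.70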

Classify all individuals into those that entered with a probability less than 0.1, those that entered with a probability between 0.1 and 0.2, those with a probability between 0.2 and 0.3, between 0.3 and 0.4, 0.4 and 0.5, etc. Then compare the lifetime reproductive successes of wasps in the different classes. But supposing we did this, exactly what would the ESS theory predict? A hasty first thought is that those wasps with a p value close to the equilibrium p* should enjoy a higher success score than wasps with some other value of p: the graph of success against p should peak at an ‘optimum’ at p*. But p* is not really an optimum value, it is an evolutionarily stable value. The theory expects that, when p* is achieved in the population as a whole, digging and entering should be equally successful. At equilibrium, therefore, we expect no correlation between a wasp’s digging probability and her success.

The theory gives us no particular reason to expect that there should be any such variation. Indeed, the analogy with sex ratio theory just mentioned gives positive grounds for expecting that wasps should not vary in digging probability. In accordance with this, a statistical test on the actual data revealed no evidence of inter-individual variation in digging tendency. Even if there were some individual variation, the method of comparing the success of individuals with different p values would have been a crude and insensitive one for comparing the success rates of digging and entering. This can be seen by an analogy. An agriculturalist wishes to compare the efficacy of two fertilizers, A and B. He takes ten fields and divides each of them into a large number of small plots. Each plot is treated, at random, with either A or B, and wheat is sown in all the plots of all the fields.


pages: 447 words: 104,258

Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues by Alain Ruttiens

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, asset-backed security, backtesting, banking crisis, Black Swan, Black-Scholes formula, Brownian motion, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discounted cash flows, discrete time, diversification, fixed income, implied volatility, interest rate derivative, interest rate swap, margin call, market microstructure, martingale, p-value, passive investing, quantitative trading / quantitative finance, random walk, risk/return, Satyajit Das, Sharpe ratio, short selling, statistical model, stochastic process, stochastic volatility, time value of money, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-coupon bond

In particular: at A: the portfolio is 100% invested in the risk-free rate; at B: 100% investment in an efficient portfolio of stocks; between A and B: mixed portfolio, invested at x% in the risk-free rate and (1 − x)% in the efficient portfolio of stocks; beyond B: leveraged portfolio, assuming the investor has borrowed money (at the rf rate) and has then invested >100% of his available resources in an efficient portfolio. For a given investor, characterized by some utility function U, representing his well-being, assuming his wealth as a portfolio P, if the portfolio return were certain (i.e., deterministic), we would have but, more realistically (even if simplified, in the spirit of this theory), if the portfolio P value is normally distributed in returns, with some rP and σP, where f is some function, often considered as a quadratic curve.4 So that, given the property of the CML (i.e., tangent to the efficient frontier), and some U = f(P) curve, the optimal portfolio must be located at the tangent of U to CML, determining the adequate proportion between B and risk-free instrument. To illustrate this, let us compare the case of two investors, Investor #1, with utility function U1, being more risk averse than Investor #2, with utility function U2 (see Figure 4.10).
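The passage finds the split between the risk-free asset and the efficient stock portfolio graphically, at the tangency of U with the CML. As a hedged numerical sketch (not the author's calculation), if one takes the common mean-variance special case U = E[r] − (λ/2)σ², the optimal fraction x* held in the efficient portfolio has a closed form; all figures below are hypothetical.

# Mean-variance utility sketch: U = E[r] - 0.5 * lam * variance (one simplified, common choice of U)
rf = 0.02        # risk-free rate (hypothetical)
rP = 0.08        # expected return of the efficient stock portfolio B (hypothetical)
sigmaP = 0.18    # its volatility (hypothetical)

def optimal_weight(lam):
    # fraction of wealth in the efficient portfolio; the remainder (1 - x) earns rf
    return (rP - rf) / (lam * sigmaP ** 2)

for lam in (6.0, 1.5):   # Investor #1 (more risk averse) versus Investor #2
    x = optimal_weight(lam)
    mix_return = rf + x * (rP - rf)
    mix_vol = x * sigmaP
    print(f"lambda={lam}: x*={x:.2f}, E[r]={mix_return:.3f}, sigma={mix_vol:.3f}")
# x* > 1 corresponds to the leveraged region beyond B (borrowing at rf).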

Practically speaking, the number of previous terms of the series (here, arbitrarily, 18 terms) should be optimized, and the parameter a updated for the successive forecasts. Moreover, if the data present irregularities in their succession (changes of trends, mean reversion, etc.), the AR process is unable to incorporate such phenomena and works poorly. The generalized form of the previous case, in order to forecast rt as a function of more than its previous observed value, can be represented as rt = a1rt−1 + a2rt−2 + … + aprt−p + εt. This is called an AR(p) process, involving the previous p values of the series. There is no rule for determining p, provided it is not excessive (by application of the “parsimony principle”). The above relationship looks like a linear regression, but instead of regressing according to a series of independent variables, this regression uses previous values of the dependent variable itself, hence the “autoregression” name.

9.2 THE MOVING AVERAGE (MA) PROCESS

Let us consider a series of returns consisting of pure so-called “random numbers” {εt}, i.i.d., generally distributed following a normal distribution.
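As a minimal sketch of fitting such an AR(p) autoregression by ordinary least squares (an illustration in Python, not the book's code; the series is simulated and p = 3 is an arbitrary choice), the design matrix simply stacks the p previous values of the dependent variable itself:

import numpy as np

rng = np.random.default_rng(0)
# Simulate a return series with some autoregressive structure (purely illustrative).
r = np.zeros(500)
for t in range(2, 500):
    r[t] = 0.5 * r[t - 1] - 0.2 * r[t - 2] + rng.normal(scale=0.01)

p = 3
# Regress r_t on a constant and its own previous p values.
y = r[p:]
X = np.column_stack([np.ones(len(y))] + [r[p - k:-k] for k in range(1, p + 1)])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # [a0, a1, ..., ap]

# One-step-ahead forecast from the last p observations.
forecast = coeffs[0] + coeffs[1:] @ r[-1:-p - 1:-1]
print(np.round(coeffs, 3), round(float(forecast), 5))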


pages: 360 words: 85,321

The Perfect Bet: How Science and Math Are Taking the Luck Out of Gambling by Adam Kucharski

Ada Lovelace, Albert Einstein, Antoine Gombaud: Chevalier de Méré, beat the dealer, Benoit Mandelbrot, butterfly effect, call centre, Chance favours the prepared mind, Claude Shannon: information theory, collateralized debt obligation, correlation does not imply causation, diversification, Edward Lorenz: Chaos theory, Edward Thorp, Everything should be made as simple as possible, Flash crash, Gerolamo Cardano, Henri Poincaré, Hibernia Atlantic: Project Express, if you build it, they will come, invention of the telegraph, Isaac Newton, John Nash: game theory, John von Neumann, locking in a profit, Louis Pasteur, Nash equilibrium, Norbert Wiener, p-value, performance metric, Pierre-Simon Laplace, probability theory / Blaise Pascal / Pierre de Fermat, quantitative trading / quantitative finance, random walk, Richard Feynman, Richard Feynman, Ronald Reagan, Rubik’s Cube, statistical model, The Design of Experiments, Watson beat the top human players on Jeopardy!, zero-sum game

When Pearson compared the length of runs of different colors with the frequencies that he’d expect if the wheels were random, something looked wrong. Runs of two or three of the same color were scarcer than they should have been. And runs of a single color—say, a black sandwiched between two reds—were far too common. Pearson calculated the probability of observing an outcome at least as extreme as this one, assuming that the roulette wheel was truly random. This probability, which he dubbed the p value, was tiny. So small, in fact, that Pearson said that even if he’d been watching the Monte Carlo tables since the start of Earth’s history, he would not have expected to see a result that extreme. He believed it was conclusive evidence that roulette was not a game of chance. The discovery infuriated him. He’d hoped that roulette wheels would be a good source of random data and was angry that his giant casino-shaped laboratory was generating unreliable results.
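To make the kind of comparison Pearson drew concrete, here is a hedged Python sketch (treating the wheel as a fair red/black coin and ignoring the zero, purely for illustration): under randomness, a fraction (1/2)^k of all runs should have length exactly k, and a chi-squared statistic converts the observed run-length counts into a p-value.

import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1)
spins = rng.integers(0, 2, size=20000)        # 0 = red, 1 = black (no zero, for simplicity)

# Split the sequence of colours into runs and record each run's length.
change = np.flatnonzero(np.diff(spins) != 0)
run_lengths = np.diff(np.concatenate(([0], change + 1, [len(spins)])))

# Bucket lengths 1..5 and "6 or more"; the expected share of runs of length k is (1/2)^k.
R = len(run_lengths)
obs = [np.sum(run_lengths == k) for k in range(1, 6)] + [np.sum(run_lengths >= 6)]
exp = [R * 0.5 ** k for k in range(1, 6)] + [R * 0.5 ** 5]

stat, pval = chisquare(obs, exp)
print(obs, [round(e, 1) for e in exp], round(pval, 3))   # a truly random wheel gives unremarkable p-values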

As the ball traveled around the rim a dozen or so times, he gathered enough information to make predictions about where it would land. He only had time to run the experiment twenty-two times before he had to leave the office. Out of these attempts, he predicted the correct number three times. Had he just been making random guesses, the probability he would have got at least this many right (the p value) was less than 2 percent. This persuaded him that the Eudaemons’ strategy worked. It seemed that roulette really could be beaten with physics. Having tested the method by hand, Small and Tse set up a high-speed camera to collect more precise measurements about the ball’s position. The camera took photos of the wheel at a rate of about ninety frames per second. This made it possible to explore what happened after the ball hit a deflector.
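The "3 correct out of 22" figure can be checked directly as a binomial tail probability. A hedged sketch follows; the pocket count is an assumption (37 on a single-zero wheel, 38 with a double zero), since the text does not say which wheel applies:

from scipy.stats import binom

correct, attempts = 3, 22
for pockets in (37, 38):
    p_single = 1 / pockets                                 # chance a random guess hits the winning number
    p_value = binom.sf(correct - 1, attempts, p_single)    # P(at least 3 correct by luck alone)
    print(pockets, round(p_value, 4))
# Roughly 0.02 either way, in the ballpark of the "less than 2 percent" quoted in the text.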

Social Capital and Civil Society by Francis Fukuyama

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Berlin Wall, blue-collar work, Fall of the Berlin Wall, feminist movement, Francis Fukuyama: the end of history, George Akerlof, German hyperinflation, Jane Jacobs, Joseph Schumpeter, Kevin Kelly, labor-force participation, low skilled workers, p-value, Pareto efficiency, postindustrial economy, principal–agent problem, RAND corporation, Silicon Valley, The Death and Life of Great American Cities, transaction costs, World Values Survey

However, it is possible for a group to have an rp coefficient larger than 1. To take the earlier example of the religious sect that encourages honesty and reliability, if these traits are demanded of its members not just in their dealings with other members of the sect but generally in their dealings with other people, then there will be a positive spillover effect into the larger society. Again, Weber argued in effect that sectarian Puritans had an rp value greater than 1. The final factor affecting a society’s supply of social capital concerns not the internal cohesiveness of groups, but rather the way in which they relate to outsiders. Strong moral bonds within a group in some cases may actually serve to decrease the degree to which members of that group are able to trust outsiders and work effectively with them. A highly disciplined, well-organized group sharing strong common values may be capable of highly coordinated collective action, and yet may nonetheless be a social liability.


pages: 1,038 words: 137,468

JavaScript Cookbook by Shelley Powers

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Firefox, Google Chrome, hypertext link, p-value, semantic web, web application, WebSocket

DOCTYPE html>
<html dir="ltr" lang="en-US">
<head>
<title>Comparing Cookies and sessionStorage</title>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
<style>
div { background-color: #ffff00; margin: 5px; width: 100px; padding: 1px; }
</style>
<script>
window.onload=function() {
   document.getElementById("set").onclick=setData;
   document.getElementById("get").onclick=getData;
   document.getElementById("erase").onclick=removeData;
}

// set data for both session and cookie
function setData() {
   var key = document.getElementById("key").value;
   var value = document.getElementById("value").value;

   // set sessionStorage
   var current = sessionStorage.getItem(key);
   if (current) {
      current+=value;
   } else {
      current=value;
   }
   sessionStorage.setItem(key,current);

   // set cookie
   current = getCookie(key);
   if (current) {
      current+=value;
   } else {
      current=value;
   }
   setCookie(key,current);
}

function getData() {
   try {
      var key = document.getElementById("key").value;

      // sessionStorage
      var value = sessionStorage.getItem(key);
      if (!value) value ="";
      document.getElementById("sessionstr").innerHTML="<p>" + value + "</p>";

      // cookie
      value = getCookie(key);
      if (!value) value="";
      document.getElementById("cookiestr").innerHTML="<p>" + value + "</p>";
   } catch(e) {
      alert(e);
   }
}

function removeData() {
   var key = document.getElementById("key").value;

   // sessionStorage
   sessionStorage.removeItem(key);

   // cookie
   eraseCookie(key);
}

// set session cookie
function setCookie(cookie,value) {
   var tmp=cookie + "=" + encodeURI(value) + ";path=/";
   document.cookie=tmp;
}

// each cookie separated by semicolon;
function getCookie(key) {
   var cookie = document.cookie;
   var first = cookie.indexOf(key+"=");

   // cookie exists
   if (first >= 0) {
      var str = cookie.substring(first,cookie.length);
      var last = str.indexOf(";");

      // if last cookie
      if (last < 0) last = str.length;

      // get cookie value
      str = str.substring(0,last).split("=");
      return decodeURI(str[1]);
   } else {
      return null;
   }
}

// set cookie date to the past to erase
function eraseCookie (key) {
   var cookieDate = new Date();
   cookieDate.setDate(cookieDate.getDate() - 10);
   var tmp=key + "= ; expires="+cookieDate.toGMTString()+"; path=/";
   document.cookie=tmp;
}
</script>
</head>
<body>
<form>
<label for="key"> Enter key:</label> <input type="text" id="key" />
<br /> <br />
<label for="value">Enter value:</label> <input type="text" id="value" /><br /><br />
</form>
<button id="set">Set data</button>
<button id="get">Get data</button>
<button id="erase">Erase data</button>
<div id="sessionstr"><p></p></div>
<div id="cookiestr"><p></p></div>
</body>

Load the example page (it’s in the book examples) in Firefox 3.5 and up.


pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K

Semantic similarities within and across columns of the table seem to be at the same level of strength; however, an objective measure would be necessary to quantify this impression. How can we estimate the statistical significance of cooccurrence of the same words in top portions of two lists in each row of Table 2? Here is one easy way to estimate p-values from above. Given the size of the English core, and assuming that each French-to-English translation is a “blind shot” into the English core (null-hypothesis), we can estimate the probability to find one and the same word in top-twelve portions of both lists: p ~ 2*12*12 / 8,236 = 0.035 (we included the factor 2, because there are two possible ways of aligning the lists with respect to each other4). Therefore, the p-value of the case of word repetition that we see in Table 2 is smaller than 0.035, at least. In conclusion, we have found significant correlations among sorted lists across languages for each of the three PCs.
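A quick Monte Carlo check of that back-of-the-envelope bound (an illustration only, not the authors' code): draw two "blind-shot" lists of 12 items each from a vocabulary of 8,236 and see how often they share at least one word. The empirical rate should sit below the stated conservative estimate of 0.035.

import numpy as np

rng = np.random.default_rng(0)
N, k, trials = 8236, 12, 100000

hits = 0
for _ in range(trials):
    a = rng.choice(N, size=k, replace=False)   # a random top-12 list
    b = rng.choice(N, size=k, replace=False)   # another independent top-12 list
    if np.intersect1d(a, b).size > 0:          # the two lists share at least one word
        hits += 1

print(hits / trials)        # about 12*12/8236, i.e. roughly 0.017
print(2 * k * k / N)        # the paper's estimate from above, ~0.035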


pages: 436 words: 123,488

Overdosed America: The Broken Promise of American Medicine by John Abramson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

germ theory of disease, Louis Pasteur, medical malpractice, medical residency, meta analysis, meta-analysis, p-value, placebo effect, profit maximization, profit motive, publication bias, RAND corporation, randomized controlled trial, selective serotonin reuptake inhibitor (SSRI), stem cell, Thomas Kuhn: the structure of scientific revolutions

But then, rather than addressing these serious complications, the authors dismissed them with a most unusual statement: “The difference in major cardiovascular events in the VIGOR trial [of Vioxx] may reflect the play of chance” (italics mine) because “the number of cardiovascular events was small (less than 70).” The comment that a statistically significant finding “may reflect the play of chance” struck me as very odd. Surely the experts who wrote the review article knew that the whole purpose of doing statistics is to determine the degree of probability and the role of chance. Anyone who has taken Statistics 101 knows that p values of .05 or less (p < .05) are considered statistically significant. In this case it means that if the VIGOR study were repeated 100 times, more than 95 of those trials would show that the people who took Vioxx had at least twice as many heart attacks, strokes, and death from any cardiovascular event than the people who took naproxen. And in more than 99 out of those 100 studies, the people who took Vioxx would have at least four times as many heart attacks as the people who took naproxen.

*The standard way to determine whether a treatment has a significant effect is to calculate the probability that the observed difference in outcome (improvement or side effect) between the patients in the group that received the new treatment and the group that received the old treatment (or placebo) would have happened by chance if, in fact, the treatment really had no effect whatsoever. The conventional cutoff for determining statistical significance is a probability (p) of the observed difference between the groups occurring purely by chance less than 5 times out of 100 trials, or p < .05. This translates to: “the probability that this difference will occur at random is less than 5 chances in 100 trials.” The smaller the p value, the less likely it is that the difference between the groups happened by chance, and therefore the stronger—i.e., the more statistically significant—the finding. *The blood levels of all three kinds of cholesterol (total, LDL, and HDL) are expressed as “mg/dL,” meaning the number of milligrams of cholesterol present in one-tenth of a liter of serum (the clear liquid that remains after the cells have been removed from a blood sample).
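The footnote above describes the conventional p < .05 cutoff in words; as a hedged sketch of how such a number is computed for a two-group comparison, Fisher's exact test on a 2×2 table gives the probability of seeing a split at least as lopsided by chance. The counts below are entirely hypothetical and are not the VIGOR data.

from scipy.stats import fisher_exact

# Hypothetical counts for illustration only: [events, no events] in each treatment group.
new_treatment = [40, 1960]
old_treatment = [20, 1980]

odds_ratio, p_value = fisher_exact([new_treatment, old_treatment])
print(round(odds_ratio, 2), round(p_value, 4))
# p < .05 would be labelled statistically significant under the conventional cutoff described above.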

Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernie Chan

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, automated trading system, backtesting, Black Swan, Brownian motion, business continuity plan, compound rate of return, Edward Thorp, Elliott wave, endowment effect, fixed income, general-purpose programming language, index fund, John Markoff, Long Term Capital Management, loss aversion, p-value, paper trading, price discovery process, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Ray Kurzweil, Renaissance Technologies, risk-adjusted returns, Sharpe ratio, short selling, statistical arbitrage, statistical model, survivorship bias, systematic trading, transaction costs

The following code fragment, however, tests for correlation between the two time series:

% A test for correlation.
dailyReturns=(adjcls-lag1(adjcls))./lag1(adjcls);
[R,P]=corrcoef(dailyReturns(2:end,:));
% R =
%     1.0000    0.4849
%     0.4849    1.0000
% P =
%     1     0
%     0     1
% The P value of 0 indicates that the two time series
% are significantly correlated.

Stationarity is not limited to the spread between stocks: it can also be found in certain currency rates. For example, the Canadian dollar/Australian dollar (CAD/AUD) cross-currency rate is quite stationary, both being commodities currencies. Numerous pairs of futures as well as fixed-income instruments can be found to be cointegrating as well.
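The same check translates directly outside MATLAB; as a small hedged sketch in Python (simulated return series, not the book's data), scipy's pearsonr returns both the correlation coefficient and the p-value for the null of zero correlation:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
# Two simulated daily return series with a common component (illustrative only).
common = rng.normal(scale=0.01, size=1000)
ret_a = common + rng.normal(scale=0.01, size=1000)
ret_b = common + rng.normal(scale=0.01, size=1000)

r, p = pearsonr(ret_a, ret_b)
print(round(r, 4), p)   # a p-value near 0 says the correlation is statistically significant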

Meghnad Desai Marxian economic theory by Unknown

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

commoditize, Corn Laws, full employment, land reform, means of production, p-value, price mechanism, profit motive

C' Commodity, usually output commodity; also called commodity capital.
L Labour; labour power as sold by the labourer and labour as expended during production.
MP Materials of production. L and MP together comprise C, which is the same as
P Productive capital.
c The difference between C' and C.
C Constant capital.
V Variable capital.
S Surplus value.
r = S/V Rate of surplus value.
g = C/(C+V) Organic composition of capital.
P (Value) rate of profit.
P Rate of profit (ambiguous as to whether money or value).
p (Money) rate of profit.
Y1 The value of output of Department I.
Y2 The value of output of Department II.
Y Total value of output.
P1 Price of the commodity produced by Department I.
P2 Price of the commodity produced by Department II.
P3 Price of the commodity produced by Department III.
R Total Profit.
In general subscript i stands for the ith Department; hence C1 is the true value of constant capital used in Department I.
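A small worked example of those ratios may help (hypothetical magnitudes; the value rate of profit is taken here as S/(C + V), a standard Marxian definition that the glossary itself does not spell out):

# Hypothetical magnitudes for one Department (illustration only).
C = 400.0   # constant capital
V = 100.0   # variable capital
S = 100.0   # surplus value

r = S / V            # rate of surplus value
g = C / (C + V)      # organic composition of capital
P = S / (C + V)      # (value) rate of profit, assuming the usual S/(C+V) definition

print(r, round(g, 2), round(P, 2))   # 1.0, 0.8, 0.2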


pages: 263 words: 75,455

Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors by Wesley R. Gray, Tobias E. Carlisle

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

activist fund / activist shareholder / activist investor, Albert Einstein, Andrei Shleifer, asset allocation, Atul Gawande, backtesting, beat the dealer, Black Swan, capital asset pricing model, Checklist Manifesto, cognitive bias, compound rate of return, corporate governance, correlation coefficient, credit crunch, Daniel Kahneman / Amos Tversky, discounted cash flows, Edward Thorp, Eugene Fama: efficient market hypothesis, forensic accounting, hindsight bias, intangible asset, Louis Bachelier, p-value, passive investing, performance metric, quantitative hedge fund, random walk, Richard Thaler, risk-adjusted returns, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical model, survivorship bias, systematic trading, The Myth of the Rational Market, time value of money, transaction costs

We control for general market risk using the capital asset pricing model2; we adjust for market, size, and value exposures with the Fama and French three-factor model3; we account for momentum using the four-factor model4; and, finally, we account for liquidity by adding the Lubos Pastor and Robert Stambaugh market-wide liquidity factor to create the comprehensive five-factor model.5 Figures 12.10(a) and (b) confirm that the Quantitative Value strategy consistently generates alpha on rolling 5- and 10-year bases, regardless of the model we choose to inspect. On a rolling 5-year basis there are only a few short instances where the strategy's performance does not add value after controlling for risk. The 10-year rolling chart tells the story vividly: over the long-term, Quantitative Value has consistently created value for investors. Table 12.5 shows the full sample coefficient estimates for the four asset-pricing models. We set out P-values below each estimate and represent the probability of seeing the estimate given the null hypothesis is zero. MKT-RF represents the excess return on the market-weight returns of all New York Stock Exchange (NYSE)/American Stock Exchange (AMEX)/Nasdaq stocks. SMB is a long/short factor portfolio that captures exposures to small capitalization stocks. HML is a long/short factor portfolio that controls for exposure to high book value-to-market capitalization stocks.
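As a hedged sketch of what such a factor regression looks like in code (simulated data and a three-factor example, not the authors' backtest), the alpha is the intercept of an OLS regression of excess strategy returns on the factor returns, and the p-values reported under each coefficient come straight from the fitted model:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 600                                            # months of simulated data (illustrative only)
factors = rng.normal(scale=0.04, size=(T, 3))      # stand-ins for MKT-RF, SMB, HML
true_betas = np.array([0.9, 0.3, 0.5])
excess_ret = 0.003 + factors @ true_betas + rng.normal(scale=0.02, size=T)

X = sm.add_constant(factors)                       # the intercept plays the role of alpha
fit = sm.OLS(excess_ret, X).fit()
print(np.round(fit.params, 4))                     # [alpha, beta_MKT, beta_SMB, beta_HML]
print(np.round(fit.pvalues, 4))                    # p-values under the null that each coefficient is zero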


pages: 242 words: 68,019

Why Information Grows: The Evolution of Order, From Atoms to Economies by Cesar Hidalgo

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Ada Lovelace, Albert Einstein, Arthur Eddington, assortative mating, Claude Shannon: information theory, David Ricardo: comparative advantage, Douglas Hofstadter, Everything should be made as simple as possible, frictionless, frictionless market, George Akerlof, Gödel, Escher, Bach, income inequality, income per capita, industrial cluster, information asymmetry, invention of the telegraph, invisible hand, Isaac Newton, James Watt: steam engine, Jane Jacobs, job satisfaction, John von Neumann, New Economic Geography, Norbert Wiener, p-value, Paul Samuelson, phenotype, price mechanism, Richard Florida, Ronald Coase, Rubik’s Cube, Silicon Valley, Simon Kuznets, Skype, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, The Market for Lemons, The Nature of the Firm, The Wealth of Nations by Adam Smith, total factor productivity, transaction costs, working-age population

Here we consider a country to be an exporter of a product if its per capita exports of that product are at least 25 percent of the world’s average per capita exports of that product. This allows us to control for the size of the product’s global market and the size of the country’s population. 5. In the case of Honduras and Argentina the probability of the observed overlap (what is known academically as its p-value) is 4.4 × 10⁻⁴. The same probability is 2 × 10⁻² for the overlap observed between Honduras and the Netherlands and 4 × 10⁻³ for the overlap observed between Argentina and the Netherlands. 6. César A. Hidalgo and Ricardo Hausmann, “The Building Blocks of Economic Complexity,” Proceedings of the National Academy of Sciences 106, no. 26 (2009): 10570–10575. 7. The idea of related varieties is popular in the literature of regional economic development and strategic management.
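Note 5 does not spell out the null model behind those overlap probabilities; one standard way to obtain such a number is a hypergeometric tail, sketched below with entirely hypothetical basket sizes (the product counts and the overlap are made up for illustration):

from scipy.stats import hypergeom

# Hypothetical inputs: a world of 750 product categories; country A exports 60 of them,
# country B exports 45, and the two baskets overlap in 18 products.
total_products = 750
exported_by_a = 60
exported_by_b = 45
observed_overlap = 18

# P(overlap >= observed) if B's basket were drawn at random from the product space.
p_value = hypergeom.sf(observed_overlap - 1, total_products, exported_by_a, exported_by_b)
print(p_value)   # far below 0.05: an overlap this large is unlikely to arise by chance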


pages: 245 words: 12,162

In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation by William J. Cook

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

complexity theory, computer age, Computer Numeric Control, four colour theorem, index card, John von Neumann, linear programming, NP-complete, p-value, RAND corporation, Richard Feynman, Richard Feynman, traveling salesman, Turing machine

To compute this value, we find the minimum of the four sums

trip({2, 3, 4, 5}, 2) + cost(2,6)
trip({2, 3, 4, 5}, 3) + cost(3,6)
trip({2, 3, 4, 5}, 4) + cost(4,6)
trip({2, 3, 4, 5}, 5) + cost(5,6)

corresponding to the possible choices for the next-to-last city in the subpath from 1 to 6, that is, we optimally travel to the next-to-last city then travel over to city 6. This construction of a five-city trip value from several four-city values is the heart of the Held-Karp method. The algorithm proceeds as follows. We first compute all one-city values: these are easy, for example, trip({2}, 2) is just cost(1, 2). Next, we use the one-city values to compute all two-city values. Then we use the two-city values to compute all three-city values, and on up the line. When we finally get to the (n − 1)-city values, we can read off the cost of an optimal tour: it is the minimum of the sums

trip({2, 3, ..., n}, 2) + cost(2,1)
trip({2, 3, ..., n}, 3) + cost(3,1)
···
trip({2, 3, ..., n}, n) + cost(n,1)

where the cost term accounts for the return trip back to city 1.
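For concreteness, here is a compact Python sketch of the Held-Karp recursion just described (an illustration, not code from the book; cities are 0-indexed, with city 0 playing the role of city 1):

from itertools import combinations

def held_karp(cost):
    """Exact TSP tour cost by dynamic programming over subsets (O(2^n * n^2))."""
    n = len(cost)
    # trip[(S, j)]: cheapest path from city 0 to city j visiting exactly the cities in S.
    trip = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                # choose the best next-to-last city k, exactly as in the four sums above
                trip[(S, j)] = min(trip[(S - {j}, k)] + cost[k][j] for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(trip[(full, j)] + cost[j][0] for j in range(1, n))   # add the return leg to city 0

# Tiny symmetric example with hypothetical distances.
cost = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(cost))   # 23, e.g. the tour 0-1-3-2-0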


pages: 1,088 words: 228,743

Expected Returns: An Investor's Guide to Harvesting Market Rewards by Antti Ilmanen

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Andrei Shleifer, asset allocation, asset-backed security, availability heuristic, backtesting, balance sheet recession, bank run, banking crisis, barriers to entry, Bernie Madoff, Black Swan, Bretton Woods, buy low sell high, capital asset pricing model, capital controls, Carmen Reinhart, central bank independence, collateralized debt obligation, commoditize, commodity trading advisor, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, debt deflation, deglobalization, delta neutral, demand response, discounted cash flows, disintermediation, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, fiat currency, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, framing effect, frictionless, frictionless market, George Akerlof, global reserve currency, Google Earth, high net worth, hindsight bias, Hyman Minsky, implied volatility, income inequality, incomplete markets, index fund, inflation targeting, information asymmetry, interest rate swap, invisible hand, Kenneth Rogoff, laissez-faire capitalism, law of one price, Long Term Capital Management, loss aversion, margin call, market bubble, market clearing, market friction, market fundamentalism, market microstructure, mental accounting, merger arbitrage, mittelstand, moral hazard, Myron Scholes, negative equity, New Journalism, oil shock, p-value, passive investing, Paul Samuelson, performance metric, Ponzi scheme, prediction markets, price anchoring, price stability, principal–agent problem, private sector deleveraging, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Robert Shiller, Robert Shiller, savings glut, selection bias, Sharpe ratio, short selling, sovereign wealth fund, statistical arbitrage, statistical model, stochastic volatility, survivorship bias, systematic trading, The Great Moderation, The Myth of the Rational Market, too big to fail, transaction costs, tulip mania, value at risk, volatility arbitrage, volatility smile, working-age population, Y2K, yield curve, zero-coupon bond, zero-sum game

Given that long-term Treasury yields are below 4%, few observers would extrapolate the realized 4.7% average bond returns into the future. Similar considerations suggest that we might reduce the CPI and D/P components for equities. The fourth column shows that using 2.3% CPI (consensus forecast for long-term inflation) and 2.0% D/P, a forward-looking measure predicts only 5.6% nominal equity returns for the long term. Admittedly the D/P value could be raised if we use a broader carry measure including net share buybacks, so I add 0.75% to the estimate (and call it “D/P+”). Even more bullish return forecasts than 6.4% would have to rely on growth optimism (beyond the historical 1.3% rate of real earnings-per-share growth) or expected further P/E expansion in the coming decades (my analysis assumes none). More generally, these building blocks give us a useful framework for debating the key components of future equity returns.
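The arithmetic behind those building blocks is just a sum, which the short sketch below makes explicit (the inputs are the figures quoted in the passage; no change in valuation multiples is assumed):

# Building-block estimate of long-run nominal equity returns (figures from the passage).
cpi = 0.023              # consensus long-term inflation forecast
dividend_yield = 0.020   # D/P
buybacks = 0.0075        # broader carry adjustment ("D/P+")
real_eps_growth = 0.013  # historical real earnings-per-share growth
pe_expansion = 0.0       # no assumed P/E expansion

base = cpi + dividend_yield + real_eps_growth + pe_expansion
with_buybacks = base + buybacks
print(base, with_buybacks)   # about 0.056 and 0.0635, i.e. the 5.6% and ~6.4% in the text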

Should long and short portfolios have simply equal nominal amounts, equal return volatilities, or equal betas? One crucial question is whether persistent industry sector biases should be allowed or whether sector neutrality should be pursued. Sector neutrality. Practitioner studies highlight the empirical benefits of sector-neutral approaches. Yet, academic studies and many popular investment products (FF and LSV, MSCI-Barra and S&P value/growth indices, and the RAFI fundamental index) do nothing to impose sector neutrality. Without any such adjustments, persistent industry concentrations are possible in the long–short portfolio. For example, in early 2008, the long (value) portfolio heavily overweighted finance stocks while the short (growth) portfolio overweighted energy stocks. Such sector biases may or may not boost average returns but they pretty clearly impair value portfolio diversification and thus raise its volatility.

Monte Carlo Simulation and Finance by Don L. McLeish

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Black-Scholes formula, Brownian motion, capital asset pricing model, compound rate of return, discrete time, distributed generation, finite state, frictionless, frictionless market, implied volatility, incomplete markets, invention of the printing press, martingale, p-value, random walk, Sharpe ratio, short selling, stochastic process, stochastic volatility, survivorship bias, the market place, transaction costs, value at risk, Wiener process, zero-coupon bond, zero-sum game

Evaluate the Chi-squared statistic χ²obs for a test that these points are independent uniform on the cube where we divide the cube into 8 subcubes, each having sides of length 1/2. Carry out the test by finding P[χ² > χ²obs] where χ² is a random chi-squared variate with the appropriate number of degrees of freedom. This quantity P[χ² > χ²obs] is usually referred to as the “significance probability” or “p-value” for the test. If we suspected too much uniformity to be consistent with the assumption of independent uniform points, we might use the other tail of the test, i.e. evaluate P[χ² < χ²obs]. Do so and comment on your results.
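A hedged sketch of that exercise in Python (the point count and the use of pseudo-random points are arbitrary choices; with 8 equal subcubes the statistic has 7 degrees of freedom):

import numpy as np
from scipy.stats import chisquare, chi2

rng = np.random.default_rng(4)
points = rng.random((800, 3))                     # 800 points in the unit cube

# Assign each point to one of the 8 subcubes of side 1/2.
octant = (points[:, 0] > 0.5) * 4 + (points[:, 1] > 0.5) * 2 + (points[:, 2] > 0.5) * 1
observed = np.bincount(octant.astype(int), minlength=8)

stat, upper_tail_p = chisquare(observed)          # expected counts are equal; df = 8 - 1 = 7
lower_tail_p = chi2.cdf(stat, df=7)               # P[chi2 < chi2_obs]: the "too uniform" tail
print(observed, round(stat, 2), round(upper_tail_p, 3), round(lower_tail_p, 3))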


pages: 354 words: 26,550

High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems by Irene Aldridge

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, asset-backed security, automated trading system, backtesting, Black Swan, Brownian motion, business process, capital asset pricing model, centralized clearinghouse, collapse of Lehman Brothers, collateralized debt obligation, collective bargaining, computerized trading, diversification, equity premium, fault tolerance, financial intermediation, fixed income, high net worth, implied volatility, index arbitrage, information asymmetry, interest rate swap, inventory management, law of one price, Long Term Capital Management, Louis Bachelier, margin call, market friction, market microstructure, martingale, Myron Scholes, New Journalism, p-value, paper trading, performance metric, profit motive, purchasing power parity, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, Small Order Execution System, statistical arbitrage, statistical model, stochastic process, stochastic volatility, systematic trading, trade route, transaction costs, value at risk, yield curve, zero-sum game

Table 4.5 reports summary statistics for EUR/USD order flows observed by Citibank and sampled at the weekly frequency between January 1993 and July 1999: A) statistics for weekly EUR/USD order flow aggregated across Citibank’s corporate, trading, and investing customers; and B) order flows from end-user segments cumulated over a week. The last four columns on the right report autocorrelations ρi at lag i and p-values for the null that ρi = 0. The summary statistics on the order flow data are from Evans and Lyons (2007), who define order flow as the total value of EUR/USD purchases (in USD millions) initiated against Citibank’s quotes.

TABLE 4.4 Daily Dollar Volume in Most Active Foreign Exchange Products on CME Electronic Trading (Globex) on 6/12/2009, Computed as Average Price Times Total Contract Volume Reported by CME

Currency              Futures Daily Volume (USD thousands)   Mini-Futures Daily Volume (USD thousands)
Australian Dollar      5,389.8                               N/A
British Pound         17,575.6                               N/A
Canadian Dollar        6,988.1                               N/A
Euro                  32,037.9                               525.3
Japanese Yen           8,371.5                               396.2
New Zealand Dollar       426.5                               N/A
Swiss Franc            4,180.6                               N/A

TABLE 4.5 Summary statistics for the weekly EUR/USD order flows: maximum, minimum, skewness, kurtosis, and autocorrelations at lags 1, 2, 4, and 8 (p-values in parentheses).

*Skewness of order flows measures whether the flows skew toward either the positive or the negative side of their mean, and kurtosis indicates the likelihood of extremely large or small order flows.
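To show how autocorrelations and their p-values of this kind are computed, here is a minimal sketch under the usual white-noise null (sample autocorrelation at lag k with standard error roughly 1/sqrt(n)); the series is simulated, not the Citibank order-flow data:

import numpy as np
from scipy.stats import norm

def acf_with_pvalues(x, lags=(1, 2, 4, 8)):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    results = []
    for k in lags:
        rho = np.dot(x[:-k], x[k:]) / np.dot(x, x)   # sample autocorrelation at lag k
        z = rho * np.sqrt(n)                         # under H0: rho_k = 0, SE is about 1/sqrt(n)
        results.append((k, round(rho, 3), round(2 * norm.sf(abs(z)), 3)))
    return results

rng = np.random.default_rng(5)
simulated_flow = rng.normal(size=340)                # roughly weekly data over several years
print(acf_with_pvalues(simulated_flow))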


pages: 292 words: 85,151

Exponential Organizations: Why New Organizations Are Ten Times Better, Faster, and Cheaper Than Yours (And What to Do About It) by Salim Ismail, Yuri van Geest

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

23andMe, 3D printing, Airbnb, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, bioinformatics, bitcoin, Black Swan, blockchain, Burning Man, business intelligence, business process, call centre, chief data officer, Chris Wanstrath, Clayton Christensen, clean water, cloud computing, cognitive bias, collaborative consumption, collaborative economy, commoditize, corporate social responsibility, cross-subsidies, crowdsourcing, cryptocurrency, dark matter, Dean Kamen, dematerialisation, discounted cash flows, distributed ledger, Edward Snowden, Elon Musk, en.wikipedia.org, ethereum blockchain, Galaxy Zoo, game design, Google Glasses, Google Hangouts, Google X / Alphabet X, gravity well, hiring and firing, Hyperloop, industrial robot, Innovator's Dilemma, intangible asset, Internet of things, Iridium satellite, Isaac Newton, Jeff Bezos, Kevin Kelly, Kickstarter, knowledge worker, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, lifelogging, loose coupling, loss aversion, Lyft, Marc Andreessen, Mark Zuckerberg, market design, means of production, minimum viable product, natural language processing, Netflix Prize, Network effects, new economy, Oculus Rift, offshore financial centre, p-value, PageRank, pattern recognition, Paul Graham, peer-to-peer, peer-to-peer model, Peter H. Diamandis: Planetary Resources, Peter Thiel, prediction markets, profit motive, publish or perish, Ray Kurzweil, recommendation engine, RFID, ride hailing / ride sharing, risk tolerance, Ronald Coase, Second Machine Age, self-driving car, sharing economy, Silicon Valley, skunkworks, Skype, smart contracts, Snapchat, social software, software is eating the world, speech recognition, stealth mode startup, Stephen Hawking, Steve Jobs, subscription business, supply-chain management, TaskRabbit, telepresence, telepresence robot, Tony Hsieh, transaction costs, Tyler Cowen: Great Stagnation, urban planning, WikiLeaks, winner-take-all economy, X Prize, Y Combinator, zero-sum game

As a result, organizations are not only much more agile, they are also better at learning and unlearning due to the diversity and volume of a flexible workforce. Ideas are also able to circulate much faster.

Why Important? • Increase loyalty to ExO • Drives exponential growth • Validates new ideas, and learning • Allows agility and rapid implementation • Amplifies ideation

Dependencies or Prerequisites • MTP • Engagement • Authentic and transparent leadership • Low threshold to participate • P2P value creation

Algorithms

In 2002, Google’s revenues were less than a half-billion dollars. Ten years later, its revenues had jumped 125x and the company was generating a half-billion dollars every three days. At the heart of this staggering growth was the PageRank algorithm, which ranks the popularity of web pages. (Google doesn’t gauge which page is better from a human perspective; its algorithms simply respond to the pages that deliver the most clicks.)


pages: 305 words: 89,103

Scarcity: The True Cost of Not Having Enough by Sendhil Mullainathan

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

American Society of Civil Engineers: Report Card, Andrei Shleifer, Cass Sunstein, clean water, computer vision, delayed gratification, double entry bookkeeping, Exxon Valdez, fault tolerance, happiness index / gross national happiness, impulse control, indoor plumbing, inventory management, knowledge worker, late fees, linear programming, mental accounting, microcredit, p-value, payday loans, purchasing power parity, randomized controlled trial, Report Card for America’s Infrastructure, Richard Thaler, Saturday Night Live, Walter Mischel, Yogi Berra

R. Flynn, “Massive IQ Gains in 14 Nations: What IQ Tests Really Measure,” Psychological Bulletin 101 (1987): 171–91. A forceful case for environmental and cultural influences on IQ is Richard Nisbett’s Intelligence and How to Get It: Why Schools and Cultures Count (New York: W. W. Norton, 2010). people in a New Jersey mall: These experiments are summarized along with details on sample sizes and p-values in Anandi Mani, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao, “Poverty Impedes Cognitive Function” (working paper, 2012). unable to come up with $2,000 in thirty days: A. Lusardi, D. J. Schneider, and P. Tufano, Financially Fragile Households: Evidence and Implications (National Bureau of Economic Research, Working Paper No. 17072, May 2011). the effects were equally big: For those interested in the magnitude, the effect size ranged between Cohen’s d of 0.88 and 0.94.


pages: 271 words: 83,944

The Sellout: A Novel by Paul Beatty

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

affirmative action, cognitive dissonance, conceptual framework, desegregation, El Camino Real, haute couture, illegal immigration, Lao Tzu, late fees, mass incarceration, p-value, publish or perish, rolodex, Ronald Reagan, Rosa Parks, telemarketer, theory of mind, War on Poverty, white flight, yellow journalism

Back then he was an assistant professor in urban studies, at UC Brentwood, living in Larchmont with the rest of the L.A. intellectual class, and hanging out in Dickens doing field research for his first book, Blacktopolis: The Intransigence of African-American Urban Poverty and Baggy Clothes. “I think an examination of the confluence of independent variables on income could result in some interesting r coefficients. Frankly, I wouldn’t be surprised by p values in the .75 range.” Despite the smug attitude, Pops took a liking to Foy right away. Though Foy was born and raised in Michigan, it wasn’t often Dad found somebody in Dickens who knew the difference between a t-test and an analysis of variance. After debriefing over a box of donut holes, everyone—locals and Foy included—agreed to meet on a regular basis, and the Dum Dum Donut Intellectuals were born.


pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, fixed income, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, Thomas Bayes, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-sum game

The probability value of the t statistics is also given for convenience, and this shows that whilst the constant term is not significant the log return on the S&P 500 is a very highly significant determinant of the Amex log returns.

Table I.4.7 Coefficient estimates for the Amex and S&P 500 model

              Coefficients   Standard error   t stat     p value
Intercept        −0.0002         0.0003      −0.6665     0.5053
S&P 500 rtn       1.2885         0.0427      30.1698     0.0000

Following the results in Table I.4.7, we may write the estimated model, with t ratios in parentheses, as

Ŷ = −0.0002 + 1.2885 X
     (−0.6665)   (30.1698)

where X and Y are the daily log returns on the S&P 500 and on Amex, respectively. The Excel output automatically tests whether the explanatory variable should be included in the model, and with a t ratio of 30.1698 this is certainly the case.
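The same kind of regression is easy to reproduce in code; a short hedged sketch with simulated index and stock log returns (not the Amex/S&P data) uses scipy's linregress, which reports the slope, its p-value, and the standard error from which the t ratio follows:

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(6)
x = rng.normal(scale=0.012, size=1500)               # daily log returns of the index (simulated)
y = 1.3 * x + rng.normal(scale=0.008, size=1500)     # stock returns with a beta near 1.3 plus noise

res = linregress(x, y)
t_ratio = res.slope / res.stderr
print(round(res.intercept, 5), round(res.slope, 4), round(t_ratio, 1), res.pvalue)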


pages: 351 words: 123,876

Beautiful Testing: Leading Professionals Reveal How They Improve Software (Theory in Practice) by Adam Goucher, Tim Riley

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, barriers to entry, Black Swan, call centre, continuous integration, Debian, Donald Knuth, en.wikipedia.org, Firefox, Grace Hopper, index card, Isaac Newton, natural language processing, p-value, performance metric, revision control, six sigma, software as a service, software patent, the scientific method, Therac-25, Valgrind, web application

However, some bugs are more subtle, and so more sophisticated tests may be necessary. The recommendation is to start with the simplest tests and work up to more advanced tests. The simplest tests, besides being easiest to implement, are also the easiest to understand. A software developer is more likely to respond well to being told, “Looks like the average of your generator is 7 when it should be 8,” than to being told, “I’m getting a small p-value from my Kolmogorov-Smirnov test.”

Range Tests

If a probability distribution has a limited range, the simplest thing to test is whether the output values fall in that range. For example, an exponential distribution produces only positive values. If your test detects a single negative value, you’ve found a bug. However, for other distributions, such as the normal, there are no theoretical bounds on the outputs; all output values are possible, though some values are exceptionally unlikely. There is one aspect of output ranges that cannot be tested effectively by black-box testing: boundary values.
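Here is a hedged sketch of that progression for an exponential generator (NumPy's generator stands in for the "generator under test"; the sample size and tolerance are arbitrary): a range test first, a simple mean check second, and only then a Kolmogorov-Smirnov test.

import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(7)
sample = rng.exponential(scale=1.0, size=20000)   # generator under test (illustrative)

# 1. Range test: an exponential generator must never produce a negative value.
assert (sample >= 0).all(), "range test failed: negative output"

# 2. Simple moment test: the mean should be near 1, within a few standard errors.
stderr = sample.std() / np.sqrt(len(sample))
assert abs(sample.mean() - 1.0) < 4 * stderr, "mean far from its theoretical value"

# 3. More sophisticated distributional test, only worth reaching for after the easy ones pass.
stat, p_value = kstest(sample, "expon")
print(round(sample.mean(), 3), round(stat, 4), round(p_value, 3))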


pages: 336 words: 163,867

How to Diagnose and Fix Everything Electronic by Michael Geier

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

p-value, popular electronics, remote working

If you find no voltage at all, there could be a little sub-regulator on the board to power the micro, and it might be bad. If you see voltage there (typically 5 volts, but possibly less and very occasionally more) but no oscillation, the crystal may be dead. Without a clock to drive it, the micro will sit there like a rock. If you do see oscillation, check that its peak-to-peak (p-p) value is fairly close to the total power supply voltage running the micro. If it’s a 5-volt micro and the oscillation is 1 volt p-p, the micro won’t get clocked. If you have power and a running micro, you should see some life someplace. Lots of products include small backup batteries on their boards. See Figure 11-1. These batteries keep the clock running and preserve user preferences. Loss of battery power causes resetting of data to the default states but doesn’t prevent the product from working.


pages: 385 words: 118,901

Black Edge: Inside Information, Dirty Money, and the Quest to Bring Down the Most Wanted Man on Wall Street by Sheelah Kolhatkar

Bernie Madoff, Donald Trump, family office, fear of failure, financial deregulation, hiring and firing, income inequality, light touch regulation, locking in a profit, margin call, medical residency, mortgage debt, p-value, pets.com, Ponzi scheme, rent control, Ronald Reagan, short selling, Silicon Valley, Skype, The Predators' Ball

He was sure that some of his patients had benefited from the drug, and he hoped they’d be able to continue taking it. He told Martoma that in spite of the negative results, he was still hopeful that bapi might work, because he had observed some improvements in his own patients who were taking it. “I don’t know how you can say that when the statistical evidence shows otherwise,” Martoma said. He cited the exact p-values, a number that indicated whether a result was statistically significant or not, and a handful of other specific figures that had just been included in the presentation to the investigators. The results still hadn’t been publicly released. Ross was flabbergasted. How could Martoma possibly know about those details? It was as if Martoma had seen the presentation he had just seen. But he knew that was impossible.


pages: 968 words: 224,513

The Art of Assembly Language by Randall Hyde

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Donald Knuth, p-value, sorting algorithm, Von Neumann architecture, Y2K

For example: static p:procedure( i:int32; c:char ) := &SomeProcedure; Note that SomeProcedure must be a procedure whose parameter list exactly matches p's parameter list (i.e., two value parameters, the first is an int32 parameter and the second is a char parameter). To indirectly call this procedure, you could use either of the following sequences: push( Value_for_i ); push( Value_for_c ); call( p ); or p( Value_for_i, Value_for_c ); The high-level language syntax has the same features and restrictions as the high-level syntax for a direct procedure call. The only difference is the actual call instruction HLA emits at the end of the calling sequence. Although all the examples in this section use static variable declarations, don't get the idea that you can declare simple procedure pointers only in the static or other variable declaration sections.


pages: 1,606 words: 168,061

Python Cookbook by David Beazley, Brian K. Jones

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

don't repeat yourself, Firefox, Guido van Rossum, iterative process, p-value, web application

Solution The ctypes module can be used to create Python callables that wrap around arbitrary memory addresses. The following example shows how to obtain the raw, low-level address of a C function and how to turn it back into a callable object: >>> import ctypes >>> lib = ctypes.cdll.LoadLibrary(None) >>> # Get the address of sin() from the C math library >>> addr = ctypes.cast(lib.sin, ctypes.c_void_p).value >>> addr 140735505915760 >>> # Turn the address into a callable function >>> functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double) >>> func = functype(addr) >>> func <CFunctionType object at 0x1006816d0> >>> # Call the resulting function >>> func(2) 0.9092974268256817 >>> func(0) 0.0 >>> Discussion To make a callable, you must first create a CFUNCTYPE instance. The first argument to CFUNCTYPE() is the return type.


Adaptive Markets: Financial Evolution at the Speed of Thought by Andrew W. Lo

Albert Einstein, Alfred Russel Wallace, algorithmic trading, Andrei Shleifer, Arthur Eddington, Asian financial crisis, asset allocation, asset-backed security, backtesting, bank run, barriers to entry, Berlin Wall, Bernie Madoff, bitcoin, Bonfire of the Vanities, bonus culture, break the buck, Brownian motion, business process, butterfly effect, capital asset pricing model, Captain Sullenberger Hudson, Carmen Reinhart, Chance favours the prepared mind, collapse of Lehman Brothers, collateralized debt obligation, commoditize, computerized trading, corporate governance, creative destruction, Credit Default Swap, credit default swaps / collateralized debt obligations, cryptocurrency, Daniel Kahneman / Amos Tversky, delayed gratification, Diane Coyle, diversification, diversified portfolio, double helix, easy for humans, difficult for computers, Ernest Rutherford, Eugene Fama: efficient market hypothesis, experimental economics, experimental subject, Fall of the Berlin Wall, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, Fractional reserve banking, framing effect, Gordon Gekko, greed is good, Hans Rosling, Henri Poincaré, high net worth, housing crisis, incomplete markets, index fund, interest rate derivative, invention of the telegraph, Isaac Newton, James Watt: steam engine, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Meriwether, Joseph Schumpeter, Kenneth Rogoff, London Interbank Offered Rate, Long Term Capital Management, loss aversion, Louis Pasteur, mandelbrot fractal, margin call, Mark Zuckerberg, market fundamentalism, martingale, merger arbitrage, meta analysis, meta-analysis, Milgram experiment, money market fund, moral hazard, Myron Scholes, Nick Leeson, old-boy network, out of africa, p-value, paper trading, passive investing, Paul Lévy, Paul Samuelson, Ponzi scheme, predatory finance, prediction markets, price discovery process, profit maximization, profit motive, quantitative hedge fund, quantitative trading / quantitative finance, RAND corporation, random walk, randomized controlled trial, Renaissance Technologies, Richard Feynman, Richard Feynman, Richard Feynman: Challenger O-ring, risk tolerance, Robert Shiller, Robert Shiller, short selling, sovereign wealth fund, statistical arbitrage, Steven Pinker, stochastic process, survivorship bias, The Great Moderation, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, Thomas Malthus, Thorstein Veblen, Tobin tax, too big to fail, transaction costs, Triangle Shirtwaist Factory, ultimatum game, Upton Sinclair, US Airways Flight 1549, Walter Mischel, Watson beat the top human players on Jeopardy!, WikiLeaks, Yogi Berra, zero-sum game

More precisely, the variance of random walk increments is linear in the time interval of the increment. See Lo and MacKinlay (1988) for details. 4. Of course, the expected payoff of most investments also increases with the investment horizon, enough to entice many to be long-term investors. We’ll come back to this issue later in chapter 8 when we explore the strange world of hedge funds, but for now let’s focus on the variance. 5. The p-value of a z-score of 7.51 is 2.9564 × 10⁻¹⁴. This result was based on an equally weighted index of all stocks traded on the New York, American, and NASDAQ stock exchanges during our sample. When we applied our test to a value-weighted version of that stock index—one where larger stocks received proportionally greater weight—the rejection was less dramatic but still compelling: the odds of the Random Walk Hypothesis in this case were slightly less than 1 out of 100. 6.
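The conversion in note 5 from a z-score to a p-value is a one-line normal-tail calculation, sketched here (the 7.51 comes from the note; the quoted figure corresponds to the one-sided tail):

from scipy.stats import norm

z = 7.51
print(norm.sf(z))        # one-sided tail, matching the 2.9564 x 10^-14 quoted in the note
print(2 * norm.sf(z))    # two-sided version, roughly twice as large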


pages: 1,042 words: 266,547

Security Analysis by Benjamin Graham, David Dodd

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

activist fund / activist shareholder / activist investor, asset-backed security, backtesting, barriers to entry, capital asset pricing model, carried interest, collateralized debt obligation, collective bargaining, corporate governance, corporate raider, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, diversified portfolio, fear of failure, financial innovation, fixed income, full employment, index fund, intangible asset, invisible hand, Joseph Schumpeter, locking in a profit, Long Term Capital Management, low cost carrier, moral hazard, mortgage debt, Myron Scholes, p-value, Right to Buy, risk-adjusted returns, risk/return, secular stagnation, shareholder value, The Chicago School, the market place, the scientific method, The Wealth of Nations by Adam Smith, transaction costs, zero-coupon bond

In a theoretical sense this is entirely true, but in practice it may not be true at all, because a division of capitalization into senior securities and common stock may have a real advantage over a single common-stock issue. This subject will receive extended treatment under the heading of “Capitalization Structure” in Chap. 40. The distinction between the idea just suggested and our “rule of maximum valuation” may be clarified as follows:

1. Assume Company X = Company Y.
2. Company X has preferred (P) and common (C); Company Y has common only (C').
3. Then it would appear that Value of P + value of C = value of C', since each side of the equation represents equal things, namely the total value of each company.

But this apparent relationship may not hold good in practice because the preferred-and-common capitalization method may have real advantages over a single common-stock issue. On the other hand, our “rule of maximum valuation” merely states that the value of P alone cannot exceed value of C'.


pages: 892 words: 91,000

Valuation: Measuring and Managing the Value of Companies by Tim Koller, McKinsey, Company Inc., Marc Goedhart, David Wessels, Barbara Schwimmer, Franziska Manoury

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

activist fund / activist shareholder / activist investor, air freight, barriers to entry, Basel III, BRICs, business climate, business process, capital asset pricing model, capital controls, Chuck Templeton: OpenTable, cloud computing, commoditize, compound rate of return, conceptual framework, corporate governance, corporate social responsibility, creative destruction, credit crunch, Credit Default Swap, discounted cash flows, distributed generation, diversified portfolio, energy security, equity premium, fixed income, index fund, intangible asset, iterative process, Long Term Capital Management, market bubble, market friction, meta analysis, meta-analysis, Myron Scholes, negative equity, new economy, p-value, performance metric, Ponzi scheme, price anchoring, purchasing power parity, quantitative easing, risk/return, Robert Shiller, Robert Shiller, shareholder value, six sigma, sovereign wealth fund, speech recognition, survivorship bias, technology bubble, time value of money, too big to fail, transaction costs, transfer pricing, value at risk, yield curve, zero-coupon bond

At the end of the research phase, there are three possible outcomes: success combined with an increase in the value of a marketable drug to $5,594 million, success combined with a decrease in the value of a marketable drug to $3,327 million, and failure leading to a drug value of $0.

27 The formula for estimating the upward probability is [(1 + k)^T − d]/(u − d) = (1.073 − 0.77)/(1.30 − 0.77) = 0.86, where k is the expected return on the asset.

EXHIBIT 35.18 Decision Tree: R&D Option with Technological and Commercial Risk ($ million). The tree runs through the research, testing, and marketing phases, with technological risk events (success/failure, probability p) followed by commercial risk events (value up/down, probability q*) and decision events at each stage. Note: NPV = net present value of project; q* = binomial (risk-neutral) probability of an increase in marketable drug value; p = probability of technological success.
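To make the roll-back logic of such a tree explicit, here is a hedged sketch with made-up probabilities and payoffs (not the exhibit's figures): each chance node's value is the probability-weighted average of its branches, discounting set aside for simplicity.

# Minimal decision-tree roll-back with hypothetical numbers (illustration only).
def chance_node(branches):
    """branches: list of (probability, value) pairs; returns the expected value."""
    return sum(p * v for p, v in branches)

# Commercial outcome if the drug reaches the market (value up versus value down).
marketed = chance_node([(0.60, 6000.0), (0.40, 3000.0)])

# Testing phase: technological success leads to the marketed drug, failure is worth 0.
testing = chance_node([(0.25, marketed), (0.75, 0.0)])

# Research phase: success leads on to testing, failure again is worth 0.
research = chance_node([(0.50, testing), (0.50, 0.0)])

print(round(marketed, 1), round(testing, 1), round(research, 1))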