pages: 719 words: 104,316 
R Cookbook by Paul Teetor Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Debian, en.wikipedia.org, p-value, quantitative trading / quantitative finance, statistical model

However, it produces an annoying warning message, shown here at the bottom of the output, when the p-value is below 0.01:

> library(tseries)
> adf.test(x)

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -4.3188, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(x) : p-value smaller than printed p-value

Fortunately, I can muzzle the function by calling it inside suppressWarnings(...):

> suppressWarnings(adf.test(x))

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -4.3188, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Notice that the warning message disappeared. The message is not entirely lost, because R retains it internally. I can retrieve it at my leisure by using the warnings function:

> warnings()
Warning message:
In adf.test(x) : p-value smaller than printed p-value

Some functions also produce "messages" (in R terminology), which are even more benign than warnings. …

Solution Use the table function to produce a contingency table from the two factors. Then use the summary function to perform a chi-squared test of the contingency table:

> summary(table(fac1,fac2))

The output includes a p-value. Conventionally, a p-value of less than 0.05 indicates that the variables are likely not independent, whereas a p-value exceeding 0.05 fails to provide any such evidence.

Discussion This example performs a chi-squared test on the contingency table of Recipe 9.3 and yields a p-value of 0.01255:

> summary(table(initial,outcome))
Number of cases in table: 100
Number of factors: 2
Test for independence of all factors:
        Chisq = 8.757, df = 2, p-value = 0.01255

The small p-value indicates that the two factors, initial and outcome, are probably not independent. Practically speaking, we conclude there is some connection between the variables.
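The mechanics behind summary(table(...)) can be sketched in a few lines of Python. The 2×3 counts below are made up for illustration (the excerpt does not show the recipe's underlying initial/outcome table); the last line double-checks the recipe's reported statistic, using the fact that for 2 degrees of freedom the chi-squared tail probability is exactly exp(-x/2):

```python
import math

# Hypothetical 2x3 contingency table of counts (rows = levels of one factor,
# columns = levels of the other). These numbers are invented for illustration.
observed = [[20, 10, 5],
            [15, 25, 25]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected count under independence: (row total) * (column total) / n
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Pearson chi-squared statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))

df = (len(observed) - 1) * (len(col_totals) - 1)   # (rows-1)*(cols-1) = 2

# For df = 2 the chi-squared survival function is exp(-x/2), so the recipe's
# Chisq = 8.757 should give back roughly its reported p-value of 0.01255.
p_recipe = math.exp(-8.757 / 2)
```

In practice one would reach for scipy.stats.chi2_contingency rather than hand-rolling this, but the arithmetic above is all the test does before the tail-probability lookup.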
… Do you notice the extreme right-hand column containing double asterisks (**), a single asterisk (*), and a period (.)? That column highlights the significant variables. The line labeled "Signif. codes" at the bottom gives a cryptic guide to the flags' meanings:

***      p-value between 0 and 0.001
**       p-value between 0.001 and 0.01
*        p-value between 0.01 and 0.05
.        p-value between 0.05 and 0.1
(blank)  p-value between 0.1 and 1.0

The column labeled Std. Error is the standard error of the estimated coefficient. The column labeled t value is the t statistic from which the p-value was calculated.

Residual standard error
Residual standard error: 1.625 on 26 degrees of freedom
This reports the standard error of the residuals (σ), that is, the sample standard deviation of ε.

R2 (coefficient of determination)
Multiple R-squared: 0.4981,  Adjusted R-squared: 0.4402
R2 is a measure of the model's quality.
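The flag legend is mechanical enough to restate as a function; a minimal Python sketch of the thresholds listed above (boundary handling follows the intervals as printed):

```python
def signif_code(p):
    """Return R's 'Signif. codes' flag for a p-value, per the legend above."""
    if p <= 0.001:
        return '***'   # p between 0 and 0.001
    if p <= 0.01:
        return '**'    # p between 0.001 and 0.01
    if p <= 0.05:
        return '*'     # p between 0.01 and 0.05
    if p <= 0.1:
        return '.'     # p between 0.05 and 0.1
    return ''          # p between 0.1 and 1.0
```

For example, signif_code(0.03) returns '*', matching what summary() would print next to a coefficient with that p-value.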

Beginning R: The Statistical Programming Language by Mark Gardener Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
correlation coefficient, distributed generation, natural language processing, New Urbanism, p-value, statistical model

You might prefer to display the values as whole numbers, and you can adjust the output "on the fly" by using the round() command to choose how many decimal places to display, like so:

> round(bird.cs$exp, 0)
              Garden Hedgerow Parkland Pasture Woodland
Blackbird         60       11       24       4        2
Chaffinch         17        3        7       1        1
Great Tit         40        7       16       3        1
House Sparrow     44        8       17       3        2
Robin              8        2        3       1        0
Song Thrush        6        1        2       0        0

In this instance you chose to use no decimals at all, and so use 0 as an instruction in the round() command.

Monte Carlo Simulation

You can determine the p-value by a slightly different method, using a Monte Carlo simulation. You add an extra instruction to the chisq.test() command, simulate.p.value = TRUE, like so:

> chisq.test(bird.df, simulate.p.value = TRUE, B = 2500)

        Pearson's Chi-squared test with simulated p-value (based on 2500 replicates)

data:  bird.df
X-squared = 78.2736, df = NA, p-value = 0.0003998

The default is simulate.p.value = FALSE with B = 2000. The latter is the number of replicates to use in the Monte Carlo test, which is set to 2500 for this example.

Yates' Correction for 2 × 2 Tables

When you have a 2 × 2 contingency table it is common to apply the Yates' correction. … Now run the chi-squared test again, but this time use a Monte Carlo simulation with 3000 replicates to determine the p-value:

> (bees.cs = chisq.test(bees, simulate.p.value = TRUE, B = 3000))

        Pearson's Chi-squared test with simulated p-value (based on 3000 replicates)

data:  bees
X-squared = 120.6531, df = NA, p-value = 0.0003332

4. Look at a portion of the data as a 2 × 2 contingency table.
Examine the effect of Yates' correction on this subset:

> bees[1:2, 4:5]
               Honey.bee Carder.bee
Thistle               12          8
Vipers.bugloss        13         27

> chisq.test(bees[1:2, 4:5], correct = FALSE)

        Pearson's Chi-squared test

data:  bees[1:2, 4:5]
X-squared = 4.1486, df = 1, p-value = 0.04167

> chisq.test(bees[1:2, 4:5], correct = TRUE)

        Pearson's Chi-squared test with Yates' continuity correction

data:  bees[1:2, 4:5]
X-squared = 3.0943, df = 1, p-value = 0.07857

5. Look at the last two columns, representing two bee species. Carry out a goodness of fit test to determine if the proportions of visits are the same:

> with(bees, chisq.test(Honey.bee, p = Carder.bee, rescale = T))

        Chi-squared test for given probabilities

data:  Honey.bee
X-squared = 58.088, df = 4, p-value = 7.313e-12

Warning message:
In chisq.test(Honey.bee, p = Carder.bee, rescale = T) :
  Chi-squared approximation may be incorrect

6. Carry out the same goodness of fit test, but use a simulation to determine the p-value (you can abbreviate the command):

> with(bees, chisq.test(Honey.bee, p = Carder.bee, rescale = T, sim = T))

        Chi-squared test for given probabilities with simulated p-value (based on 2000 replicates)

data:  Honey.bee
X-squared = 58.088, df = NA, p-value = 0.0004998

7. Now look at a single column and carry out a goodness of fit test. This time omit the p = instruction to test the fit to equal probabilities:

> chisq.test(bees$Honey.bee)

        Chi-squared test for given probabilities

data:  bees$Honey.bee
X-squared = 2.5, df = 4, p-value = 0.6446

How It Works

The basic form of the chisq.test() command will operate on a matrix or data frame.
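The simulate.p.value mechanism can be sketched in Python. This is a hedged illustration, not R's implementation, and the observed counts below are made up. One detail worth noting: R reports the simulated p-value as (hits + 1)/(B + 1), which is why the smallest value it can ever print is 1/(B + 1); the outputs above show exactly that, since 1/2001 ≈ 0.0004998, 1/2501 ≈ 0.0003998, and 1/3001 ≈ 0.0003332.

```python
import random

def chi2_stat(observed, expected):
    # Pearson statistic: sum of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def simulated_p_value(observed, probs, B=2000, seed=42):
    """Monte Carlo p-value in the spirit of chisq.test(..., simulate.p.value=TRUE).

    Draws B multinomial samples under the null probabilities and counts how
    often the simulated statistic is at least as large as the observed one,
    reporting (hits + 1) / (B + 1) as R does.
    """
    rng = random.Random(seed)
    n = sum(observed)
    expected = [pr * n for pr in probs]
    obs_stat = chi2_stat(observed, expected)
    hits = 0
    for _ in range(B):
        counts = [0] * len(probs)
        for _ in range(n):            # draw one multinomial sample of size n
            u = rng.random()
            acc = 0.0
            for i, pr in enumerate(probs):
                acc += pr
                if u < acc:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1       # guard against floating-point round-off
        if chi2_stat(counts, expected) >= obs_stat:
            hits += 1
    return (hits + 1) / (B + 1)

# Hypothetical visit counts, loosely in the spirit of the bee example above,
# tested against a null of equal proportions
observed = [12, 8, 13, 27]
probs = [0.25, 0.25, 0.25, 0.25]
p = simulated_p_value(observed, probs, B=500)
```

With a small B the granularity of the answer is coarse, which is why the text bumps B up to 2500 or 3000 replicates.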
pages: 443 words: 51,804 
Handbook of Modeling HighFrequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, Mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

However, we could clearly see them in the figures obtained using the DFA method. (CHAPTER 6, Long Correlations Applied to the Study of Memory)

FIGURE 6.5 Analysis results for EEM index using the entire period available. [Four panels of cumulative distributions of normalized returns for EEM 2003–2009: T = 1 (α = 1.50), T = 4 (α = 1.60), T = 8 (α = 1.60), T = 16 (α = 1.60); DFA analysis gives α = 0.74338 and Hurst analysis gives H = 0.57794.]
6.4 Results and Discussions

FIGURE 6.6 Analysis results for S&P 500 index using the entire period available. [Cumulative distributions of normalized returns for S&P 500 2001–2009: T = 1 (α = 1.55), T = 4 (α = 1.50), T = 8 (α = 1.40), T = 16 (α = 1.40); DFA analysis gives α = 0.67073 and Hurst analysis gives H = 0.56657.]

[Normality-test plots belonging to Figure 6.7: iShares MSCI EAFE Index (EFA), 1/3/03 to 1/2/04 — Anderson-Darling and Kolmogorov-Smirnov probability plots (Mean 0.0005256, StDev 0.004476, N = 252, AD = 0.444, P-Value = 0.283); Sp500(827), 1/2/2003 until 12/31/2003 — Anderson-Darling (Mean 0.0004035, StDev 0.004663, N = 252, AD = 0.418, P-Value = 0.327) and Ryan-Joiner (RJ = 0.995, P-Value = 0.094) plots.]
FIGURE 6.7 Several normality tests for 2003 using the three indices. [Remaining panels: Kolmogorov-Smirnov for Sp500(827), 1/2/2003 until 12/31/2003 (KS = 0.039, P-Value > 0.150); normal probability plot with 95% CI of (EFA), 1/3/03 to 1/2/04 (KS = 0.055, P-Value = 0.064); Anderson-Darling and empirical CDF for MSCI (EEM), 4/15/2003 until 12/31/2003 (Mean 0.001153, StDev 0.004840, N = 180, AD = 0.272, P-Value = 0.668).]

TABLE 6.10 Dow Jones Index and its Components: p-Value of the ADF and PP Tests of Unit Root, H Exponent and α Exponent Calculated Using R/S and DFA Analysis for All Components and Index. [Only the ticker symbols and company names survive in this excerpt: DJI (Dow Jones Industrial Ave), AA (Alcoa), AIG (Amer Intl Group), AXP (Amer Express), BA (Boeing), C (Citigroup), CAT (Caterpillar), DD (Du Pont E I De Nem), DIS (Walt Disney), GE (Gen Electric), GM (Gen Motors), HD (Home Depot), HON (Honeywell Intl), HPQ (Hewlett Packard), IBM (Intl Business Mach), INTC (Intel Cp), JNJ (Johnson and Johns DC), JPM (JP Morgan Chase), KO (Coca Cola), MCD (McDonalds Cp), MMM (3M), MO (Altria Group), MRK (Merck), MSFT (Microsoft Cp), PFE (Pfizer), PG (Procter Gamble), T (AT&T), UTX, VZ, WMT, XOM; the numeric columns are not preserved.] …

It is worth mentioning that while the stationarity tests reject the presence of the unit root in the characteristic polynomial, that does not necessarily mean that the data is stationary, only that the particular type of nonstationarity indicated …

FIGURE 6.1 Plot of the empirical CDF of the returns for Stock 1.
(a) The image contains the original CDF. (b) The image is the same empirical CDF but rescaled so that the discontinuities are clearly seen.

TABLE 6.1 DFA and Hurst Analysis

Data      ADF (p-value)  PP (p-value)  KPSS (p-value)  DFA                  HURST
Stock 1   <0.01          <0.01         >0.1            0.525178  0.007037   0.561643  0.005423
Stock 2   <0.01          <0.01         >0.1            0.64812   0.01512    0.490789  0.006462
Stock 3   <0.01          <0.01         >0.1            0.66368   0.01465    0.628440  0.006138
Stock 4   <0.01          <0.01         >0.1            0.66969   0.01506    0.644534  0.005527
Stock 5   <0.01          <0.01         >0.1            0.65525   0.02916    0.65044   0.02908
Stock 6   <0.01          <0.01         >0.1            0.74206   0.01032    0.722893  0.008662
Stock 7   <0.01          <0.01         >0.1            0.50432   0.01212    0.644820  0.008521
Stock 8   <0.01          <0.01         >0.1            0.66184   0.01681    0.38046   0.01673
Stock 9   <0.01          <0.01         >0.1            0.72729   0.01383    0.635075  0.006374
Stock 10  <0.01          <0.01         0.07686         0.79322   0.01158    0.654970  0.006413
Stock 11  <0.01          <0.01         >0.1            0.322432  0.007075   0.52485   0.01265
Stock 12  <0.01          <0.01         >0.1            0.70352   0.01429    0.596178  0.007172
Stock 13  <0.01          <0.01         >0.1            0.74889   0.02081    0.58279   0.00825
Stock 14  <0.01          <0.01         >0.1            0.70976   0.01062    0.578053  0.007177
Stock 15  <0.01          <0.01         >0.1            0.76746   0.01029    0.588555  0.004527
Stock 16  <0.01          <0.01         >0.1            0.62549   0.01554    0.61023   0.01083
Stock 17  <0.01          <0.01         >0.1            0.80534   0.02432    0.591336  0.006912
Stock 18  <0.01          <0.01         0.076           0.69134   0.01336    0.596003  0.001927
Stock 19  <0.01          <0.01         >0.1            0.678050  0.009018   0.596190  0.005278
Stock 20  <0.01          <0.01         >0.1            0.48603   0.01462    0.59426   0.01829
Stock 21  <0.01          <0.01         >0.1            0.65553   0.02517    0.50115   0.01086
Stock 22  <0.01          <0.01         >0.1            0.70807   0.01081    0.552367  0.009506
Stock 23  <0.01          <0.01         >0.1            0.717223  0.009553   0.594051  0.006709
Stock 24  <0.01          <0.01         >0.1            0.45403   0.01370    0.37129   0.02267
Stock 25  <0.01          <0.01         0.02718         0.63043   0.01200    0.646725  0.005784
Stock 26  <0.01          <0.01         >0.1            0.59568   0.01464    0.51591   0.01586

Abbreviations: ADF, augmented Dickey–Fuller unit-root test; PP, Phillips–Perron unit-root test; KPSS, Kwiatkowski–Phillips–Schmidt–Shin stationarity test; DFA, detrended fluctuation analysis; Hurst, rescaled range analysis.
pages: 589 words: 69,193 
Mastering Pandas by Femi Anthony Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Amazon Web Services, correlation coefficient, correlation does not imply causation, Debian, en.wikipedia.org, Internet of things, natural language processing, p-value, random walk, side project, statistical model

In more formal terms, we would normally define a threshold or alpha value and reject the null hypothesis if the p-value ≤ α, or fail to reject otherwise. The typical values for α are 0.05 or 0.01. The following list explains the different ranges of the p-value:

p-value < 0.01: There is VERY strong evidence against H0
0.01 < p-value < 0.05: There is strong evidence against H0
0.05 < p-value < 0.1: There is weak evidence against H0
p-value > 0.1: There is little or no evidence against H0

Therefore, in this case, we would reject the null hypothesis, give credence to Intelligenza's claim, and state that their claim is highly significant. The evidence against the null hypothesis in this case is significant. There are two methods that we use to determine whether to reject the null hypothesis:

The p-value approach
The rejection region approach

The approach that we used in the preceding example was the latter one. …

The alpha and p-values

In order to conduct an experiment to decide for or against our null hypothesis, we need to come up with an approach that will enable us to make the decision in a concrete and measurable way. To do this test of significance, we have to consider two numbers: the p-value of the test statistic and the threshold level of significance, which is also known as alpha. The p-value is the probability that the result we observe would arise by chance alone, assuming that the null hypothesis is true. It can also be thought of as the probability of obtaining a test statistic as extreme as or more extreme than the actual obtained test statistic, given that the null hypothesis is true. The alpha value is the threshold against which we compare p-values. This gives us a cutoff point for deciding whether to reject the null hypothesis.
… In general, the rule is as follows: If the p-value is less than or equal to alpha (p ≤ .05), then we reject the null hypothesis and state that the result is statistically significant. If the p-value is greater than alpha (p > .05), then we have failed to reject the null hypothesis, and we say that the result is not statistically significant. The seemingly arbitrary values of alpha in use are one of the shortcomings of the frequentist methodology, and there are many questions concerning this approach. The following article in the Nature journal highlights some of the problems: http://www.nature.com/news/scientific-method-statistical-errors-1.14700. For more details on this topic, refer to:

http://statistics.about.com/od/InferentialStatistics/a/What-Is-The-Difference-Between-Alpha-And-P-Values.htm
http://bit.ly/1GzYX1P
http://en.wikipedia.org/wiki/P-value

Type I and Type II errors

There are two types of errors, as explained here:

Type I Error: In this type of error, we reject H0 when in fact H0 is true.
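The decision rule and the evidence scale quoted above collapse into one small function. This is just a restatement of the text for illustration, not a pandas or scipy API:

```python
def evidence_against_h0(p_value, alpha=0.05):
    """Apply the rule above: reject H0 when p <= alpha.

    Also returns the rough evidence label from the text's scale.
    """
    reject = p_value <= alpha
    if p_value < 0.01:
        strength = "very strong"
    elif p_value < 0.05:
        strength = "strong"
    elif p_value < 0.1:
        strength = "weak"
    else:
        strength = "little or none"
    return reject, strength
```

So a test returning p = 0.03 rejects H0 at α = 0.05 with "strong" evidence, while p = 0.07 fails to reject even though the evidence against H0 is not zero, which is exactly the borderline situation the Nature article criticizes.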
pages: 579 words: 76,657 
Data Science from Scratch: First Principles with Python by Joel Grus Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
correlation does not imply causation, natural language processing, Netflix Prize, p-value, Paul Graham, recommendation engine, SpamAssassin, statistical model

One way to convince yourself that this is a sensible estimate is with a simulation:

extreme_value_count = 0
for _ in range(100000):
    num_heads = sum(1 if random.random() < 0.5 else 0    # count # of heads
                    for _ in range(1000))                # in 1000 flips
    if num_heads >= 530 or num_heads <= 470:             # and count how often
        extreme_value_count += 1                         # the # is 'extreme'

print extreme_value_count / 100000                       # 0.062

Since the p-value is greater than our 5% significance, we don't reject the null. If we instead saw 532 heads, the p-value would be:

two_sided_p_value(531.5, mu_0, sigma_0)   # 0.0463

which is smaller than the 5% significance, which means we would reject the null. It's the exact same test as before. It's just a different way of approaching the statistics. Similarly, we would have:

upper_p_value = normal_probability_above
lower_p_value = normal_probability_below

For our one-sided test, if we saw 525 heads we would compute:

upper_p_value(524.5, mu_0, sigma_0)   # 0.061

which means we wouldn't reject the null. If we saw 527 heads, the computation would be:

upper_p_value(526.5, mu_0, sigma_0)   # 0.047

and we would reject the null.

Warning: Make sure your data is roughly normally distributed before using normal_probability_above to compute p-values.
… In a situation like this, where n is much larger than k, we can use normal_cdf and still feel good about ourselves:

def p_value(beta_hat_j, sigma_hat_j):
    if beta_hat_j > 0:
        # if the coefficient is positive, we need to compute twice the
        # probability of seeing an even *larger* value
        return 2 * (1 - normal_cdf(beta_hat_j / sigma_hat_j))
    else:
        # otherwise twice the probability of seeing a *smaller* value
        return 2 * normal_cdf(beta_hat_j / sigma_hat_j)

p_value(30.63, 1.174)    # ~0 (constant term)
p_value(0.972, 0.079)    # ~0 (num_friends)
p_value(-1.868, 0.131)   # ~0 (work_hours)
p_value(0.911, 0.990)    # 0.36 (phd)

(In a situation not like this, we would probably be using statistical software that knows how to compute the t-distribution, as well as how to compute the exact standard errors.) While most of the coefficients have very small p-values (suggesting that they are indeed nonzero), the coefficient for "PhD" is not "significantly" different from zero, which makes it likely that the coefficient for "PhD" is random rather than meaningful. In more elaborate regression scenarios, you sometimes want to test more elaborate hypotheses about the data, such as "at least one of the βj is nonzero" or "βi equals βj," which you can do with an F-test, which, alas, falls outside the scope of this book. …

So a 5% significance test involves using normal_probability_below to find the cutoff below which 95% of the probability lies:

hi = normal_upper_bound(0.95, mu_0, sigma_0)
# is 526 (< 531, since we need more probability in the upper tail)

type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability   # 0.936

This is a more powerful test, since it no longer rejects when X is below 469 (which is very unlikely to happen if H1 is true) and instead rejects when X is between 526 and 531 (which is somewhat likely to happen if H1 is true).

p-values

An alternative way of thinking about the preceding test involves p-values.
Instead of choosing bounds based on some probability cutoff, we compute the probability, assuming H0 is true, that we would see a value at least as extreme as the one we actually observed. For our two-sided test of whether the coin is fair, we compute:

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        # if x is greater than the mean, the tail is what's greater than x
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        # if x is less than the mean, the tail is what's less than x
        return 2 * normal_probability_below(x, mu, sigma)

If we were to see 530 heads, we would compute:

two_sided_p_value(529.5, mu_0, sigma_0)   # 0.062

Note: Why did we use 529.5 instead of 530?
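(The 529.5 is a continuity correction for approximating a discrete count with a continuous distribution.) The helper functions the book builds up earlier can be recreated from the standard library's math.erf, which makes the example runnable on its own; mu_0 and sigma_0 below are the usual normal approximation for 1,000 fair coin flips. This is a sketch of the same idea, not the book's exact code:

```python
import math

def normal_cdf(x, mu=0, sigma=1):
    # standard normal CDF, expressed via the error function
    return (1 + math.erf((x - mu) / (sigma * math.sqrt(2)))) / 2

def normal_probability_above(x, mu=0, sigma=1):
    return 1 - normal_cdf(x, mu, sigma)

def normal_probability_below(x, mu=0, sigma=1):
    return normal_cdf(x, mu, sigma)

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        return 2 * normal_probability_below(x, mu, sigma)

# 1,000 fair flips: mean 500, standard deviation sqrt(1000 * 0.5 * 0.5)
mu_0, sigma_0 = 500, math.sqrt(1000 * 0.5 * 0.5)
p = two_sided_p_value(529.5, mu_0, sigma_0)   # roughly 0.062, as in the text
```

By symmetry, seeing 470 heads (evaluated at 470.5) gives the same p-value as seeing 530 (evaluated at 529.5).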

EvidenceBased Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals by David Aronson Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Albert Einstein, Andrew Wiles, asset allocation, availability heuristic, backtesting, Black Swan, capital asset pricing model, cognitive dissonance, compound rate of return, Daniel Kahneman / Amos Tversky, distributed generation, Elliott wave, en.wikipedia.org, feminist movement, hindsight bias, index fund, invention of the telescope, invisible hand, Long Term Capital Management, mental accounting, meta-analysis, p-value, pattern recognition, Ponzi scheme, price anchoring, price stability, quantitative trading / quantitative finance, Ralph Nelson Elliott, random walk, retrograde motion, revision control, risk tolerance, risk-adjusted returns, riskless arbitrage, Robert Shiller, Sharpe ratio, short selling, statistical model, systematic trading, the scientific method, transfer pricing, unbiased observer, yield curve, Yogi Berra

The value 0.10 is the sample statistic's p-value. This fact is equivalent to saying that if the rule's true return were zero, there is a 0.10 probability that its return in a back test would attain a value as high as +3.5 percent or higher due to sampling variability (chance). This is illustrated in Figure 5.9.

p-value, Statistical Significance, and Rejecting the Null Hypothesis

A second name for the p-value of the test statistic is the statistical significance of the test. The smaller the p-value, the more statistically significant the test result. A statistically significant result is one for which the p-value is low enough to warrant a rejection of H0. The smaller the p-value of a test statistic, the more confident we can be that a rejection of the null hypothesis is a correct decision. The p-value can be looked upon as the degree to which the observed value of the test statistic conforms to the null hypothesis (H0). … Said differently, a conditional probability is a probability that is conditional upon some other fact being true. In a hypothesis test, this conditional probability is given the special name p-value.
Specifically, it is the probability that the observed value of the test statistic could have occurred conditioned upon (given that) the hypothesis being tested (H0) is true. The smaller the p-value, the greater is our justification for calling into question the truth of H0. If the p-value is less than a threshold, which must be defined before the test is carried out, H0 is rejected and HA accepted. The p-value can also be interpreted as the probability that H0 will be erroneously rejected when H0 is in fact true. The p-value also has a graphical interpretation. It is equal to the fraction of the sampling distribution's total area that lies at values equal to and greater than the observed value of the test statistic. …

The p-value can be looked upon as the degree to which the observed value of the test statistic conforms to the null hypothesis (H0). Larger p-values mean greater conformity, and smaller values mean less conformity. This is simply another way of saying that the more surprising (improbable) an observation is in relation to a given view of the world (the hypothesis), the more likely it is that that world view is false. How small does the p-value need to be to justify a rejection of H0? This is problem-specific and relates to the cost that would be incurred by an erroneous rejection. We will deal with the matter of errors and their costs in a moment. However, there are some standards that are commonly used.

FIGURE 5.9 P-value: fractional area of sampling distribution greater than +3.5%; the conditional probability of +3.5% or more given that H0 is true. [The figure shows the null-hypothesis sampling distribution of mean return, with the test statistic at +3.5% and an area of 0.10 of the total sampling distribution beyond it.]
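Figure 5.9's area interpretation is easy to reproduce numerically. The normal null distribution and its spread below are invented for illustration, chosen so that roughly 0.10 of the area lies at or above +3.5 percent; in Aronson's setting the sampling distribution would come from the back test itself (for example, via bootstrapping), not from an assumed normal:

```python
import random

random.seed(7)

observed_return = 3.5                      # observed mean return, in percent

# Simulated sampling distribution of the mean return when the true return
# is zero (H0). The standard deviation 2.73 is a made-up stand-in, picked
# so that about 10% of the mass lies at or above +3.5.
null_draws = [random.gauss(0.0, 2.73) for _ in range(100000)]

# p-value = fraction of the null distribution at or beyond the observed value
p_value = sum(1 for r in null_draws if r >= observed_return) / len(null_draws)
```

The resulting fraction is close to the 0.10 in the text: the p-value is literally the area of the null sampling distribution sitting at or beyond the observed test statistic.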

Everydata: The Misinformation Hidden in the Little Data You Consume Every Day by John H. Johnson Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Affordable Care Act / Obamacare, Black Swan, business intelligence, Carmen Reinhart, cognitive bias, correlation does not imply causation, Daniel Kahneman / Amos Tversky, Donald Trump, en.wikipedia.org, Kenneth Rogoff, labor-force participation, Lake Wobegon effect, Long Term Capital Management, Mercator projection (distorts size, especially Greenland and Africa), meta-analysis, Nate Silver, p-value, PageRank, pattern recognition, randomized controlled trial, risk-adjusted returns, Ronald Reagan, statistical model, The Signal and the Noise by Nate Silver, Tim Cook: Apple, Wikimedia Commons, Yogi Berra

It's a measure of how probable it is that the effect we're seeing is real (rather than due to chance occurrence), which is why it's typically measured with a p‑value. P, in this case, stands for probability. If you accept p‑values as a measure of statistical significance, then the lower your p‑value is, the less likely it is that the results you're seeing are due to chance alone.17 One oft-accepted measure of statistical significance is a p‑value of less than .05 (which equates to 5 percent probability). The widespread use of this threshold goes back to the 1920s, when it was popularized by Ronald Fisher, a mathematician who studied the effect of fertilizer on crops, among other things.18 Now, we're not here to debate whether a p‑value of .05 is an appropriate standard for statistical significance, or even whether p‑values themselves are the right way to determine statistical significance.19 Instead, we're here to tell you that p‑values—including the .05 threshold—are the standard in many applications.
… The widespread use of this threshold goes back to the 1920s, when it was popularized by Ronald Fisher, a mathematician who studied the effect of fertilizer on crops, among other things.18 Now, we’re not here to debate whether a p‑value of .05 is an appropriate standard for statistical significance, or even whether p‑values themselves are the right way to determine statistical significance.19 Instead, we’re here to tell you that p‑values—including the .05 threshold—are the standard in many applications. And that’s why they matter to you. Because when you see an article about the latest scientific discovery, it’s quite likely that it has only been accepted by the scientific community— and reported by the media—because it has a p‑value below .05. It may seem somewhat arbitrary, but, as Derek Daniels, PhD (an associate professor at the University at Buffalo) told us, “having a line allows us to stay objective. If there’s no line, then we make a big deal out of a p‑value of 0.06 when it helps us, and we ignore a p‑value of 0.04 when it hurts us.”20 Take a Deep Breath Now let’s go back to the secondhand smoke study, and see what the research actually said—that passive smoking “did not statistically significantly increase lung cancer risk.” … ., a horse’s statistical odds of winning a race might be 1⁄3, which means it is probable that the horse will win one out of every three races; in betting jargon, the odds are typically the reverse, so this same horse would have 2–1 odds against, which means it has a 2⁄3 chance of losing) Omitted variable—A variable that plays a role in a relationship, but may be overlooked or otherwise not included; omitted variables are one of the primary reasons why correlation doesn’t equal causation Outlier—A particular observation that doesn’t fit; it may be much higher (or lower) than all the other data, or perhaps it just doesn’t fall into the pattern of everything else that you’re seeing P‑hacking—Named after p‑values, p‑hacking is a term 
for the practice of repeatedly analyzing data, trying to find ways to make nonsignificant results significant P‑value—A way to measure statistical significance; the lower your p‑value is, the less likely it is that the results you’re seeing are due to chance Population—The entire set of data or observations that you want to study and draw inferences about; statisticians rarely have the ability to look at the entire population in a study, although it could be possible with a small, welldefined group (e.g., the voting habits of all 100 U.S. senators) Prediction—See forecast Prediction error—A way to measure uncertainty in the future, essentially by comparing the predicted results to the actual outcomes, once they occur Prediction interval—The range in which we expect to see the next data point Probabilistic forecast—A forecast where you determine the probability of an outcome (e.g., there is a 30 percent chance of thunderstorms tomorrow) Probability—The likelihood (typically expressed as a percentage, fraction, or decimal) that an outcome will occur Proxy—A factor that you believe is closely related (but not identical) to another difficulttomeasure factor (e.g., IQ is a proxy for innate ability) Random—When an observed pattern is due to chance, rather than some observable process or event Risk—A term that can mean different things to different people; in general, risk takes into account not only the probability of an event, but also the consequences 221158 ixiv 1210 r4ga.indd 159 2/8/16 5:58:50 PM 160 Glossary Sample—Part of the full population (e.g., the set of Challenger launches with O‑ring failures) Sample selection—A potential statistical problem that arises when the way a sample has been chosen is directly related to the outcomes one is studying; also, sometimes used to describe the process of determining a sample from a population Sampling error—The uncertainty of not knowing if a sample represents the true value in the population or not Selection bias—A 
potential concern when a sample is comprised of those who chose to participate, a factor which may bias the results Spurious correlation—A statistical relationship between two factors that has no practical or economic meaning, or one that is driven by an omitted variable (e.g., the relationship between murder rates and ice cream consumption) Statistic—A numeric measure that describes an aspect of the data (e.g., a mean, a median, a mode) Statistical impact—Having a statistically significant effect of some undetermined size Statistical significance—A probability-based method to determine whether an observed effect is truly present in the data, or just due to random chance Summary statistic—Metric that provides information about one or more aspects of the data; averages and aggregated data are two examples of summary statistics Weighted average—An average calculated by assigning each value a weight (based on the value’s relative importance) Notes Preface 1.
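The weighted-average definition above is easy to check numerically; the scores and weights below are a hypothetical example of ours, not from the glossary:

```python
import numpy as np

# A weighted average multiplies each value by its weight and divides
# by the total weight: (2*90 + 1*80 + 1*70) / (2 + 1 + 1).
scores = np.array([90.0, 80.0, 70.0])
weights = np.array([2.0, 1.0, 1.0])  # first score counts double

wavg = np.average(scores, weights=weights)
print(wavg)  # 82.5
```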
pages: 408 words: 85,118 
Python for Finance by Yuxing Yan Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
assetbacked security, business intelligence, capital asset pricing model, constrained optimization, correlation coefficient, distributed generation, diversified portfolio, implied volatility, market microstructure, P = NP, pvalue, quantitative trading / quantitative ﬁnance, Sharpe ratio, time value of money, value at risk, volatility smile Then, we conduct two tests: test whether the mean is 5.0, and test whether the mean is zero:

>>> from scipy import stats
>>> np.random.seed(1235)
>>> x = stats.norm.rvs(size=10000)
>>> print("T-value P-value (two-tail)")
>>> print(stats.ttest_1samp(x,5.0))
>>> print(stats.ttest_1samp(x,0))
T-value P-value (two-tail)
(array(-495.266783341032), 0.0)
(array(0.26310321925083124), 0.79247644375164772)
>>>

For the first test, in which we test whether the time series has a mean of 5.0, we reject the null hypothesis since the T-value is −495.3 and the P-value is 0. For the second test, we accept the null hypothesis since the T-value is close to 0.26 and the P-value is 0.79. In the following program, we test whether the mean daily return from IBM in 2013 is zero:

from scipy import stats
from matplotlib.finance import quotes_historical_yahoo
ticker='ibm'
begdate=(2013,1,1)
enddate=(2013,11,9)
p=quotes_historical_yahoo(ticker,begdate,enddate,asobject=True,adjusted=True)
ret=(p.aclose[1:] - p.aclose[:-1])/p.aclose[:-1]
print(' Mean T-value P-value ')
print(round(ret.mean(),5), stats.ttest_1samp(ret,0))

 Mean T-value P-value 
(0.00024, (array(0.296271094280657), 0.76730904089713181))

From the previous results, we know that the average daily return for IBM is 0.024 percent.
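matplotlib.finance has since been removed from matplotlib, so the download in the second listing no longer runs as written. As a self-contained sketch (ours, using the same seed but simulated data rather than Yahoo quotes), the two one-sample tests can be reproduced with scipy alone:

```python
import numpy as np
from scipy import stats

# Same seed as the text, but simulated data only (no downloads).
np.random.seed(1235)
x = stats.norm.rvs(size=10000)  # series whose true mean is 0

res5 = stats.ttest_1samp(x, 5.0)  # H0: mean = 5.0 -> reject
res0 = stats.ttest_1samp(x, 0.0)  # H0: mean = 0   -> fail to reject
print(res5.statistic, res5.pvalue)  # huge negative T, p-value ~ 0
print(res0.statistic, res0.pvalue)  # small T, large p-value
```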
… The T-value is 0.29 while the P-value is 0.77. Thus, the mean is statistically not different from zero.

Tests of equal means and equal variances

Next, we test whether the variances for IBM and DELL in 2013 are equal or not. The function sp.stats.bartlett performs Bartlett's test for equal variances with a null hypothesis that all input samples are from populations with equal variances. The outputs are the T-value and P-value:

import scipy as sp
from matplotlib.finance import quotes_historical_yahoo
begdate=(2013,1,1)
enddate=(2013,11,9)
def ret_f(ticker,begdate,enddate):
    p = quotes_historical_yahoo(ticker,begdate,enddate,asobject=True,adjusted=True)
    return (p.open[1:] - p.open[:-1])/p.open[:-1]
y=ret_f('IBM',begdate,enddate)
x=ret_f('DELL',begdate,enddate)
print(sp.stats.bartlett(x,y))

(5.1377132006045105, 0.023411467035559311)

With a T-value of 5.13 and a P-value of 2.3 percent, we conclude that these two stocks have different variances for their daily stock returns in 2013 if we choose a significance level of 5 percent.
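Since the quotes_historical_yahoo downloader is gone, here is a minimal, self-contained sketch of Bartlett's test (our own synthetic return series, constructed with deliberately unequal volatilities so the test should reject):

```python
import numpy as np
from scipy import stats

# Two hypothetical daily-return series with different volatilities
# (1% vs. 2%), standing in for the IBM and DELL downloads.
rng = np.random.default_rng(0)
ret_a = rng.normal(0.0, 0.01, 250)
ret_b = rng.normal(0.0, 0.02, 250)

stat, p = stats.bartlett(ret_a, ret_b)
print(stat, p)  # small p-value -> unequal variances
```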
… The following is the related Python code:

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def breusch_pagan_test(y,x):
    results=sm.OLS(y,x).fit()
    resid=results.resid
    n=len(resid)
    sigma2=sum(resid**2)/n
    f=resid**2/sigma2 - 1
    results2=sm.OLS(f,x).fit()
    fv=results2.fittedvalues
    bp=0.5*sum(fv**2)
    df=results2.df_model
    p_value=1 - chi2.cdf(bp,df)
    return round(bp,6), df, round(p_value,7)

np.random.seed(12345)
n=100
x=[]
error1=np.random.normal(0,1,n)
error2=np.random.normal(0,2,n)
for i in range(n):
    if i%2==1:
        x.append(1)
    else:
        x.append(-1)
y1=np.array(x)+error1
y2=np.zeros(n)
for i in range(n):
    if i%2==1:
        y2[i]=x[i]+error1[i]
    else:
        y2[i]=x[i]+error2[i]
print('y1 vs. x (we expect to accept the null hypothesis)')
print('BP value, df, p-value')
print('bp =', breusch_pagan_test(y1,x))
print('y2 vs. x (we expect to reject the null hypothesis)')
print('BP value, df, p-value')
print('bp =', breusch_pagan_test(y2,x))

For the regression of y1 against x, the residuals should be homoskedastic, that is, their variance (or standard deviation) is a constant.

Analysis of Financial Time Series by Ruey S. Tsay Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Asian financial crisis, asset allocation, BlackScholes formula, Brownian motion, capital asset pricing model, compound rate of return, correlation coefficient, data acquisition, discrete time, frictionless, frictionless market, implied volatility, index arbitrage, Long Term Capital Management, market microstructure, martingale, pvalue, pattern recognition, random walk, risk tolerance, short selling, statistical model, stochastic process, stochastic volatility, telemarketer, transaction costs, value at risk, volatility smile, Wiener process, yield curve The Ljung–Box statistics of the standardized shocks give Q(10) = 13.66 with p-value 0.19, confirming that the mean equation is adequate. However, the Ljung–Box statistics for the squared standardized shocks show Q(10) = 23.83 with p-value 0.008. The volatility equation is inadequate at the 5% level. We refine the model by considering an ARCH(2) model and obtain

r_t = 0.0225 + a_t,    σ_t^2 = 0.0113 + 0.226 a_{t−1}^2 + 0.108 a_{t−2}^2,    (3.12)

where the standard errors of the parameters are 0.006, 0.002, 0.135, and 0.094, respectively. The coefficient of a_{t−1}^2 is marginally significant at the 10% level, but that of a_{t−2}^2 is only slightly greater than its standard error. The Ljung–Box statistics for the squared standardized shocks give Q(10) = 8.82 with p-value 0.55. Consequently, the fitted ARCH(2) model appears to be adequate. … The two sample ACFs are very close to each other, and they suggest that the serial correlations of monthly IBM stock returns are very small, if any. The sample ACFs are all within their two standard-error limits, indicating that they are not significant at the 5% level. In addition, for the simple returns, the Ljung–Box statistics give Q(5) = 5.4 and Q(10) = 14.1, which correspond to p-values of 0.37 and 0.17, respectively, based on chi-squared distributions with 5 and 10 degrees of freedom. For the log returns, we have Q(5) = 5.8 and Q(10) = 13.7 with p-values 0.33 and 0.19, respectively.
The joint tests confirm that monthly IBM stock returns have no significant serial correlations. Figure 2.2 shows the same for the monthly returns of the value-weighted index from the Center for Research in Security Prices (CRSP), University of Chicago. There are some significant serial correlations at the 5% level for both return series. … If a fitted model is found to be inadequate, it must be refined. Consider the residual series of the fitted AR(3) model for the monthly value-weighted simple returns. We have Q(10) = 15.8 with p-value 0.027 based on its asymptotic chi-squared distribution with 7 degrees of freedom. Thus, the null hypothesis of no residual serial correlation in the first 10 lags is rejected at the 5% level, but not at the 1% level. If the model is refined to an AR(5) model, then we have

r_t = 0.0092 + 0.107 r_{t−1} − 0.001 r_{t−2} − 0.123 r_{t−3} + 0.028 r_{t−4} + 0.069 r_{t−5} + â_t,

with σ̂_a = 0.054. The AR coefficients at lags 1, 3, and 5 are significant at the 5% level. The Ljung–Box statistics give Q(10) = 11.2 with p-value 0.048. This model shows some improvements and appears to be marginally adequate at the 5% significance level. The mean of r_t based on the refined model is also very close to 0.01, showing that the two models have similar long-term implications.

2.4.3 Forecasting
Forecasting is an important application of time series analysis.

Commodity Trading Advisors: Risk, Performance Analysis, and Selection by Greg N. Gregoriou, Vassilios Karavas, François-Serge Lhabitant, Fabrice Douglas Rouah Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Asian financial crisis, asset allocation, backtesting, capital asset pricing model, collateralized debt obligation, commodity trading advisor, compound rate of return, constrained optimization, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, distributed generation, diversification, diversified portfolio, dividendyielding stocks, fixed income, high net worth, implied volatility, index arbitrage, index fund, interest rate swap, iterative process, linear programming, London Interbank Offered Rate, Long Term Capital Management, market fundamentalism, merger arbitrage, Mexican peso crisis / tequila crisis, pvalue, Ponzi scheme, quantitative trading / quantitative ﬁnance, random walk, riskadjusted returns, risk/return, Sharpe ratio, short selling, stochastic process, systematic trading, technology bubble, transaction costs, value at risk
[Table 21.4: ADF tests and ARMA models for the ten CTA excess-return series (CTA Exc1–Exc10), with coefficient estimates, p-values, and R² — the numeric columns are not recoverable from this extraction.]
[Table 21.5: CTA returns, 2000 to 2003: ARMA models for CTA #3, #4, and #8, with AR and MA coefficients, p-values, and R².]
As shown in Table 21.5, there is a significant improvement for CTA #3 and #8 (evidenced by the increased R²).
[Table 21.3: ARMA models for the ten CTA return series (CTA1–CTA10), with ADF tests and Chow F-statistics. All ADF tests are at the 99 percent confidence level; CTA3 rejects the hypothesis of a unit root at 90 percent.]
… The Spearman correlation coefficients show some ability to detect persistence when large differences are found in CTA data.
[Table 3.4: EGR performance persistence results from Monte Carlo generated data sets: no persistence present, by restricting a = 1.]
[Table 3.5: EGR performance persistence results from Monte Carlo generated data sets: persistence present, by allowing a to vary.]
pages: 284 words: 79,265 
The HalfLife of Facts: Why Everything We Know Has an Expiration Date by Samuel Arbesman Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Albert Einstein, Alfred Russel Wallace, Amazon Mechanical Turk, Andrew Wiles, bioinformatics, British Empire, Chelsea Manning, Clayton Christensen, cognitive bias, cognitive dissonance, conceptual framework, David Brooks, demographic transition, double entry bookkeeping, double helix, Galaxy Zoo, guest worker program, Gödel, Escher, Bach, Ignaz Semmelweis: hand washing, index fund, invention of movable type, Isaac Newton, John Harrison: Longitude, Kevin Kelly, life extension, meta analysis, metaanalysis, Milgram experiment, Nicholas Carr, pvalue, Paul Erdős, Pluto: dwarf planet, randomized controlled trial, Richard Feynman, Richard Feynman, Rodney Brooks, social graph, social web, text mining, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Thomas Malthus, Tyler Cowen: Great Stagnation On the other hand, imagine if we had gathered a much larger group and still had the same fractions: Out of 500 lefthanders, 300 carried L, while out of 500 righthanders, only 220 were carriers for L. If we ran the exact same test, we get a much lower pvalue. Now it’s less than 0.0001. This means that there is less than one hundredth of 1 percent chance that the differences are due to chance alone. The larger the sample we get, the better we can test our questions. The smaller the pvalue, the more robust our findings. But to publish a result in a scientific journal, you don’t need a minuscule pvalue. In general, you need a pvalue less than 0.05 or, sometimes, 0.01. For 0.05, this means that there is a one in twenty probability that the result being reported is in fact not real! Comic strip writer Randall Munroe illustrated some of the failings of this threshold for scientific publication: The comic shows some scientists testing whether jelly beans cause acne. … IF you ever delve a bit below the surface when reading about a scientific result, you will often bump into the term pvalue. Pvalues are an integral part of determining how new knowledge is created. 
More important, they give us a way of estimating the possibility of error. Anytime a scientist tries to discover something new or validate an exciting and novel hypothesis, she tests it against something else. Specifically, our scientist tests it against a version of the world where the hypothesis would not be true. This state of the world, where our intriguing hypothesis is not true and all that we see is exactly just as boring as we pessimistically expect, is known as the null hypothesis. Whether the world conforms to our exciting hypothesis or not can be determined by pvalues. Let’s use an example. Imagine we think that a certain form of a gene—let’s call it L—is more often found in lefthanded people than in righthanded people, and is therefore associated with lefthandedness. … The science of statistics is designed to answer this question by asking it in a more precise fashion: What is the chance that there actually is an equal frequency of lefthanders with L and righthanders with L, but we simply happened to get an uneven batch? We know that when flipping a coin ten times, we don’t necessarily get exactly five heads and five tails. The same is true in the null hypothesis scenario for our L experiment. Enter pvalues. Using sophisticated statistical analyses, we can reduce this complicated question to a single number: the pvalue. This provides us with the probability that our result, which appears to support our hypothesis, is simply due to chance. For example, using certain assumptions, we can calculate what the pvalue is for the above results: 0.16, or 16 percent. What this means is that there is about a one in six chance that this result is simply due to sampling variation (getting a few more L lefthanders and a few less L righthanded carriers than we expected, if they are of equal frequency). 
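The claim about the larger left-handedness sample can be checked directly with a chi-squared test on the 2×2 contingency table (a sketch of ours; the book does not show code):

```python
from scipy.stats import chi2_contingency

# The larger sample from the passage: 300 of 500 left-handers carry
# L, versus 220 of 500 right-handers.
table = [[300, 200],   # left-handers: carriers, non-carriers
         [220, 280]]   # right-handers: carriers, non-carriers

chi2, p, dof, expected = chi2_contingency(table)
print(p)  # well below 0.0001, as the passage says
```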
pages: 205 words: 20,452 
Data Mining in Time Series Databases by Mark Last, Abraham Kandel, Horst Bunke Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
4chan, call centre, computer vision, discrete time, information retrieval, iterative process, NPcomplete, pvalue, pattern recognition, random walk, sensor fusion, speech recognition, web application
[Table: Results of the CD hypothesis testing on the 'Manufacturing' database, months 1–6 — error rates, d, H(95%), and 1 − p-value for the CD and XP tests; the numeric columns are not recoverable from this extraction.]
[Fig. 2: Summary of implementing the change detection methodology on the 'Manufacturing' database (1 − p-value).]
[Table 10: XP confidence level of all independent and dependent variables in the 'Manufacturing' database (1 − p-value).]
According to the change detection methodology, during all six consecutive months there was no significant change in the rules describing the relationships between the candidate and the target variables (which is our main interest). …
[Table 7 (G. Zeira, O. Maimon, M. Last and L. Rokach): CD and XP outcomes per period, with a change introduced in two of the seven periods.]
[Fig. 1: Summary of implementing the change detection methodology on an artificially generated time series database (1 − p-value).]
[Table 8: Influence of discarding the detected change (illustration).]
… [Table: Outcomes of XP by validating the sixth month on the fifth and the first month in the 'Manufacturing' database (p-value).]
[Fig. 3: CD confidence level (1 − p-value) outcomes of validating the sixth month on the fifth and the first month in the 'Manufacturing' database.]
pages: 204 words: 58,565 
Keeping Up With the Quants: Your Guide to Understanding and Using Analytics by Thomas H. Davenport, Jinho Kim Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
BlackScholes formula, business intelligence, business process, call centre, computer age, correlation coefficient, correlation does not imply causation, Credit Default Swap, en.wikipedia.org, feminist movement, Florence Nightingale: pie chart, forensic accounting, global supply chain, Hans Rosling, hypertext link, invention of the telescope, inventory management, Jeff Bezos, margin call, Moneyball by Michael Lewis explains big data, Netflix Prize, pvalue, performance metric, publish or perish, quantitative hedge fund, random walk, Renaissance Technologies, Robert Shiller, Robert Shiller, selfdriving car, sentiment analysis, six sigma, Skype, statistical model, supplychain management, text mining, the scientific method Rare or unusual data (often represented by a pvalue below a specified threshold) is an indication that H0 is false, which constitutes a statistically significant result and support of the alternative hypothesis. Independent variable: A variable whose value is known and used to help predict or explain a dependent variable. For example, if you wish to predict the quality of a vintage wine using various predictors (average growing season temperature, harvest rainfall, winter rainfall, and the age of the vintage), the various predictors would serve as independent variables. Alternative names are explanatory variable, predictor variable, and regressor. pvalue: When performing a hypothesis test, the pvalue gives the probability of data occurrence under the assumption that H0 is true. Small pvalues are an indication of rare or unusual data from H0, which in turn provides support that H0 is actually false (and thus support of the alternative hypothesis). … A value of 5 percent signifies that we need data that occurs less than 5 percent of the time from H0 (if H0 were indeed true) for us to doubt H0 and reject it as being true. 
In practice, this is often assessed by calculating a p-value; p-values less than alpha are an indication that H0 is rejected and the alternative supported. t-test or Student's t-test: A test statistic that tests whether the means of two groups are equal, or whether the mean of one group has a specified value. Type I error or α error: This error occurs when the null hypothesis is true, but it is rejected. In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level α. So, the probability of incorrectly rejecting a true null hypothesis equals α and thus this error is also called α error. a. For the descriptions in this section, we’ve referred to the pertinent definitions in Wikipedia, Heinz Kohler’s Statistics for Business and Economics (2002), and Dell’s Analytics Cheat Sheet (2012, Tables 6 and 8) … This response would not only have been reassuring to the wife but persuasive to her husband as well. In statistical hypothesis testing, the probability of 0.003 calculated above is called the p-value—the probability of obtaining a test statistic (e.g., Z-value of 2.75 in this case) at least as extreme as the one that was actually observed (a pregnancy that would last at least ten months and five days), assuming that the null hypothesis is true. In this example the null hypothesis (H0) is “This baby is my husband’s.” In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level. In this case a p-value of 0.003 would result in the rejection of the null hypothesis even at the 1 percent significance level—typically the lowest level anyone uses. Normally, then, we reject the null hypothesis that this baby is the San Diego Reader’s husband’s baby.
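The 0.003 figure is easy to reproduce: it is the upper-tail probability of a standard normal distribution at Z = 2.75. A quick sketch with scipy:

```python
from scipy.stats import norm

# One-tailed p-value for a Z-value of 2.75: the chance, under the
# null hypothesis, of a result at least this extreme.
z = 2.75
p = norm.sf(z)  # survival function, 1 - cdf
print(round(p, 3))  # 0.003
```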
pages: 561 words: 120,899 
The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy by Sharon Bertsch McGrayne Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
bioinformatics, British Empire, Claude Shannon: information theory, Daniel Kahneman / Amos Tversky, double helix, Edmond Halley, Fellow of the Royal Society, full text search, Henri Poincaré, Isaac Newton, John Nash: game theory, John von Neumann, linear programming, meta analysis, metaanalysis, Nate Silver, pvalue, placebo effect, prediction markets, RAND corporation, recommendation engine, Renaissance Technologies, Richard Feynman, Richard Feynman, Richard Feynman: Challenger Oring, Ronald Reagan, speech recognition, statistical model, stochastic process, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, Turing machine, Turing test, uranium enrichment, Yom Kippur War Newton, as Jeffreys pointed out, derived his law of gravity 100 years before Laplace proved it by discovering Jupiter’s and Saturn’s 877year cycle: “There has not been a single date in the history of the law of gravitation when a modern significance test would not have rejected all laws [about gravitation] and left us with no law.”50 Bayes, on the other hand, “makes it possible to modify a law that has stood criticism for centuries without the need to suppose that its originator and his followers were useless blunderers.”51 Jeffreys concluded that pvalues fundamentally distorted science. Frequentists, he complained, “appear to regard observations as a basis for possibly rejecting hypotheses, but in no case for supporting them.”52 But odds are that at least some of the hypotheses Fisher rejected were worth investigating or were actually true. A frequentist who tests a precise hypothesis and obtains a pvalue of .04, for example, can consider that significant evidence against the hypothesis. But Bayesians say that even with a .01 pvalue (which many frequentists would see as extremely strong evidence against a hypothesis) the odds in its favor are still 1 to 9 or 10—“not earthshaking,” says Jim Berger, a Bayesian theorist at Duke University. Pvalues still irritate Bayesians. 
Steven N. Goodman, a distinguished Bayesian biostatistician at Johns Hopkins Medical School, complained in 1999, “The pvalue is almost nothing sensible you can think of. … As the statistician Dennis Lindley wrote, Jeffreys “would admit a probability for the existence of the greenhouse effect, whereas most [frequentist] statisticians would not and would confine their probabilities to the data on CO2, ozone, heights of the oceans, etc.”49 Jeffreys was particularly annoyed by Fisher’s measures of uncertainty, his “pvalues” and significance levels. The pvalue was a probability statement about data, given the hypothesis under consideration. Fisher had developed them for dealing with masses of agricultural data; he needed some way to determine which should be trashed, which filed away, and which followed up on immediately. Comparing two hypotheses, he could reject the chaff and save the wheat. Technically, pvalues let laboratory workers state that their experimental outcome offered statistically significant evidence against a hypothesis if the outcome (or a more extreme outcome) had only a small probability (under the hypothesis) of having occurred by chance alone. … Jahn reported that the random event generator produced 18,471 more examples (0.018%) of human influence on his sensitive microelectronic equipment than could be expected with chance alone. Even with a pvalue as small as 0.00015, the frequentist would reject the hypothesis (and conclude in favor of psychokinetic powers) while the same evidence convinces a Bayesian that the hypothesis against spiritualism is almost certainly true. Six years later, Jimmie Savage, Harold Lindman, and Ward Edwards at the University of Michigan showed that results using Bayes and the frequentist’s pvalues could differ by significant amounts even with everydaysized data samples; for instance, a Bayesian with any sensible prior and a sample of only 20 would get an answer ten times or more larger than the pvalue. 
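One well-known calibration behind remarks like Berger's is the Sellke–Bayarri–Berger bound −e·p·ln(p), a lower bound on the Bayes factor in favor of a point null hypothesis (valid for p < 1/e). Whether this is the exact calculation Berger had in mind is our assumption, but it reproduces odds of the same "not earthshaking" order:

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor
    in favor of a point null hypothesis (requires p < 1/e)."""
    return -math.e * p * math.log(p)

for p in (0.05, 0.01):
    bf = min_bayes_factor(p)
    # 1/bf is the largest possible odds against the null.
    print("p = %.2f -> minimum Bayes factor %.3f (at most %.1f to 1 against)"
          % (p, bf, 1.0 / bf))
```

For p = 0.01 the bound is about 0.125, i.e. odds of at best roughly 8 to 1 against the null, of the same order as the "1 to 9 or 10" figure quoted above.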
Lindley ran afoul of Fisher’s temper when he reviewed Fisher’s third book and found “what I thought was a very basic, serious error in it: Namely, that [Fisher’s] fiducial probability doesn’t obey the rules of probability. 
pages: 321 words: 97,661 
How to Read a Paper: The Basics of EvidenceBased Medicine by Trisha Greenhalgh Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
call centre, complexity theory, conceptual framework, correlation coefficient, correlation does not imply causation, deskilling, knowledge worker, meta analysis, metaanalysis, microbiome, New Journalism, pvalue, personalized medicine, placebo effect, randomized controlled trial, the scientific method In order to demonstrate that A has caused B (rather than B causing A, or A and B both being caused by C), you need more than a correlation coefficient. Box 5.1 gives some criteria, originally developed by Sir Austin Bradford Hill [14], which should be met before assuming causality. Probability and confidence Have ‘pvalues’ been calculated and interpreted appropriately? One of the first values a student of statistics learns to calculate is the pvalue—that is the probability that any particular outcome would have arisen by chance. Standard scientific practice, which is essentially arbitrary, usually deems a pvalue of less than one in twenty (expressed as p < 0.05, and equivalent to a betting odds of twenty to one) as ‘statistically significant’, and a pvalue of less than one in a hundred (p < 0.01) as ‘statistically highly significant’. By definition, then, one chance association in twenty (this must be around one major published result per journal issue) will appear to be significant when it isn't, and one in a hundred will appear highly significant when it is really what my children call a ‘fluke’. … A result in the statistically significant range (p < 0.05 or p < 0.01 depending on what you have chosen as the cutoff) suggests that the authors should reject the null hypothesis (i.e. the hypothesis that there is no real difference between two groups). But as I have argued earlier (see section ‘Were preliminary statistical questions addressed?’), a pvalue in the nonsignificant range tells you that either there is no difference between the groups or there were too few participants to demonstrate such a difference if it existed. It does not tell you which. 
The p-value has a further limitation. Guyatt and colleagues conclude thus, in the first article of their ‘Basic Statistics for Clinicians’ series on hypothesis testing using p-values: Why use a single cut-off point [for statistical significance] when the choice of such a point is arbitrary? Why make the question of whether a treatment is effective a dichotomy (a yes-no decision) when it would be more appropriate to view it as a continuum? … If they are not, a paired t or other paired test should be used instead. 3. Only a single pair of measurements should be made on each participant, as the measurements made on successive participants need to be statistically independent of each other if we are to end up with unbiased estimates of the population parameters of interest. 4. Every r-value should be accompanied by a p-value, which expresses how likely an association of this strength would be to have arisen by chance (see section ‘Have ‘p-values’ been calculated and interpreted appropriately?’), or a confidence interval, which expresses the range within which the ‘true’ R-value is likely to lie (see section ‘Have confidence intervals been calculated, and do the authors' conclusions reflect them?’). (Note that lower case ‘r’ represents the correlation coefficient of the sample, whereas upper case ‘R’ represents the correlation coefficient of the entire population.) 
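The pairing of an r-value with its p-value that the excerpt recommends can be sketched in a few lines. This is my own illustration, not code from the book; the data are synthetic and the variable names are hypothetical.

```python
# A minimal sketch (not from the book): computing a sample correlation
# coefficient r together with the p-value for the null of no association.
# The data here are fabricated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=0.8, size=50)   # correlated by construction

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p-value = {p:.4f}")
```

With a genuine association built into the data, the p-value comes out small; reporting r alone would hide how much evidence the sample actually provides.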
pages: 339 words: 112,979 
Unweaving the Rainbow by Richard Dawkins
Any sufficiently advanced technology is indistinguishable from magic, Arthur Eddington, complexity theory, correlation coefficient, David Attenborough, discovery of DNA, double helix, Douglas Engelbart, I think there is a world market for maybe five computers, Isaac Newton, Jaron Lanier, Mahatma Gandhi, music of the spheres, Necker cube, p-value, phenotype, Ralph Waldo Emerson, Richard Feynman, Ronald Reagan, Solar eclipse in 1919, Steven Pinker, Zipf's Law When we say that an effect is statistically significant, we must always specify a so-called p-value. This is the probability that a purely random process would have generated a result at least as impressive as the actual result. A p-value of 2 in 10,000 is pretty impressive, but it is still possible that there is no genuine pattern there. The beauty of doing a proper statistical test is that we know how probable it is that there is no genuine pattern there. Conventionally, scientists allow themselves to be swayed by p-values of 1 in 100, or even as high as 1 in 20: far less impressive than 2 in 10,000. What p-value you accept depends upon how important the result is, and upon what decisions might follow from it. If all you are trying to decide is whether it is worth repeating the experiment with a larger sample, a p-value of 0.05, or 1 in 20, is quite acceptable. … Even though there is a 1 in 20 chance that your interesting result would have happened anyway by chance, not much is at stake: the error is not a costly one. If the decision is a life and death matter, as in some medical research, a much lower p-value than 1 in 20 should be sought. The same is true of experiments that purport to show highly controversial results, such as telepathy or 'paranormal' effects. As we briefly saw in connection with DNA fingerprinting, statisticians distinguish false positive from false negative errors, sometimes called type 1 and type 2 errors respectively. 
A type 2 error, or false negative, is a failure to detect an effect when there really is one. A type 1 error, or false positive, is the opposite: concluding that there really is something going on when actually there is nothing but randomness. The p-value is the measure of the probability that you have made a type 1 error. Statistical judgement means steering a middle course between the two kinds of error. … Birds may be programmed to learn to adjust their policy as a result of their statistical experience. Whether they learn or not, successfully hunting animals must usually behave as if they are good statisticians. (I hope it is not necessary, by the way, to plod through the usual disclaimer: no, no, the birds aren't consciously working it out with calculator and probability tables. They are behaving as if they were calculating p-values. They are no more aware of what a p-value means than you are aware of the equation for a parabolic trajectory when you catch a cricket ball or baseball in the outfield.) Angler fish take advantage of the gullibility of little fish such as gobies. But that is an unfairly value-laden way of putting it. It would be better not to speak of gullibility and say that they exploit the inevitable difficulty the little fish have in steering between type 1 and type 2 errors. 
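The two error types the excerpt describes can be seen directly by simulation. This is my own rough sketch, not from the book; the sample sizes, effect size, and number of trials are arbitrary choices.

```python
# A rough simulation sketch (assumed numbers, not from the book): under a
# true null hypothesis, a test at p < 0.05 commits a type 1 error about 5%
# of the time; when a real effect exists, the misses are type 2 errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, trials, n = 0.05, 2000, 30

# No real effect (mean 0): rejections here are type 1 errors (false positives).
null_p = [stats.ttest_1samp(rng.normal(0.0, 1, n), 0).pvalue for _ in range(trials)]
type1_rate = np.mean(np.array(null_p) < alpha)

# Real effect (mean 0.5): failures to reject are type 2 errors (false negatives).
alt_p = [stats.ttest_1samp(rng.normal(0.5, 1, n), 0).pvalue for _ in range(trials)]
type2_rate = np.mean(np.array(alt_p) >= alpha)

print(f"type 1 error rate ~ {type1_rate:.3f} (should sit near {alpha})")
print(f"type 2 error rate ~ {type2_rate:.3f}")
```

Tightening the threshold (a smaller alpha) pushes the type 1 rate down but the type 2 rate up, which is exactly the middle course Dawkins says judgement must steer.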
pages: 227 words: 62,177 
Numbers Rule Your World: The Hidden Influence of Probability and Statistics on Everything You Do by Kaiser Fung
American Society of Civil Engineers: Report Card, Andrew Wiles, Bernie Madoff, Black Swan, call centre, correlation does not imply causation, cross-subsidies, Daniel Kahneman / Amos Tversky, edge city, Emanuel Derman, facts on the ground, Gary Taubes, John Snow's cholera map, moral hazard, p-value, pattern recognition, profit motive, Report Card for America’s Infrastructure, statistical model, the scientific method, traveling salesman The minute probability he computed, one in a quindecillion, is technically known as the p-value and signifies how unlikely the situation was. The smaller the p-value, the more impossible the situation, and the greater its power to refute the no-fraud scenario. Then, statisticians say, the result has statistical significance. Note that this is a matter of magnitude, rather than direction. If the p-value were 20 percent, then there would be a one-in-five chance of seeing at least 200 insider wins in seven years despite absence of fraud, and then Rosenthal would not have sufficient evidence to overturn the fair-lottery hypothesis. Statisticians set a minimum acceptable standard of evidence, which is a p-value of 1 percent or 5 percent. This practice originated with Sir Ronald Fisher, one of the giants of statistical thinking. For a more formal treatment of p-values and statistical significance, look up the topics of hypothesis testing and confidence intervals in a statistics textbook. … In Minnesota, an ambitious experiment was organized to measure how turning off ramp meters on the highway entrances would affect the state of congestion. From the viewpoint of statistical testing, the doubters led by Senator Day wanted to know: if ramp metering was useless, what was the likelihood that the average trip time would rise by 22 percent (the improvement claimed by engineers who run the program) after the meters were shut off? 
Because this likelihood, or p-value, was small, the consultants who analyzed the experiment concluded that the favorite tool of the traffic engineers was indeed effective at reducing congestion. Since statisticians do not believe in miracles, they avoided the alternative path, which would assert that a rare event—rather than the shutting off of ramp meters—could have produced the deterioration in travel time during the experiment. … See also False negatives; False positives confessions elicited by, 118, 120–21, 125–27, 130 countermeasures, 114, 122 examiner characteristics and role, 113–14 the legal system on, 117–18 major problems with, 129–30 in national-security screening, 96–97, 118, 121–24, 127–30, 175–76 PCASS, 118, 121–24, 127–30, 131, 132, 175 popularity of, 116–18 screening vs. targeted investigation, 123–24 Pre–post analysis, 158–59 Predictably Irrational (Ariely), 158 Prediction of rare events, 124 PulseNet, 31, 41 P-value, 179, 180 Quetelet, Adolphe, 2–3, 4 Queuing theory, 157–58 Quindecillion, 137, 144, 177 Racial/minority groups credit scores and, 52, 54 test fairness and, 64, 65, 70, 72–82, 94, 168–70, 180 Ramp metering, 13–15, 16, 19, 20–24, 157, 158–59, 180–81 Randomization, 170 Rauch, Ernst, 87 Red State, Blue State, Rich State, Poor State (Gelman), 168 Reliability, 10, 12, 14, 19 Riddick, Steve, 105 Riis, Bjarne, 103, 105, 110 Risk Management Solutions, 87 Risk pools, 86–87, 89–94, 168, 171 Rodriguez, Alex, 114 Rodriguez, Ivan, 114 Rolfs, Robert, 36 Rooney, J. 
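The tail-probability reasoning in Fung's lottery example can be sketched as a binomial upper tail. The numbers below are entirely hypothetical stand-ins (the excerpt does not give Rosenthal's actual counts, only the resulting one-in-a-quindecillion figure), so treat this as the shape of the calculation, not his calculation.

```python
# Sketch of the "at least k insider wins" p-value under a fair-lottery
# hypothesis. All three numbers are hypothetical, chosen only to
# illustrate the logic; they are not the figures from the book.
from scipy import stats

n = 5000          # hypothetical number of insider ticket purchases
p_win = 0.001     # hypothetical per-ticket chance of a major win
k = 30            # hypothetical observed number of insider wins

# P(X >= k) when there is no fraud: the upper tail of Binomial(n, p_win)
p_value = stats.binom.sf(k - 1, n, p_win)
print(f"p-value = {p_value:.3g}")
```

With an expected five wins and thirty observed, the tail probability is astronomically small, which is precisely what gives the result its power to refute the no-fraud scenario.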
pages: 322 words: 107,576 
Bad Science by Ben Goldacre
Asperger Syndrome, correlation does not imply causation, experimental subject, Ignaz Semmelweis: hand washing, John Snow's cholera map, Louis Pasteur, meta-analysis, offshore financial centre, p-value, placebo effect, Richard Feynman, risk tolerance, Ronald Reagan, the scientific method, urban planning In generating his obligatory, spurious, Meadowesque figure—which this time was ‘one in 342 million’—the prosecution’s statistician made a simple, rudimentary mathematical error. He combined individual statistical tests by multiplying p-values, the mathematical description of chance, or statistical significance. This bit’s for the hardcore science nerds, and will be edited out by the publisher, but I intend to write it anyway: you do not just multiply p-values together, you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’. If you multiply p-values together, then harmless and probable incidents rapidly appear vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a harmless incident pattern: say p = 0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of 0.5 to the power of twenty, which is p < 0.000001, which is extremely, very, highly statistically significant. … Presented with a small increase like this, you have to think: is it statistically significant? I did the maths, and the answer is yes, it is, in that you get a p-value of less than 0.05. What does ‘statistically significant’ mean? It’s just a way of expressing the likelihood that the result you got was attributable merely to chance. Sometimes you might throw ‘heads’ five times in a row, with a completely normal coin, especially if you kept tossing it for long enough. Imagine a jar of 980 blue marbles, and twenty red ones, all mixed up: every now and then—albeit rarely—picking blindfolded, you might pull out three red ones in a row, just by chance. 
The standard cut-off point for statistical significance is a p-value of 0.05, which is just another way of saying, ‘If I did this experiment a hundred times, I’d expect a spurious positive result on five occasions, just by chance.’ … Will our increase in cocaine use, already down from ‘doubled’ to ‘35.7 per cent’, even survive? No. Because there is a final problem with this data: there is so much of it to choose from. There are dozens of data points in the report: on solvents, cigarettes, ketamine, cannabis, and so on. It is standard practice in research that we only accept a finding as significant if it has a p-value of 0.05 or less. But as we said, a p-value of 0.05 means that for every hundred comparisons you do, five will be positive by chance alone. From this report you could have done dozens of comparisons, and some of them would indeed have shown increases in usage—but by chance alone, and the cocaine figure could be one of those. If you roll a pair of dice often enough, you will get a double six three times in a row on many occasions. 
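Goldacre's twenty-hospitals example can be checked in a few lines. This is my own sketch using his numbers (twenty chance-level results of p = 0.5), contrasting the naive product with Fisher's method for combining independent p-values.

```python
# A sketch of Goldacre's point: twenty harmless, chance-level results of
# p = 0.5 look "significant" if you naively multiply them, but Fisher's
# method for combining independent p-values correctly finds nothing odd.
import numpy as np
from scipy import stats

p_values = [0.5] * 20

naive_product = np.prod(p_values)   # 0.5 ** 20, on the order of 1e-6
stat, fisher_p = stats.combine_pvalues(p_values, method="fisher")

print(f"naive product    = {naive_product:.2g}")
print(f"Fisher combined p = {fisher_p:.2f}")
```

The product is under one in a million, while Fisher's combined p-value is far above any significance threshold: the twenty findings, taken together, are exactly as unremarkable as each one individually.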
pages: 62 words: 14,996 
SciPy and NumPy by Eli Bressert
import numpy as np
from scipy import stats

# Generating a normal distribution sample
# with 100 elements
sample = np.random.randn(100)

# normaltest tests the null hypothesis.
out = stats.normaltest(sample)
print('normaltest output')
print('Z-score = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# kstest is the Kolmogorov-Smirnov test for goodness of fit.
# Here its sample is being tested against the normal distribution.
# D is the KS statistic and the closer it is to 0 the better.
out = stats.kstest(sample, 'norm')
print('\nkstest output for the Normal distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# Similarly, this can be easily tested against other distributions,
# like the Wald distribution.
out = stats.kstest(sample, 'wald')
print('\nkstest output for the Wald distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

Researchers commonly use descriptive functions for statistics. Some descriptive functions that are available in the stats package include the geometric mean (gmean), the skewness of a sample (skew), and the frequency of values in a sample (itemfreq). 
pages: 119 words: 10,356 
Topics in Market Microstructure by Ilija I. Zovko
Brownian motion, continuous double auction, correlation coefficient, financial intermediation, Gini coefficient, market design, market friction, market microstructure, Murray Gell-Mann, p-value, quantitative trading / quantitative finance, random walk, stochastic process, stochastic volatility, transaction costs Significant slope coefficients show that if two institutions’ strategies were correlated in one month, they are likely to be correlated in the next one as well. The table does not contain the off-book market because we cannot reconstruct institution codes for the off-book market in the same way as we can for the on-book market. The ± values are the standard error of the coefficient estimate and the values in the parenthesis are the standard p-values.

On-book market
Stock  Intercept              Slope               R²
AAL    0.010 ± 0.004 (0.02)   0.25 ± 0.04 (0.00)  0.061
AZN    0.01  ± 0.003 (0.00)   0.14 ± 0.03 (0.00)  0.019
LLOY   0.003 ± 0.003 (0.28)   0.23 ± 0.02 (0.00)  0.053
VOD    0.008 ± 0.001 (0.00)   0.17 ± 0.01 (0.00)  0.029

… does not work for institutions that do not trade frequently. Therefore, the results reported in this section concern only the on-book market and are based mostly on more active institutions. … Some of the explanatory variables, such as signed trades and signed volume, are strongly correlated. This may lead to instabilities in coefficient estimates for those variables and we need to keep this in mind when interpreting results. The results for the on- and off-book markets, as well as for the daily and hourly returns, are collected in the table below. Apart from the value of the coefficient, its error and p-value, we list also Rs² and Rp². Rs² is the value of R-square of a regression with only the selected variable, and no others, included. Its square root is equal to the absolute value of the correlation between the variable and the returns.

Table 5.2: Regression results showing the significance of the market imbalance variables on price returns. Columns from left to right are the estimated coefficient, its error and in the parenthesis the p-value of the test that the coefficient is zero assuming normal statistics; Rs² is the value of R² in a regression where only the selected variable is present in the regression. It expresses how much the variable on its own (solo) explains price returns. Final column Rp² is the partial R² of the selected variable. It expresses how much the variable explains price returns above the other three variables.

Hourly                  On-book market                    Off-book market
                        Coef.  Error       Rs²   Rp²      Coef.   Error        Rs²   Rp²
δV (signed volume)      3.03   0.05 (0.0)  0.24  0.06     0.075   0.006 (0.0)  0.00  0.00
δE (entropy)            0.54   0.04 (0.0)  0.04  0.00     0.019   0.008 (0.0)  0.00  0.00
δN (no. firms)          1.23   0.04 (0.0)  0.19  0.01     0.031   0.009 (0.0)  0.00  0.00
δT (no. signed trades)  1.46   0.05 (0.0)  0.06  0.01     0.033   0.008 (0.0)  0.00  0.00
Overall                 R² = 0.28                         R² = 0.00

Daily                   On-book market                    Off-book market
                        Coef.  Error       Rs²   Rp²      Coef.   Error        Rs²   Rp²
δV (signed volume)      0.40   0.01 (0.0)  0.21  0.10     0.104   0.007 (0.0)  0.05  0.02
δE (entropy)            0.20   0.01 (0.0)  0.12  0.03     0.039   0.008 (0.0)  0.03  0.00
δN (no. firms)          0.17   0.01 (0.0)  0.15  0.02     0.050   0.009 (0.0)  0.03  0.00
δT (no. signed trades)  0.00   0.01 (0.7)  0.07  0.00     0.008   0.008 (0.3)  0.02  0.00
Overall                 R² = 0.32                         R² = 0.07
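The two R² columns described in the table caption (solo Rs² and partial Rp²) can be illustrated with a small regression sketch. This is my own construction on synthetic data, not the thesis's data or code.

```python
# A sketch (my own, with made-up data) of the caption's two statistics:
# Rs² is the R² from regressing returns on one variable alone; partial
# Rp² is the extra variance explained when that variable is added to a
# regression that already contains the other regressors.
import numpy as np

def r2(X, y):
    # R-squared of an OLS fit of y on X (with an intercept)
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 4))                  # four stand-in regressors
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

rs2 = r2(X[:, [0]], y)                       # "solo" R² of the first variable
r2_full = r2(X, y)
r2_rest = r2(X[:, 1:], y)
rp2 = (r2_full - r2_rest) / (1 - r2_rest)    # partial R² of the first variable

print(f"Rs2 = {rs2:.3f}, partial Rp2 = {rp2:.3f}")
```

When regressors are strongly correlated, as the excerpt warns for signed trades and signed volume, a variable can have a large solo Rs² but a small partial Rp², because the other variables already carry most of its information.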

The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk by William J. Bernstein
asset allocation, backtesting, capital asset pricing model, computer age, correlation coefficient, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, fixed income, index arbitrage, index fund, Long Term Capital Management, p-value, passive investing, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, South Sea Bubble, the scientific method, time value of money, transaction costs, Vanguard fund, Yogi Berra, zero-coupon bond In which case he is probably not skilled, since it would not be unusual for 1 of 30 individuals to experience a 1.1% random event. On the other hand, if his performance measured is out of sample—that is, we had picked him alone among his teammates—then he probably is skilled, since we would have only one chance at a 1.1% occurrence in a random batting world. An only slightly more complex formulation is used to evaluate money managers. One has to be extremely careful to distinguish out-of-sample from in-sample performance. One should not be surprised if one picks out the best-performing manager out of 500 and finds that his p-value is .001. However, if one identifies him ahead of time, and then his performance p-value is .001 after the fact, then he probably is skilled. Table 6-1. 
Subsequent Performance of Top Performing Funds, 1970–1998

                         Return 1970–1974   Return 1975–1998
Top 30 funds 1970–1974        0.78%             16.05%
All funds                    −6.12%             16.38%
S&P 500                      −2.35%             17.04%

                         Return 1975–1979   Return 1980–1998
Top 30 funds 1975–1979       35.70%             15.78%
All funds                    20.44%             15.28%
S&P 500                      14.76%             17.67%

                         Return 1980–1984   Return 1985–1998
Top 30 funds 1980–1984       22.51%             16.01%
All funds                    14.83%             15.59%
S&P 500                      14.76%             18.76%

                         Return 1985–1989   Return 1990–1998
Top 30 funds 1985–1989       22.08%             16.24%
All funds                    16.40%             15.28%
S&P 500                      20.41%             17.81%

                         Return 1990–1994   Return 1995–1998
Top 30 funds 1990–1994       18.94%             21.28%
All funds                     9.39%             24.60%
S&P 500                       8.69%             32.18%

SOURCE: DFA/Micropal/Standard and Poor’s.

… In other words, in a random world an annual SD of 20 points (.020) translates into an SD of 6.3 points (.0063) over 10 years, since .020/√10 = .0063. The difference between the batter’s performance and the mean is .020, and dividing that by the SE of .0063 gives a “z value” of 3.17. Since we are considering 10 years’ performance, there are 9 “degrees of freedom.” The z value and degrees of freedom are fed into a “t distribution function” on our spreadsheet, and out pops a p-value of .011. In other words, in a “random batting” world, there is a 1.1% chance of a given batter averaging .280 over 10 seasons. Whether or not we consider such a batter skilled also depends on whether we are observing him “in sample” or “out of sample.” In sample means that we picked him out of a large number of batters—say, all of his teammates—after the fact. In which case he is probably not skilled, since it would not be unusual for 1 of 30 individuals to experience a 1.1% random event. On the other hand, if his performance measured is out of sample—that is, we had picked him alone among his teammates—then he probably is skilled, since we would have only one chance at a 1.1% occurrence in a random batting world. 
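Bernstein's spreadsheet arithmetic can be reproduced as a short sketch. I am assuming, as Excel's TDIST-style calculation does, a two-tailed probability; that assumption is mine, not stated in the excerpt.

```python
# Reproducing the batter arithmetic as a sketch: a .280 average over 10
# seasons against a .260 mean with a .020 annual SD. The two-tailed
# convention is an assumption on my part.
from math import sqrt
from scipy import stats

se = 0.020 / sqrt(10)          # SD of a 10-season average: about .0063
z = 0.020 / se                 # (.280 - .260) / SE, about 3.16
p = 2 * stats.t.sf(z, df=9)    # two-tailed p-value with 9 degrees of freedom

print(f"z = {z:.2f}, p = {p:.3f}")   # lands close to the book's .011
```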
… The alpha is the difference between the fund’s performance and that of the regression-determined benchmark and a measure of how well the manager has performed. It is expressed the same way as return, in percent per year, and can be positive or negative. For example, if a manager has an alpha of −4% per year this means that the manager has underperformed the regression-determined benchmark by 4% annually. Oakmark’s alpha for the first 29 months is truly spectacular, and quite statistically significant, with a p-value of .0004. This means that there was less than a 1-in-2000 possibility that the fund’s superb performance in the first 29 months could have been due to chance. Unfortunately, its performance in the last 29-month period was equally impressive, but in the wrong direction. My interpretation of the above data is that Mr. Sanborn is modestly skilled. “Modestly skilled” is not at all derogatory in this context, since 99% of fund managers demonstrate no evidence of skill whatsoever. 
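The in-sample selection effect Bernstein warns about (picking the best of 500 managers after the fact) is easy to quantify. This one-liner is my own sketch of his point, not a calculation from the book.

```python
# A sketch of the in-sample selection effect: if you scan 500 managers
# for one whose p-value is .001 or better, the chance that at least one
# clears that bar by luck alone is far from negligible.
chance_any = 1 - (1 - 0.001) ** 500
print(f"P(best of 500 shows p <= .001 by chance) = {chance_any:.2f}")
```

Roughly two times in five, the "best" of 500 unskilled managers will look significant at p = .001, which is why only an out-of-sample p-value of .001 is persuasive.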
pages: 197 words: 35,256 
NumPy Cookbook by Ivan Idris
business intelligence, cloud computing, computer vision, Debian, en.wikipedia.org, Eratosthenes, mandelbrot fractal, p-value, sorting algorithm, statistical model, transaction costs, web application Again, we will calculate the log returns of the close price of this stock, and use that as an input for the normality test function. This function returns a tuple containing a second element—a p-value between zero and one. The complete code for this tutorial is as follows:

import datetime
import numpy
from matplotlib import finance
from statsmodels.stats.adnorm import normal_ad
import sys

#1. Download price data
# 2011 to 2012
start = datetime.datetime(2011, 01, 01)
end = datetime.datetime(2012, 01, 01)
print "Retrieving data for", sys.argv[1]
quotes = finance.quotes_historical_yahoo(sys.argv[1], start, end, asobject=True)
close = numpy.array(quotes.close).astype(numpy.float)
print close.shape
print normal_ad(numpy.diff(numpy.log(close)))

The following shows the output of the script with a p-value of 0.13:

Retrieving data for AAPL
(252,)
(0.57103805516803163, 0.13725944999430437)

How it works... This recipe demonstrated the Anderson-Darling statistical test for normality, as found in scikits.statsmodels. We used the stock price data, which does not have a normal distribution, as input. For the data, we got a p-value of 0.13. Since 0.13 is well above the usual 0.05 significance level, the test fails to reject normality for these log returns. 
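The recipe's APIs (matplotlib.finance, the old scikits.statsmodels layout) are dated, so as a rough modern stand-in here is the same log-returns workflow on a synthetic price series, checked with SciPy's D'Agostino normality test. Note the swap: this is a different normality test from the book's Anderson-Darling normal_ad, but its output is read the same way (the second value is the p-value), and the price data here are fabricated.

```python
# A hedged modern sketch of the recipe's workflow: build log returns from
# a (synthetic) price series and feed them to a normality test. This uses
# scipy.stats.normaltest rather than the book's normal_ad.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
log_returns = rng.normal(0.0005, 0.02, size=252)   # synthetic daily log returns
prices = 100 * np.exp(np.cumsum(log_returns))      # implied price path

stat, p_value = stats.normaltest(np.diff(np.log(prices)))
print(f"statistic = {stat:.3f}, p-value = {p_value:.3f}")
```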
Installing scikits-image scikits-image is a toolkit for image processing, which requires PIL, SciPy, Cython, and NumPy. There are Windows installers available for it. It is part of Enthought Python Distribution, as well as the Python(x, y) distribution. How to do it... As usual, we can install using either of the following two commands:

pip install -U scikits-image
easy_install -U scikits-image

Again, you might need to run these commands as root. 
pages: 755 words: 121,290 
Statistics hacks by Bruce Frey
Berlin Wall, correlation coefficient, Daniel Kahneman / Amos Tversky, distributed generation, en.wikipedia.org, feminist movement, game design, Hacker Ethic, index card, Milgram experiment, p-value, placemaking, RFID, Search for Extraterrestrial Intelligence, SETI@home, Silicon Valley, statistical model Power In social science research, a statistical analysis frequently determines whether a certain value observed in a sample is likely to have occurred by chance. This process is called a test of significance. Tests of significance produce a p-value (probability value), which is the probability that the sample value could have been drawn from a particular population of interest. The lower the p-value, the more confident we are in our beliefs that we have achieved statistical significance and that our data reveals a relationship that exists not only in our sample but also in the whole population represented by that sample. Usually, a predetermined level of significance is chosen as a standard for what counts. If the eventual p-value is equal to or lower than that predetermined level of significance, then the researcher has achieved a level of significance. Statistical analyses and tests of significance are not limited to identifying relationships among variables, but the most common analyses (t tests, F tests, chi-squares, correlation coefficients, regression equations, etc.) usually serve this purpose. … The power of a statistical test is the probability that, given that there is a relationship among variables in the population, the statistical analysis will result in the decision that a level of significance has been achieved. Notice this is a conditional probability. There must be a relationship in the population to find; otherwise, power has no meaning. Power is not the chance of finding a significant result; it is the chance of finding that relationship if it is there to find. 
The formula for power contains three components:

Sample size
The predetermined level of significance (p-value) to beat (be less than)
The effect size (the size of the relationship in the population)

Conducting a Power Analysis Let's say we want to compare two different sample groups and see whether they are different enough that there is likely a real difference in the populations they represent. For example, suppose you want to know whether men or women sleep more. The design is fairly straightforward. … Using this option on the Tools menu, you can test the significance of the regression coefficient using an F test, a statistical test similar to a t test [Hack #17]. The results (a.k.a. the output) are shown in Tables 5-11 and 5-12. Let's see which of the variables best assist us in predicting whether a team will win the Super Bowl.

Table 5-11. Regression statistics
Multiple R    0.8483
R square      0.7196
Observations  30

Table 5-12. Regression equation
Variable       Coefficient  T stat  P-value
Intercept      0.784        1.010   0.323
Easy wins      0.119        4.274   0.000
Attendance     0.000        0.822   0.416
Hot dogs sold  0.000        1.043   0.308
Gatorade       0.013        2.457   0.022
Weight         0.001        0.580   0.567

Table 5-12 shows a coefficient (a weight) for each of the five variables that were entered into the equation to test how well each one predicts Super Bowl wins. For example, the coefficient associated with "Easy wins" is .119. 
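The three components listed above (sample size, significance level, effect size) are enough to compute power directly. This is my own sketch for a two-sided, two-sample t-test, using the textbook benchmark of a medium effect (d = 0.5) at alpha = .05; none of these numbers come from the hack itself.

```python
# A sketch of a power analysis from the three listed components, via the
# noncentral t distribution. Assumed example numbers: d = 0.5, alpha = .05.
from math import sqrt
from scipy import stats

def power_two_sample_t(n_per_group, d, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test."""
    df = 2 * n_per_group - 2
    delta = d * sqrt(n_per_group / 2)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Probability the test statistic lands beyond either critical value
    return stats.nct.sf(t_crit, df, delta) + stats.nct.cdf(-t_crit, df, delta)

print(f"power = {power_two_sample_t(64, 0.5):.2f}")
```

With 64 people per group and a medium effect, power comes out near the conventional 0.80 target; shrink the sample or the effect size and the chance of finding the relationship, even though it is there to find, drops accordingly.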
pages: 836 words: 158,284 
The 4-Hour Body: An Uncommon Guide to Rapid Fat-Loss, Incredible Sex, and Becoming Superhuman by Timothy Ferriss
23andMe, airport security, Albert Einstein, Black Swan, Buckminster Fuller, carbon footprint, cognitive dissonance, Columbine, correlation does not imply causation, Dean Kamen, game design, Gary Taubes, index card, Kevin Kelly, knowledge economy, life extension, Mahatma Gandhi, microbiome, p-value, Parkinson's law, Paul Buchheit, placebo effect, Productivity paradox, publish or perish, Ralph Waldo Emerson, Ray Kurzweil, Richard Feynman, Silicon Valley, Silicon Valley startup, Skype, stem cell, Steve Jobs, Thorstein Veblen, wage slave, William of Occam Let the journals catch up later—you don’t have to wait. P-Value: One Number to Understand Statistical thinking will one day be as necessary for effective citizenship as the ability to read and write. —H. G. Wells, who created national hysteria with his radio adaptation of his science fiction book The War of the Worlds British MD and quack buster Ben Goldacre, contributor of the next chapter, is well known for illustrating how people can be fooled by randomness. He uses the following example: If you go to a cocktail party, what’s the likelihood that two people in a group of 23 will share the same birthday? One in 100? One in 50? In fact, it’s one in two. Fifty percent. To become better at spotting randomness for what it is, it’s important to understand the concept of “p-value,” which you’ll see in all good research studies. It answers the question: how confident are we that this result wasn’t due to random chance? To demonstrate (or imply) cause-and-effect, the gold standard for studies is a p-value of less than 0.05 (p < 0.05), which means a less than 5% likelihood that the result can be attributed to chance. 
A p-value of less than 0.05 is also what most scientists mean when they say something is “statistically significant.” An example makes this easy to understand. Let’s say you are a professional coin flipper, but you’re unethical. In hopes of dominating the coin-flipping gambling circuit, you’ve engineered a quarter that should come up heads more often than a normal quarter. To test it, you flip it and a normal quarter 100 times, and the results seem clear: the “normal” quarter came up heads 50 times, and your designer quarter came up heads 60 times! … In other words, you better make sure that 20% holds up with at least 453 flips with each coin. In this case, 10 extra flips out of 100 doesn’t prove cause-and-effect at all. Three points to remember about p-values and “statistical significance”:
• Just because something seems miraculous doesn’t mean it is. People are fooled by randomness all the time, as in the birthday example.
• The larger the difference between groups, the smaller the groups can be. Critics of small trials or self-experimentation often miss this. If something appears to produce a 300% change, you don’t need that many people to show significance, assuming you’re controlling variables.
• It is not kosher to combine p-values from multiple experiments to make something more or less believable. That’s another trick of bad scientists and mistake of uninformed journalists.
TOOLS AND TRICKS The Black Swan by Nassim Taleb (www.fourhourbody.com/blackswan) Taleb, also author of the bestseller Fooled by Randomness, is the reigning king when it comes to explaining how we fool ourselves and how we can limit the damage. 
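Both of the chapter's examples can be checked with a short sketch of my own: the birthday coincidence among 23 guests, and a one-sided p-value for the 60-heads outcome of the hypothetical doctored quarter (the one-sided convention is my assumption).

```python
# Checking the two examples as a sketch. The coin p-value is one-sided
# (an assumption on my part): the chance a fair coin gives 60 or more
# heads in 100 flips.
import numpy as np
from scipy import stats

# P(at least one shared birthday among 23 people), ignoring leap years
no_match = np.prod([(365 - i) / 365 for i in range(23)])
birthday = 1 - no_match

coin_p = stats.binom.sf(59, 100, 0.5)

print(f"shared birthday: {birthday:.3f}")          # about one in two
print(f"P(>= 60 heads | fair coin) = {coin_p:.3f}")
```

The birthday probability does land at about fifty percent, and the coin's tail probability, while under 0.05, is hardly overwhelming, which is the chapter's point that 10 extra heads in 100 flips proves little on its own.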
pages: 1,065 words: 229,099 
Real World Haskell by Bryan O'Sullivan, John Goerzen, Don Stewart
bash_history, database schema, Debian, distributed revision control, domain-specific language, en.wikipedia.org, Firefox, general-purpose programming language, job automation, p-value, plutocrats, revision control, sorting algorithm, transfer pricing, type inference, web application With this p_series function, parsing an array is simple:

-- file: ch16/JSONParsec.hs
p_array :: CharParser () (JAry JValue)
p_array = JAry <$> p_series '[' p_value ']'

Dealing with a JSON object is hardly more complicated, requiring just a little additional effort to produce a name/value pair for each of the object's fields:

-- file: ch16/JSONParsec.hs
p_object :: CharParser () (JObj JValue)
p_object = JObj <$> p_series '{' p_field '}'
    where p_field = (,) <$> (p_string <* char ':' <* spaces) <*> p_value

Parsing an individual value is a matter of calling an existing parser, and then wrapping its result with the appropriate JValue constructor:

-- file: ch16/JSONParsec.hs
p_value :: CharParser () JValue
p_value = value <* spaces
  where value = JString <$> p_string
            <|> JNumber <$> p_number
            <|> JObject <$> p_object
            <|> JArray <$> p_array
            <|> JBool <$> p_bool
            <|> JNull <$ string "null"
            <?> "JSON value"
… p_bool :: CharParser () Bool
p_bool = True <$ string "true"
     <|> False <$ string "false"

The choice combinator allows us to represent this kind of ladder-of-alternatives as a list. It returns the result of the first parser to succeed:

-- file: ch16/JSONParsec.hs
p_value_choice = value <* spaces
  where value = choice [ JString <$> p_string
                       , JNumber <$> p_number
                       , JObject <$> p_object
                       , JArray <$> p_array
                       , JBool <$> p_bool
                       , JNull <$ string "null"
                       ]
                <?> "JSON value"

This leads us to the two most interesting parsers, for numbers and strings. We'll deal with numbers first, since they're simpler:

-- file: ch16/JSONParsec.hs
p_number :: CharParser () Double
p_number = do s <- getInput
              case readSigned readFloat s of
                [(n, s')] -> n <$ setInput s'
                _         -> empty

Our trick here is to take advantage of Haskell's standard number parsing library functions, which are defined in the Numeric module. 

Debtor Nation: The History of America in Red Ink (Politics and Society in Modern America) by Louis Hyman Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
asset-backed security, bank run, barriers to entry, Bretton Woods, card file, central bank independence, computer age, corporate governance, credit crunch, declining real wages, deindustrialization, diversified portfolio, financial independence, financial innovation, Gini coefficient, Home mortgage interest deduction, housing crisis, income inequality, invisible hand, late fees, London Interbank Offered Rate, market fundamentalism, means of production, mortgage debt, mortgage tax deduction, p-value, pattern recognition, profit maximization, profit motive, risk/return, Ronald Reagan, Silicon Valley, statistical model, technology bubble, the built environment, transaction costs, union organizing, white flight, women in the workforce, working poor ., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA, 6. 46. Kenneth Wells to Guy T.O. Holladay, June 24, 1953, folder “Minority Group Housing – Printed Material, Speeches, Field Letters, Etc., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA. 47. P-value = 0.0001. 48. Once again, race had a p-value of > 0.586. The racial coefficient, moreover, dropped to only a little over $500. 49. Linear regression with mortgage-having subpopulation for mortgage amount, race (P > 0.586) was not significant, and location (P > 0.006) was. NOTES TO CHAPTER 5 335 50. Pearson test for suburban dummy variable was (P > 0.42). 51. Linear regression of mortgage controlling for race (P > 0.269), location (P > 0.019), federal loan status (P > 0.003), and income (P > 0.000). … The most important statistical advances made since the late 1950s, for the purposes of this analysis, are the ability to adjust for the internal correlation of primary sampling units, logistic regression, and censored normal regressions—all of which are used in this chapter, especially the first two mentioned.
In terms of questions, this chapter pays far greater attention to the intersections of race, class, and location than the original published survey, which was mostly a collection of bar graphs and averages. For the less technically inclined reader, explanations of some of the statistical methods will be in the notes. For the more technically inclined reader, p-values of relevant tests and regressions have generally been put in the notes. 3. William H. Whyte, “Budgetism: Opiate of the Middle Class,” Fortune (May 1956), 133, 136–37. 4. John Lebor, “Requirements for Profitable Credit Selling,” Credit Management Year Book 1959–1960 (New York: National Retail Dry Goods Association, 1959), 12. 5. Malcolm McNair, “Changing Retail Scene and What Lies Ahead,” National Retail Merchants Association Convention Speech, January 8, 1962, Historical Collections, BAK, 12. 6. … See Melvin Oliver’s Black Wealth, White Wealth: A New Perspective on Racial Inequality (New York: Routledge, 1995), for more on the importance of wealth inequality compared to income inequality today. As discussed later in the chapter, at the same income levels, African Americans always borrowed more frequently than whites and had lower wealth levels. 22. This was determined by running a series of regressions on debt and liquid assets, while controlling for location, mortgage status, marital status, and income. P-values for liquid assets in all models (P > 0.00). For whites, the model had R2 = 0.12 and for blacks R2 = 0.41. 23. Odds ratio 5.42 with (P > 0.01) [1.44, 20.41]. 24. A linear regression with a suburban debtor subpopulation shows race (P > 0.248) and liquid assets (P > 0.241) to have no relationship to the amount borrowed, unlike mortgage status (P > 0.000) and income (P > 0.013). 25. Suburban dummy variable for black households with (P > 0.02).

Exploring Everyday Things with R and Ruby by Sau Sheong Chang Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Alfred Russel Wallace, bioinformatics, business process, butterfly effect, cloud computing, Craig Reynolds: boids flock, Debian, Edward Lorenz: Chaos theory, Gini coefficient, income inequality, invisible hand, p-value, price stability, Skype, statistical model, stem cell, Stephen Hawking, text mining, The Wealth of Nations by Adam Smith, We are the 99%, web application, wikimedia commons Without going in depth into the mathematics of this test (which would probably fill up a whole section, if not an entire chapter, on its own), let’s examine the initial population by assuming that the population is normally distributed and running the Shapiro-Wilk test on it: > data <- read.table("money.csv", header=F, sep=",") > row <- as.vector(as.matrix(data[1,])) > row [1] 56 79 66 74 96 54 91 59 70 95 65 82 64 80 63 68 69 69 72 89 64 53 87 49 [47] 68 66 80 89 57 73 72 82 76 58 57 78 94 73 83 52 75 71 52 57 76 59 63 ... > shapiro.test(row) Shapiro-Wilk normality test data: row W = 0.9755, p-value = 0.3806 > As you can see, the p-value is 0.3806, which (on a scale of 0.0 to 1.0) is not small, and therefore the null hypothesis is not rejected. The null hypothesis is that of no change (i.e., the assumption that the distribution is normal). Strictly speaking, this doesn’t really prove that the distribution is normal, but a visual inspection of the first histogram chart in Figure 8-3 tells us that the likelihood of a normal distribution is high.
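Shapiro-Wilk itself relies on tabulated coefficients, but the same "is this sample plausibly normal?" question can be sketched in pure Python with the simpler Jarque-Bera statistic. This is a stand-in for illustration, not the book's R code, and the data below are synthetic:

```python
import random

def jarque_bera(xs):
    """Jarque-Bera normality statistic: n/6 * (skew^2 + (kurtosis - 3)^2 / 4)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(1)
normalish = [random.gauss(70, 12) for _ in range(100)]  # shaped like the money.csv row
skewed = [random.expovariate(1.0) for _ in range(100)]  # clearly non-normal
# Under normality, JB is approximately chi-squared with 2 degrees of freedom,
# so values above about 5.99 reject normality at the 5% level.
```

The skewed sample produces a far larger statistic than the roughly normal one, which is the same qualitative verdict shapiro.test gives in the excerpt.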
pages: 416 words: 39,022 
Asset and Risk Management: Risk Oriented Finance by Louis Esch, Robert Kieffer, Thierry Lopez Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
asset allocation, Brownian motion, business continuity plan, business process, capital asset pricing model, computer age, corporate governance, discrete time, diversified portfolio, implied volatility, index fund, interest rate derivative, iterative process, P = NP, p-value, random walk, risk/return, shareholder value, statistical model, stochastic process, transaction costs, value at risk, Wiener process, yield curve, zero-coupon bond In the same way, the parameter VaR* is calculated simply, for a normal distribution: VaR*_q = −z_q · σ(p_t). The values of z_q are found in the normal distribution tables.7 A few examples of these values are given in Table 6.2. This shows that the expression [Table 6.2 Normal distribution quantiles: q = 0.500 → z_q = 0.0000; 0.600 → 0.2533; 0.700 → 0.5244; 0.800 → 0.8416; 0.850 → 1.0364; 0.900 → 1.2816; 0.950 → 1.6449; 0.960 → 1.7507; 0.970 → 1.8808; 0.975 → 1.9600; 0.980 → 2.0537; 0.985 → 2.1701; 0.990 → 2.3263; 0.995 → 2.5758. 6 Jorion P., Value at Risk, McGraw-Hill, 2001. 7 Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 118.] Example If a security gives an average profit of 100 over the reference period with a standard deviation of 80, we have E(p_t) = 100 and σ(p_t) = 80, which allows us to write: VaR_0.95 = 100 − (1.6449 × 80) = −31.6 VaR_0.975 = 100 − (1.9600 × 80) = −56.8 VaR_0.99 = 100 − (2.3263 × 80) = −86.1 The loss incurred by this security will therefore only exceed 31.6 (56.8 and 86.1 respectively) five times (2.5 times and once respectively) in 100 times.
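The quantile table and the worked example can be reproduced with Python's standard library. This is a sketch of the same parametric-normal VaR formula, not code from the book:

```python
from statistics import NormalDist

def normal_var(mean_pl, sigma_pl, q):
    """Parametric (normal) VaR as in the text: VaR_q = E(p_t) - z_q * sigma(p_t)."""
    z_q = NormalDist().inv_cdf(q)  # e.g. z_0.95 is about 1.6449
    return mean_pl - z_q * sigma_pl

# Reproduce the worked example: mean profit 100, standard deviation 80.
var95 = normal_var(100, 80, 0.95)   # about -31.6
var99 = normal_var(100, 80, 0.99)   # about -86.1
```

NormalDist().inv_cdf plays the role of the printed quantile table: inv_cdf(0.975) recovers the tabulated 1.9600.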
… [Figure 11.5 Independent allocation: the systematic risk of the portfolio shown against independent factors 1–3 and variables A–D.] [Figure 11.6 Joint allocation: APT factors 1–3, with the portfolio’s systematic risk broken down into ‘growth’, ‘value’, and a part not explained.] 11.4.2 Joint allocation: ‘value’ and ‘growth’ example As the systematic risk of the portfolio is expressed by its APT factor-sensitivity vector, it can be broken down into the explicative variables ‘growth’ and ‘value’, representing the S&P Value and the S&P Growth (Figure 11.6). One cannot, however, be content with projecting the portfolio risk vector onto each of the variables. In fact, the ‘growth’ and ‘value’ variables are not necessarily independent statistically. They cannot therefore be represented by geometrically orthogonal variables. It is in fact essential to project the portfolio risk vector perpendicularly onto the space of the vectors of the variables. … ., Mathematics of Physics and Modern Engineering, McGraw-Hill, 1966. CHAPTER 6 Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80. Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Johnson N. L. and Kotz S., Continuous Univariate Distributions, John Wiley & Sons, Inc., 1970. Jorion P., Value at Risk, McGraw-Hill, 2001. Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976. CHAPTER 7 Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972. Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1995. Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA, undated.
pages: 506 words: 152,049 
The Extended Phenotype: The Long Reach of the Gene by Richard Dawkins Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Alfred Russel Wallace, Douglas Hofstadter, Drosophila, epigenetics, Gödel, Escher, Bach, impulse control, Menlo Park, Necker cube, p-value, phenotype, quantitative trading / quantitative finance, stem cell There is a whole family of ‘mixed strategies’ of the form ‘Dig with probability p, enter with probability 1 – p’, and only one of these is the ESS. I said that the two extremes were joined by a continuum. I meant that the stable population frequency of digging, p* (70 per cent or whatever it is), could be achieved by any of a large number of combinations of pure and mixed individual strategies. There might be a wide distribution of p values in individual nervous systems in the population, including some pure diggers and pure enterers. But, provided the total frequency of digging in the population is equal to the critical value p*, it would still be true that digging and entering were equally successful, and natural selection would not act to change the relative frequency of the two subroutines in the next generation. The population would be in an evolutionarily stable state. … Classify all individuals into those that entered with a probability less than 0.1, those that entered with a probability between 0.1 and 0.2, those with a probability between 0.2 and 0.3, between 0.3 and 0.4, 0.4 and 0.5, etc. Then compare the lifetime reproductive successes of wasps in the different classes. But supposing we did this, exactly what would the ESS theory predict? A hasty first thought is that those wasps with a p value close to the equilibrium p* should enjoy a higher success score than wasps with some other value of p: the graph of success against p should peak at an ‘optimum’ at p*. But p* is not really an optimum value; it is an evolutionarily stable value. The theory expects that, when p* is achieved in the population as a whole, digging and entering should be equally successful.
At equilibrium, therefore, we expect no correlation between a wasp’s digging probability and her success. … The theory gives us no particular reason to expect that there should be any such variation. Indeed, the analogy with sex ratio theory just mentioned gives positive grounds for expecting that wasps should not vary in digging probability. In accordance with this, a statistical test on the actual data revealed no evidence of interindividual variation in digging tendency. Even if there were some individual variation, the method of comparing the success of individuals with different p values would have been a crude and insensitive one for comparing the success rates of digging and entering. This can be seen by an analogy. An agriculturalist wishes to compare the efficacy of two fertilizers, A and B. He takes ten fields and divides each of them into a large number of small plots. Each plot is treated, at random, with either A or B, and wheat is sown in all the plots of all the fields. 
pages: 447 words: 104,258 
Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues by Alain Ruttiens Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
algorithmic trading, asset allocation, asset-backed security, backtesting, banking crisis, Black Swan, Black-Scholes formula, Brownian motion, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discounted cash flows, discrete time, diversification, fixed income, implied volatility, interest rate derivative, interest rate swap, margin call, market microstructure, martingale, p-value, passive investing, quantitative trading / quantitative finance, random walk, risk/return, Sharpe ratio, short selling, statistical model, stochastic process, stochastic volatility, time value of money, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-coupon bond In particular: at A: the portfolio is 100% invested in the risk-free rate; at B: 100% investment in an efficient portfolio of stocks; between A and B: mixed portfolio, invested at x% in the risk-free rate and (1 − x)% in the efficient portfolio of stocks; beyond B: leveraged portfolio, assuming the investor has borrowed money (at the rf rate) and has then invested >100% of his available resources in an efficient portfolio. For a given investor, characterized by some utility function U, representing his well-being, assuming his wealth as a portfolio P, if the portfolio return were certain (i.e., deterministic), we would have but, more realistically (even if simplified, in the spirit of this theory), if the portfolio P value is normally distributed in returns, with some rP and σP, where f is some function, often considered as a quadratic curve.4 So that, given the property of the CML (i.e., tangent to the efficient frontier), and some U = f(P) curve, the optimal portfolio must be located at the tangent of U to CML, determining the adequate proportion between B and the risk-free instrument.
To illustrate this, let us compare the case of two investors, Investor #1, with utility function U1, being more risk averse than Investor #2, with utility function U2 (see Figure 4.10). … Practically speaking, the number of previous terms of the series (here, arbitrarily, 18 terms) would have to be optimized, and the parameter a updated for the successive forecasts. Moreover, if the data present irregularities in their succession (changes of trends, mean reversion, etc.), the AR process is unable to incorporate such phenomena and works poorly. The generalized form of the previous case, in order to forecast rt as a function of more than its previous observed value, can be represented as follows: This is called an AR(p) process, involving the previous p values of the series. There is no rule for determining p, provided it is not excessive (by application of the “parsimony principle”). The above relationship looks like a linear regression, but instead of regressing on a series of independent variables, this regression uses previous values of the dependent variable itself, hence the name “autoregression”. 9.2 THE MOVING AVERAGE (MA) PROCESS Let us consider a series of returns consisting of pure so-called “random numbers” {t}, i.i.d., generally distributed following a normal distribution.
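As a toy illustration of the autoregression just described (invented data, not the book's): fit r_t = a·r_{t-1} by least squares, then forecast one step ahead. The intercept is dropped for brevity, and the series is noiseless so the estimate is exact:

```python
# Minimal AR(1) sketch: estimate a in r_t = a * r_{t-1} + e_t, then forecast.

def fit_ar1(r):
    """Least-squares slope of r_t on r_{t-1} (no intercept, for brevity)."""
    num = sum(r[t] * r[t - 1] for t in range(1, len(r)))
    den = sum(r[t - 1] ** 2 for t in range(1, len(r)))
    return num / den

def forecast_ar1(r, a):
    """One-step-ahead forecast from the last observed value."""
    return a * r[-1]

# Synthetic series with true a = 0.5 and no noise, so the estimate is exact.
r = [1.0]
for _ in range(17):            # 18 terms, echoing the text's arbitrary window
    r.append(0.5 * r[-1])

a_hat = fit_ar1(r)             # recovers 0.5
next_r = forecast_ar1(r, a_hat)
```

An AR(p) fit would regress on the previous p values instead of just one; the mechanics are the same, with p slope coefficients instead of a single a.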

Social Capital and Civil Society by Francis Fukuyama Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Berlin Wall, blue-collar work, Fall of the Berlin Wall, feminist movement, Francis Fukuyama: the end of history, George Akerlof, German hyperinflation, Jane Jacobs, Joseph Schumpeter, Kevin Kelly, labor-force participation, low skilled workers, p-value, post-industrial economy, principal–agent problem, RAND corporation, Silicon Valley, The Death and Life of Great American Cities, transaction costs, World Values Survey However, it is possible for a group to have an rp coefficient larger than 1. To take the earlier example of the religious sect that encourages honesty and reliability, if these traits are demanded of its members not just in their dealings with other members of the sect but generally in their dealings with other people, then there will be a positive spillover effect into the larger society. Again, Weber argued in effect that sectarian Puritans had an rp value greater than 1. The final factor affecting a society’s supply of social capital concerns not the internal cohesiveness of groups, but rather the way in which they relate to outsiders. Strong moral bonds within a group in some cases may actually serve to decrease the degree to which members of that group are able to trust outsiders and work effectively with them. A highly disciplined, well-organized group sharing strong common values may be capable of highly coordinated collective action, and yet may nonetheless be a social liability.
pages: 1,038 words: 137,468 
JavaScript Cookbook by Shelley Powers Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Firefox, Google Chrome, hypertext link, p-value, semantic web, web application, WebSocket <!DOCTYPE html> <html dir="ltr" lang="en-US"> <head> <title>Comparing Cookies and sessionStorage</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" > <style> div { background-color: #ffff00; margin: 5px; width: 100px; padding: 1px; }</style> <script> window.onload=function() { document.getElementById("set").onclick=setData; document.getElementById("get").onclick=getData; document.getElementById("erase").onclick=removeData; } // set data for both session and cookie function setData() { var key = document.getElementById("key").value; var value = document.getElementById("value").value; // set sessionStorage var current = sessionStorage.getItem(key); if (current) { current+=value; } else { current=value; } sessionStorage.setItem(key,current); 470 | Chapter 20: Persistence // set cookie current = getCookie(key); if (current) { current+=value; } else { current=value; } setCookie(key,current); } function getData() { try { var key = document.getElementById("key").value; // sessionStorage var value = sessionStorage.getItem(key); if (!value) value =""; document.getElementById("sessionstr").innerHTML="<p>" + value + "</p>"; // cookie value = getCookie(key); if (!value) value=""; document.getElementById("cookiestr").innerHTML="<p>" + value + "</p>"; } catch(e) { alert(e); } } function removeData() { var key = document.getElementById("key").value; // sessionStorage sessionStorage.removeItem(key); // cookie eraseCookie(key); } // set session cookie function setCookie(cookie,value) { var tmp=cookie + "=" + encodeURI(value) + ";path=/"; document.cookie=tmp; } // each cookie separated by semicolon function getCookie(key) { var cookie = document.cookie; var first = cookie.indexOf(key+"="); // cookie exists if (first >= 0) { 20.4 Using sessionStorage for Client-Side Storage | 471 var str = cookie.substring(first,cookie.length); var last = str.indexOf(";"); // if last cookie if (last < 0) last = str.length; // get cookie value str = str.substring(0,last).split("="); return decodeURI(str[1]); } else { return null; } } // set cookie date to the past to erase function eraseCookie (key) { var cookieDate = new Date(); cookieDate.setDate(cookieDate.getDate() - 10); var tmp=key + "= ; expires="+cookieDate.toGMTString()+"; path=/"; document.cookie=tmp; } </script> </head> <body> <form> <label for="key"> Enter key:</label> <input type="text" id="key" /> <br /> <br /> <label for="value">Enter value:</label> <input type="text" id="value" /><br /><br /> </form> <button id="set">Set data</button> <button id="get">Get data</button> <button id="erase">Erase data</button> <div id="sessionstr"><p></p></div> <div id="cookiestr"><p></p></div> </body> Load the example page (it’s in the book examples) in Firefox 3.5 and up.
pages: 303 words: 67,891 
Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K Semantic similarities within and across columns of the table seem to be at the same level of strength; however, an objective measure would be necessary to quantify this impression. How can we estimate the statistical significance of co-occurrence of the same words in top portions of two lists in each row of Table 2? Here is one easy way to estimate p-values from above. Given the size of the English core, and assuming that each French-to-English translation is a “blind shot” into the English core (the null hypothesis), we can estimate the probability of finding one and the same word in the top-twelve portions of both lists: p ~ 2*12*12 / 8,236 = 0.035 (we included the factor 2 because there are two possible ways of aligning the lists with respect to each other4). Therefore, the p-value of the case of word repetition that we see in Table 2 is smaller than 0.035, at least. In conclusion, we have found significant correlations among sorted lists across languages for each of the three PCs.
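The estimate in that passage is a one-line computation; here it is as a sketch, using only the numbers given in the text:

```python
# Upper-bound p-value estimate from the text: two top-twelve lists drawn
# "blindly" from an English core of 8,236 words, with a factor of 2 for the
# two possible alignments of the lists.

def overlap_bound(top_k: int, core_size: int) -> float:
    return 2 * top_k * top_k / core_size

p_bound = overlap_bound(12, 8236)   # about 0.035, matching the text
```

The function makes the dependence explicit: halving the list length cuts the bound by a factor of four, while a larger core shrinks it proportionally.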
pages: 436 words: 123,488 
Overdosed America: The Broken Promise of American Medicine by John Abramson Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
germ theory of disease, Louis Pasteur, medical malpractice, medical residency, meta-analysis, p-value, placebo effect, profit maximization, profit motive, RAND corporation, randomized controlled trial, stem cell, Thomas Kuhn: the structure of scientific revolutions But then, rather than addressing these serious complications, the authors dismissed them with a most unusual statement: “The difference in major cardiovascular events in the VIGOR trial [of Vioxx] may reflect the play of chance” (italics mine) because “the number of cardiovascular events was small (less than 70).” The comment that a statistically significant finding “may reflect the play of chance” struck me as very odd. Surely the experts who wrote the review article knew that the whole purpose of doing statistics is to determine the degree of probability and the role of chance. Anyone who has taken Statistics 101 knows that p values of .05 or less (p < .05) are considered statistically significant. In this case it means that if the VIGOR study were repeated 100 times, more than 95 of those trials would show that the people who took Vioxx had at least twice as many heart attacks, strokes, and deaths from any cardiovascular event as the people who took naproxen. And in more than 99 out of those 100 studies, the people who took Vioxx would have at least four times as many heart attacks as the people who took naproxen. … FOOTNOTE *The standard way to determine whether a treatment has a significant effect is to calculate the probability that the observed difference in outcome (improvement or side effect) between the patients in the group that received the new treatment and the group that received the old treatment (or placebo) would have happened by chance if, in fact, the treatment really had no effect whatsoever. The conventional cutoff for determining statistical significance is a probability (p) of the observed difference between the groups occurring purely by chance less than 5 times out of 100 trials, or p < .05. This translates to: “the probability that this difference will occur at random is less than 5 chances in 100 trials.” The smaller the p value, the less likely it is that the difference between the groups happened by chance, and therefore the stronger—i.e., the more statistically significant—the finding. *The blood levels of all three kinds of cholesterol (total, LDL, and HDL) are expressed as “mg/dL,” meaning the number of milligrams of cholesterol present in one-tenth of a liter of serum (the clear liquid that remains after the cells have been removed from a blood sample).

Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernie Chan Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
algorithmic trading, asset allocation, automated trading system, backtesting, Black Swan, Brownian motion, business continuity plan, compound rate of return, Elliott wave, endowment effect, fixed income, general-purpose programming language, index fund, Long Term Capital Management, loss aversion, p-value, paper trading, price discovery process, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Ray Kurzweil, Renaissance Technologies, risk-adjusted returns, Sharpe ratio, short selling, statistical arbitrage, statistical model, systematic trading, transaction costs The following code fragment, however, tests for correlation between the two time series: % A test for correlation. dailyReturns=(adjcls-lag1(adjcls))./lag1(adjcls); [R,P]=corrcoef(dailyReturns(2:end,:)); % R = % 1.0000 0.4849 % 0.4849 1.0000 % P = % 1 0 % 0 1 % The P value of 0 indicates that the two time series % are significantly correlated. Stationarity is not limited to the spread between stocks: it can also be found in certain currency rates. For example, the Canadian dollar/Australian dollar (CAD/AUD) cross-currency rate is quite stationary, both being commodities currencies. Numerous pairs of futures as well as fixed-income instruments can be found to be cointegrating as well.
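The correlation coefficient that corrcoef reports can be sketched in pure Python. The two return series below are invented for illustration, unlike the adjusted-close series in the book:

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two equal-length series, as corrcoef computes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical daily returns of two co-moving instruments.
ret_a = [0.010, -0.020, 0.015, 0.003, -0.007, 0.012]
ret_b = [0.020, -0.035, 0.025, 0.001, -0.010, 0.020]
r = pearson_r(ret_a, ret_b)   # strongly positive for these series
```

MATLAB's corrcoef additionally returns the P matrix of significance levels; computing that requires the t-distribution, which is omitted here to keep the sketch short.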
pages: 263 words: 75,455 
Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors by Wesley R. Gray, Tobias E. Carlisle Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Albert Einstein, Andrei Shleifer, asset allocation, Atul Gawande, backtesting, Black Swan, capital asset pricing model, Checklist Manifesto, cognitive bias, compound rate of return, corporate governance, correlation coefficient, credit crunch, Daniel Kahneman / Amos Tversky, discounted cash flows, Eugene Fama: efficient market hypothesis, forensic accounting, hindsight bias, Louis Bachelier, p-value, passive investing, performance metric, quantitative hedge fund, random walk, Richard Thaler, risk-adjusted returns, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical model, systematic trading, The Myth of the Rational Market, time value of money, transaction costs We control for general market risk using the capital asset pricing model2; we adjust for market, size, and value exposures with the Fama and French three-factor model3; we account for momentum using the four-factor model4; and, finally, we account for liquidity by adding the Lubos Pastor and Robert Stambaugh market-wide liquidity factor to create the comprehensive five-factor model.5 Figures 12.10(a) and (b) confirm that the Quantitative Value strategy consistently generates alpha on rolling 5- and 10-year bases, regardless of the model we choose to inspect. On a rolling 5-year basis there are only a few short instances where the strategy's performance does not add value after controlling for risk. The 10-year rolling chart tells the story vividly: over the long term, Quantitative Value has consistently created value for investors. Table 12.5 shows the full sample coefficient estimates for the four asset-pricing models. We set out P-values below each estimate; they represent the probability of seeing the estimate given a null hypothesis of zero. MKT-RF represents the excess return on the market-weight returns of all New York Stock Exchange (NYSE)/American Stock Exchange (AMEX)/Nasdaq stocks. SMB is a long/short factor portfolio that captures exposure to small-capitalization stocks.
HML is a long/short factor portfolio that controls for exposure to high book value-to-market capitalization stocks.
pages: 242 words: 68,019 
Why Information Grows: The Evolution of Order, From Atoms to Economies by Cesar Hidalgo Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Ada Lovelace, Albert Einstein, Arthur Eddington, Claude Shannon: information theory, David Ricardo: comparative advantage, Douglas Hofstadter, frictionless, frictionless market, George Akerlof, Gödel, Escher, Bach, income inequality, income per capita, invention of the telegraph, invisible hand, Isaac Newton, James Watt: steam engine, Jane Jacobs, job satisfaction, John von Neumann, New Economic Geography, Norbert Wiener, p-value, phenotype, price mechanism, Richard Florida, Ronald Coase, Silicon Valley, Simon Kuznets, Skype, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, The Market for Lemons, The Nature of the Firm, The Wealth of Nations by Adam Smith, total factor productivity, transaction costs, working-age population Here we consider a country to be an exporter of a product if its per-capita exports of that product are at least 25 percent of the world’s average per-capita exports of that product. This allows us to control for the size of the product’s global market and the size of the country’s population. 5. In the case of Honduras and Argentina the probability of the observed overlap (what is known academically as its p-value) is 4.4 × 10^-4. The same probability is 2 × 10^-2 for the overlap observed between Honduras and the Netherlands and 4 × 10^-3 for the overlap observed between Argentina and the Netherlands. 6. César A. Hidalgo and Ricardo Hausmann, “The Building Blocks of Economic Complexity,” Proceedings of the National Academy of Sciences 106, no. 26 (2009): 10570–10575. 7. The idea of related varieties is popular in the literature of regional economic development and strategic management.
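A sketch of how such an overlap probability can be computed: if two countries export k_a and k_b of N products, the chance of seeing an overlap of at least x products under independent random draws is a hypergeometric tail. The numbers below are invented for illustration, not the ones behind the 4.4 × 10^-4 figure:

```python
from math import comb

def overlap_pvalue(N, ka, kb, x):
    """P(overlap >= x) when kb products are drawn at random from N,
    of which ka belong to the other country's export basket."""
    total = comb(N, kb)
    return sum(comb(ka, i) * comb(N - ka, kb - i)
               for i in range(x, min(ka, kb) + 1)) / total

# Hypothetical: 100 products, baskets of 20 and 15, observed overlap of 10
# (expected overlap under independence is only 20 * 15 / 100 = 3).
p = overlap_pvalue(N=100, ka=20, kb=15, x=10)
```

A tiny p, as here, says the shared basket is far larger than chance would produce, which is the kind of evidence the passage uses to argue the overlaps are meaningful.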
pages: 245 words: 12,162 
In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation by William J. Cook Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
complexity theory, computer age, four colour theorem, index card, John von Neumann, linear programming, NP-complete, p-value, RAND corporation, Richard Feynman, traveling salesman, Turing machine To compute this value, we find the minimum of the four sums trip({2, 3, 4, 5}, 2) + cost(2,6) trip({2, 3, 4, 5}, 3) + cost(3,6) trip({2, 3, 4, 5}, 4) + cost(4,6) trip({2, 3, 4, 5}, 5) + cost(5,6) corresponding to the possible choices for the next-to-last city in the subpath from 1 to 6; that is, we optimally travel to the next-to-last city and then travel over to city 6. This construction of a five-city trip value from several four-city values is the heart of the Held-Karp method. The algorithm proceeds as follows. We first compute all one-city values: these are easy; for example, trip({2}, 2) is just cost(1, 2). Next, we use the one-city values to compute all two-city values. Then we use the two-city values to compute all three-city values, and on up the line. When we finally get to the (n − 1)-city values, we can read off the cost of an optimal tour: it is the minimum of the sums trip({2,3,. . . , n}, 2) + cost(2,1) trip({2,3,. . . , n}, 3) + cost(3,1) ··· trip({2,3,. . . , n}, n) + cost(n,1) where the cost term accounts for the return trip back to city 1.
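Following the recurrence in that passage, here is a compact Held-Karp sketch in Python (the four-city cost matrix at the end is an invented example; city 1 in the text is index 0 here):

```python
from itertools import combinations

def held_karp(cost):
    """Optimal tour cost via trip(S, j) = min over i in S - {j} of
    trip(S - {j}, i) + cost(i, j), with city 0 as fixed start and end."""
    n = len(cost)
    # trip[(S, j)]: cheapest path from city 0 through set S (frozenset of
    # non-start cities), ending at j in S.
    trip = {}
    for j in range(1, n):                       # one-city values
        trip[(frozenset([j]), j)] = cost[0][j]
    for size in range(2, n):                    # two-city values, and on up
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                rest = S - {j}
                trip[(S, j)] = min(trip[(rest, i)] + cost[i][j] for i in rest)
    full = frozenset(range(1, n))               # read off the optimal tour
    return min(trip[(full, j)] + cost[j][0] for j in full)

# Hypothetical symmetric four-city instance; the best tour 0-1-3-2-0 costs 80.
cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
```

The dictionary grows to O(n * 2^n) entries, which is exactly the exponential-but-better-than-factorial trade-off the chapter describes.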
pages: 1,088 words: 228,743 
Expected Returns: An Investor's Guide to Harvesting Market Rewards by Antti Ilmanen Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Andrei Shleifer, asset allocation, asset-backed security, availability heuristic, backtesting, balance sheet recession, bank run, banking crisis, barriers to entry, Bernie Madoff, Black Swan, Bretton Woods, buy low sell high, capital asset pricing model, capital controls, Carmen Reinhart, central bank independence, collateralized debt obligation, commodity trading advisor, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, debt deflation, deglobalization, delta neutral, demand response, discounted cash flows, disintermediation, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, fiat currency, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, framing effect, frictionless, frictionless market, George Akerlof, global reserve currency, Google Earth, high net worth, hindsight bias, Hyman Minsky, implied volatility, income inequality, incomplete markets, index fund, inflation targeting, interest rate swap, invisible hand, Kenneth Rogoff, laissez-faire capitalism, law of one price, Long Term Capital Management, loss aversion, margin call, market bubble, market clearing, market friction, market fundamentalism, market microstructure, mental accounting, merger arbitrage, mittelstand, moral hazard, New Journalism, oil shock, p-value, passive investing, performance metric, Ponzi scheme, prediction markets, price anchoring, price stability, principal–agent problem, private sector deleveraging, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Robert Shiller, savings glut, Sharpe ratio, short selling, sovereign wealth fund, statistical arbitrage, statistical model, stochastic volatility, systematic trading, The Great Moderation,
The Myth of the Rational Market, too big to fail, transaction costs, tulip mania, value at risk, volatility arbitrage, volatility smile, working-age population, Y2K, yield curve, zero-coupon bond Given that long-term Treasury yields are below 4%, few observers would extrapolate the realized 4.7% average bond returns into the future. Similar considerations suggest that we might reduce the CPI and D/P components for equities. The fourth column shows that using 2.3% CPI (consensus forecast for long-term inflation) and 2.0% D/P, a forward-looking measure predicts only 5.6% nominal equity returns for the long term. Admittedly the D/P value could be raised if we use a broader carry measure including net share buybacks, so I add 0.75% to the estimate (and call it "D/P+"). Even more bullish return forecasts than 6.4% would have to rely on growth optimism (beyond the historical 1.3% rate of real earnings-per-share growth) or expected further P/E expansion in the coming decades (my analysis assumes none). More generally, these building blocks give us a useful framework for debating the key components of future equity returns. … Should long and short portfolios have simply equal nominal amounts, equal return volatilities, or equal betas? One crucial question is whether persistent industry sector biases should be allowed or whether sector neutrality should be pursued. Sector neutrality. Practitioner studies highlight the empirical benefits of sector-neutral approaches. Yet academic studies and many popular investment products (FF and LSV, MSCI Barra and S&P value/growth indices, and the RAFI fundamental index) do nothing to impose sector neutrality. Without any such adjustments, persistent industry concentrations are possible in the long–short portfolio. For example, in early 2008, the long (value) portfolio heavily overweighted finance stocks while the short (growth) portfolio overweighted energy stocks.
Such sector biases may or may not boost average returns but they pretty clearly impair value portfolio diversification and thus raise its volatility. 

Monte Carlo Simulation and Finance by Don L. McLeish Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Black-Scholes formula, Brownian motion, capital asset pricing model, compound rate of return, discrete time, distributed generation, finite state, frictionless, frictionless market, implied volatility, incomplete markets, invention of the printing press, martingale, p-value, random walk, Sharpe ratio, short selling, stochastic process, stochastic volatility, the market place, transaction costs, value at risk, Wiener process, zero-coupon bond Evaluate the chi-squared statistic χ²_obs for a test that these points are independent uniform on the cube, where we divide the cube into 8 subcubes, each having sides of length 1/2. Carry out the test by finding P[χ² > χ²_obs], where χ² is a random chi-squared variate with the appropriate number of degrees of freedom. This quantity P[χ² > χ²_obs] is usually referred to as the "significance probability" or "p-value" for the test. If we suspected too much uniformity to be consistent with the assumption of independent uniforms, we might use the other tail of the test, i.e., evaluate P[χ² < χ²_obs]. Do so and comment on your results.
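This exercise can be sketched in a few lines, assuming NumPy and SciPy are available (the subcube indexing and function name here are my own, not McLeish's):

```python
import numpy as np
from scipy.stats import chi2

def cube_chi2_test(points):
    """Chi-squared test that 3-D points are independent uniform on the
    unit cube, using the 8 subcubes with sides of length 1/2.

    Returns (chi2_obs, p_value), where p_value = P[chi2 > chi2_obs]
    with 8 - 1 = 7 degrees of freedom.
    """
    points = np.asarray(points)
    n = len(points)
    # Each coordinate contributes one bit (below/above 1/2), giving a
    # subcube index 0..7 for every point.
    bits = (points >= 0.5).astype(int)
    idx = bits[:, 0] * 4 + bits[:, 1] * 2 + bits[:, 2]
    observed = np.bincount(idx, minlength=8)
    expected = n / 8.0                     # uniform: equal counts expected
    chi2_obs = ((observed - expected) ** 2 / expected).sum()
    p_value = chi2.sf(chi2_obs, df=7)      # upper-tail significance probability
    return chi2_obs, p_value
```

For the other tail of the test (too much uniformity), one would use `chi2.cdf(chi2_obs, df=7)` instead of the survival function.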
pages: 354 words: 26,550 
High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems by Irene Aldridge Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
algorithmic trading, asset allocation, asset-backed security, automated trading system, backtesting, Black Swan, Brownian motion, business process, capital asset pricing model, centralized clearinghouse, collapse of Lehman Brothers, collateralized debt obligation, collective bargaining, diversification, equity premium, fault tolerance, financial intermediation, fixed income, high net worth, implied volatility, index arbitrage, interest rate swap, inventory management, law of one price, Long Term Capital Management, Louis Bachelier, margin call, market friction, market microstructure, martingale, New Journalism, p-value, paper trading, performance metric, profit motive, purchasing power parity, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, Small Order Execution System, statistical arbitrage, statistical model, stochastic process, stochastic volatility, systematic trading, trade route, transaction costs, value at risk, yield curve Table 4.5 reports summary statistics for EUR/USD order flows observed by Citibank and sampled at the weekly frequency between January 1993 and July 1999: A) statistics for weekly EUR/USD order flow aggregated across Citibank's corporate, trading, and investing customers; and B) order flows from end-user segments cumulated over a week. The last four columns on the right report autocorrelations ρi at lag i and p-values for the null that ρi = 0. The summary statistics on the order flow data are from Evans and Lyons (2007), who define order flow as the total value of EUR/USD purchases (in USD millions) initiated against Citibank's quotes.
TABLE 4.4 Daily Dollar Volume in Most Active Foreign Exchange Products on CME Electronic Trading (Globex) on 6/12/2009, Computed as Average Price Times Total Contract Volume Reported by CME

Currency              Futures Daily Volume     Mini-Futures Daily Volume
                      (in USD thousands)       (in USD thousands)
Australian Dollar     5,389.8                  N/A
British Pound         17,575.6                 N/A
Canadian Dollar       6,988.1                  N/A
Euro                  32,037.9                 525.3
Japanese Yen          8,371.5                  396.2
New Zealand Dollar    426.5                    N/A
Swiss Franc           4,180.6                  N/A

[The remaining figures in this extract (maximum, minimum, skewness and kurtosis with p-values in parentheses, and autocorrelations at lags 1, 2, 4, and 8) belong to a second flattened table whose column alignment cannot be recovered from the extracted text.] *Skewness of order flows measures whether the flows skew toward either the positive or the negative side of their mean, and kurtosis indicates the likelihood of extremely large or small order flows.
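As a generic illustration of the autocorrelation columns in tables like these, sample autocorrelations ρi and p-values for the null ρi = 0 can be computed with the standard large-sample approximation that, under the null, ρi is roughly normal with variance 1/n. This is a sketch of that textbook approximation, not Evans and Lyons's procedure:

```python
import math

def autocorr_pvalues(x, lags):
    """Sample autocorrelations rho_i of a series and two-sided p-values
    for the null rho_i = 0, using the large-sample approximation that
    rho_i ~ N(0, 1/n) under the null."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    var = sum(d * d for d in dev)
    out = []
    for lag in lags:
        rho = sum(dev[t] * dev[t - lag] for t in range(lag, n)) / var
        z = rho * math.sqrt(n)                 # standardized statistic
        p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
        out.append((lag, rho, p))
    return out
```

A small p-value at some lag i is evidence against ρi = 0, i.e., evidence that the series is autocorrelated at that lag.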
pages: 292 words: 85,151 
Exponential Organizations: Why New Organizations Are Ten Times Better, Faster, and Cheaper Than Yours (And What to Do About It) by Salim Ismail, Yuri van Geest Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
23andMe, 3D printing, Airbnb, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, bioinformatics, bitcoin, Black Swan, blockchain, Burning Man, business intelligence, business process, call centre, chief data officer, Clayton Christensen, clean water, cloud computing, cognitive bias, collaborative consumption, collaborative economy, corporate social responsibility, cross-subsidies, crowdsourcing, cryptocurrency, dark matter, Dean Kamen, dematerialisation, discounted cash flows, distributed ledger, Edward Snowden, Elon Musk, en.wikipedia.org, ethereum blockchain, Galaxy Zoo, game design, Google Glasses, Google Hangouts, Google X / Alphabet X, gravity well, hiring and firing, Hyperloop, industrial robot, Innovator's Dilemma, Internet of things, Iridium satellite, Isaac Newton, Jeff Bezos, Kevin Kelly, Kickstarter, knowledge worker, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, loose coupling, loss aversion, Lyft, Mark Zuckerberg, market design, means of production, minimum viable product, natural language processing, Netflix Prize, Network effects, new economy, Oculus Rift, offshore financial centre, p-value, PageRank, pattern recognition, Paul Graham, Peter H.
Diamandis: Planetary Resources, Peter Thiel, prediction markets, profit motive, publish or perish, Ray Kurzweil, recommendation engine, RFID, ride hailing / ride sharing, risk tolerance, Ronald Coase, Second Machine Age, self-driving car, sharing economy, Silicon Valley, skunkworks, Skype, smart contracts, Snapchat, social software, software is eating the world, speech recognition, stealth mode startup, Stephen Hawking, Steve Jobs, subscription business, supply-chain management, TaskRabbit, telepresence, telepresence robot, Tony Hsieh, transaction costs, Tyler Cowen: Great Stagnation, urban planning, WikiLeaks, winner-take-all economy, X Prize, Y Combinator As a result, organizations are not only much more agile, they are also better at learning and unlearning due to the diversity and volume of a flexible workforce. Ideas are also able to circulate much faster. Why Important? • Increases loyalty to ExO • Drives exponential growth • Validates new ideas, and learning • Allows agility and rapid implementation • Amplifies ideation. Dependencies or Prerequisites: • MTP • Engagement • Authentic and transparent leadership • Low threshold to participate • P2P value creation. Algorithms In 2002, Google's revenues were less than a half-billion dollars. Ten years later, its revenues had jumped 125x and the company was generating a half-billion dollars every three days. At the heart of this staggering growth was the PageRank algorithm, which ranks the popularity of web pages. (Google doesn't gauge which page is better from a human perspective; its algorithms simply respond to the pages that deliver the most clicks.)
pages: 305 words: 89,103 
Scarcity: The True Cost of Not Having Enough by Sendhil Mullainathan Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
American Society of Civil Engineers: Report Card, Andrei Shleifer, Cass Sunstein, clean water, computer vision, delayed gratification, double entry bookkeeping, Exxon Valdez, fault tolerance, happiness index / gross national happiness, impulse control, indoor plumbing, inventory management, knowledge worker, late fees, linear programming, mental accounting, microcredit, p-value, payday loans, purchasing power parity, randomized controlled trial, Report Card for America's Infrastructure, Richard Thaler, Saturday Night Live, Walter Mischel, Yogi Berra R. Flynn, "Massive IQ Gains in 14 Nations: What IQ Tests Really Measure," Psychological Bulletin 101 (1987): 171–91. A forceful case for environmental and cultural influences on IQ is Richard Nisbett's Intelligence and How to Get It: Why Schools and Cultures Count (New York: W. W. Norton, 2010). people in a New Jersey mall: These experiments are summarized along with details on sample sizes and p-values in Anandi Mani, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao, "Poverty Impedes Cognitive Function" (working paper, 2012). unable to come up with $2,000 in thirty days: A. Lusardi, D. J. Schneider, and P. Tufano, Financially Fragile Households: Evidence and Implications (National Bureau of Economic Research, Working Paper No. 17072, May 2011). the effects were equally big: For those interested in the magnitude, the effect size ranged between Cohen's d of 0.88 and 0.94.
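For readers unfamiliar with the effect size quoted in that note, Cohen's d is the difference between two group means standardized by their pooled standard deviation; a d near 0.9 is conventionally a "large" effect. A minimal sketch of the standard formula (not the authors' code):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference between two groups,
    using the pooled (n-weighted) sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```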
pages: 271 words: 83,944 
The Sellout: A Novel by Paul Beatty Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
affirmative action, cognitive dissonance, conceptual framework, desegregation, El Camino Real, haute couture, illegal immigration, Lao Tzu, late fees, p-value, publish or perish, rolodex, Ronald Reagan, Rosa Parks, telemarketer, theory of mind, War on Poverty, white flight Back then he was an assistant professor in urban studies, at UC Brentwood, living in Larchmont with the rest of the L.A. intellectual class, and hanging out in Dickens doing field research for his first book, Blacktopolis: The Intransigence of African-American Urban Poverty and Baggy Clothes. "I think an examination of the confluence of independent variables on income could result in some interesting r coefficients. Frankly, I wouldn't be surprised by p-values in the .75 range." Despite the smug attitude, Pops took a liking to Foy right away. Though Foy was born and raised in Michigan, it wasn't often Dad found somebody in Dickens who knew the difference between a t-test and an analysis of variance. After debriefing over a box of donut holes, everyone—locals and Foy included—agreed to meet on a regular basis, and the Dum Dum Donut Intellectuals were born.
pages: 320 words: 33,385 
Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process, yield curve The probability value of the t statistics is also given for convenience, and this shows that whilst the constant term is not significant, the log return on the S&P 500 is a very highly significant determinant of the Amex log returns.

Table I.4.7 Coefficient estimates for the Amex and S&P 500 model

              Coefficients   Standard error   t stat     p value
Intercept     −0.0002        0.0003           −0.6665    0.5053
S&P 500 rtn    1.2885        0.0427           30.1698    0.0000

Following the results in Table I.4.7, we may write the estimated model, with t ratios in parentheses, as Ŷ = −0.0002 + 1.2885 X, (−0.6665) (30.1698), where X and Y are the daily log returns on the S&P 500 and on Amex, respectively. The Excel output automatically tests whether the explanatory variable should be included in the model, and with a t ratio of 30.1698 this is certainly the case.
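The quantities in a table like I.4.7 (coefficients, standard errors, t ratios, and two-sided p-values) can be reproduced for any simple regression. A sketch using NumPy and SciPy, illustrative only and not Excel's routine:

```python
import numpy as np
from scipy import stats

def ols_with_tstats(x, y):
    """OLS of y on a constant and x; returns coefficient estimates,
    standard errors, t statistics, and two-sided p-values, each ordered
    (intercept, slope)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + x
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)                # residual variance estimate
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    t = beta / se
    p = 2 * stats.t.sf(np.abs(t), df=n - k)     # two-sided p-values
    return beta, se, t, p
```

Applied to the Amex/S&P 500 log returns, this would recover the table's −0.6665 and 30.1698 t ratios and their p-values.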
pages: 351 words: 123,876 
Beautiful Testing: Leading Professionals Reveal How They Improve Software (Theory in Practice) by Adam Goucher, Tim Riley Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Albert Einstein, barriers to entry, Black Swan, call centre, continuous integration, Debian, en.wikipedia.org, Firefox, Grace Hopper, index card, Isaac Newton, natural language processing, p-value, performance metric, revision control, six sigma, software as a service, software patent, the scientific method, Therac-25, Valgrind, web application However, some bugs are more subtle, and so more sophisticated tests may be necessary. The recommendation is to start with the simplest tests and work up to more advanced tests. The simplest tests, besides being easiest to implement, are also the easiest to understand. A software developer is more likely to respond well to being told, "Looks like the average of your generator is 7 when it should be 8," than to being told, "I'm getting a small p-value from my Kolmogorov-Smirnov test." Range Tests If a probability distribution has a limited range, the simplest thing to test is whether the output values fall in that range. For example, an exponential distribution produces only positive values. If your test detects a single negative value, you've found a bug. However, for other distributions, such as the normal, there are no theoretical bounds on the outputs; all output values are possible, though some values are exceptionally unlikely. There is one aspect of output ranges that cannot be tested effectively by black-box testing: boundary values.
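The range test described above is easy to sketch as a small harness (the names here are mine, not the book's); this version checks that an exponential generator never emits a negative value:

```python
import random

def range_test(sample_fn, n, lower=None, upper=None):
    """Simplest RNG test: draw n samples from sample_fn and check that
    each falls within the distribution's theoretical range.  Returns
    the out-of-range values found; an empty list means the test passed."""
    bad = []
    for _ in range(n):
        x = sample_fn()
        if (lower is not None and x < lower) or (upper is not None and x > upper):
            bad.append(x)
    return bad

# An exponential distribution produces only nonnegative values, so any
# sample below zero is a bug in the generator under test.
failures = range_test(lambda: random.expovariate(1.0), 10_000, lower=0.0)
```

For unbounded distributions such as the normal, this test proves nothing; that is where the distributional tests the chapter builds up to become necessary.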
pages: 336 words: 163,867 
How to Diagnose and Fix Everything Electronic by Michael Geier Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
p-value, popular electronics, remote working If you find no voltage at all, there could be a little subregulator on the board to power the micro, and it might be bad. If you see voltage there (typically 5 volts, but possibly less and very occasionally more) but no oscillation, the crystal may be dead. Without a clock to drive it, the micro will sit there like a rock. If you do see oscillation, check that its peak-to-peak (p-p) value is fairly close to the total power supply voltage running the micro. If it's a 5-volt micro and the oscillation is 1 volt p-p, the micro won't get clocked. If you have power and a running micro, you should see some life someplace. Lots of products include small backup batteries on their boards. See Figure 11-1. These batteries keep the clock running and preserve user preferences. Loss of battery power causes resetting of data to the default states but doesn't prevent the product from working.
pages: 968 words: 224,513 
The Art of Assembly Language by Randall Hyde Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
p-value, sorting algorithm, Von Neumann architecture, Y2K For example: static p:procedure( i:int32; c:char ) := &SomeProcedure; Note that SomeProcedure must be a procedure whose parameter list exactly matches p's parameter list (i.e., two value parameters, the first is an int32 parameter and the second is a char parameter). To indirectly call this procedure, you could use either of the following sequences: push( Value_for_i ); push( Value_for_c ); call( p ); or p( Value_for_i, Value_for_c ); The high-level language syntax has the same features and restrictions as the high-level syntax for a direct procedure call. The only difference is the actual call instruction HLA emits at the end of the calling sequence. Although all the examples in this section use static variable declarations, don't get the idea that you can declare simple procedure pointers only in the static or other variable declaration sections.
pages: 1,606 words: 168,061 
Python Cookbook by David Beazley, Brian K. Jones Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
Firefox, iterative process, p-value, web application Solution The ctypes module can be used to create Python callables that wrap around arbitrary memory addresses. The following example shows how to obtain the raw, low-level address of a C function and how to turn it back into a callable object: >>> import ctypes >>> lib = ctypes.cdll.LoadLibrary(None) >>> # Get the address of sin() from the C math library >>> addr = ctypes.cast(lib.sin, ctypes.c_void_p).value >>> addr 140735505915760 >>> # Turn the address into a callable function >>> functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double) >>> func = functype(addr) >>> func <CFunctionType object at 0x1006816d0> >>> # Call the resulting function >>> func(2) 0.9092974268256817 >>> func(0) 0.0 >>> Discussion To make a callable, you must first create a CFUNCTYPE instance. The first argument to CFUNCTYPE() is the return type.
pages: 1,042 words: 266,547 
Security Analysis by Benjamin Graham, David Dodd Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
asset-backed security, backtesting, barriers to entry, capital asset pricing model, carried interest, collateralized debt obligation, collective bargaining, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, diversified portfolio, fear of failure, financial innovation, fixed income, full employment, index fund, invisible hand, Joseph Schumpeter, locking in a profit, Long Term Capital Management, low cost carrier, moral hazard, mortgage debt, p-value, risk-adjusted returns, risk/return, secular stagnation, shareholder value, The Chicago School, the market place, the scientific method, The Wealth of Nations by Adam Smith, transaction costs, zero-coupon bond In a theoretical sense this is entirely true, but in practice it may not be true at all, because a division of capitalization into senior securities and common stock may have a real advantage over a single common-stock issue. This subject will receive extended treatment under the heading of "Capitalization Structure" in Chap. 40. The distinction between the idea just suggested and our "rule of maximum valuation" may be clarified as follows: 1. Assume Company X = Company Y. 2. Company X has preferred (P) and common (C); Company Y has common only (C'). 3. Then it would appear that Value of P + value of C = value of C', since each side of the equation represents equal things, namely the total value of each company. But this apparent relationship may not hold good in practice because the preferred-and-common capitalization method may have real advantages over a single common-stock issue. On the other hand, our "rule of maximum valuation" merely states that the value of P alone cannot exceed the value of C'.
pages: 892 words: 91,000 
Valuation: Measuring and Managing the Value of Companies by Tim Koller, McKinsey & Company Inc., Marc Goedhart, David Wessels, Barbara Schwimmer, Franziska Manoury Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr
air freight, barriers to entry, Basel III, BRICs, business climate, business process, capital asset pricing model, capital controls, cloud computing, compound rate of return, conceptual framework, corporate governance, corporate social responsibility, credit crunch, Credit Default Swap, discounted cash flows, distributed generation, diversified portfolio, energy security, equity premium, index fund, iterative process, Long Term Capital Management, market bubble, market friction, meta-analysis, new economy, p-value, performance metric, Ponzi scheme, price anchoring, purchasing power parity, quantitative easing, risk/return, Robert Shiller, shareholder value, six sigma, sovereign wealth fund, speech recognition, technology bubble, time value of money, too big to fail, transaction costs, transfer pricing, value at risk, yield curve, zero-coupon bond At the end of the research phase, there are three possible outcomes: success combined with an increase in the value of a marketable drug to $5,594 million, success combined with a decrease in the value of a marketable drug to $3,327 million, and failure leading to a drug value of $0.

27 The formula for estimating the upward probability is ((1 + k)^T − d) / (u − d) = (1.07^3 − 0.77) / (1.30 − 0.77) = 0.86, where k is the expected return on the asset.

EXHIBIT 35.18 Decision Tree: R&D Option with Technological and Commercial Risk ($ million)
[Decision-tree diagram: the research, testing, and marketing phases each pair a technological risk event (success with probability p, failure with NPV = 0 otherwise) with a commercial risk event (value up with risk-neutral probability q* = 74%, value down with probability 26%); the node NPVs printed in the extract cannot be reliably reattached to their branches.]
Note: NPV = net present value of project; q* = binomial (risk-neutral) probability of an increase in marketable drug value; p = probability of technological success
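The footnote's upward-probability formula is a one-line computation. In this sketch, k = 0.07 and T = 3 are my reading of the printed "1.073" as 1.07 cubed; they are inferred assumptions, not values stated explicitly in the excerpt:

```python
def up_probability(k, T, u, d):
    """Upward probability in a binomial tree, per the footnote's formula:
    q = ((1 + k)**T - d) / (u - d), where k is the expected return on
    the asset, u and d the up and down multipliers."""
    return ((1 + k) ** T - d) / (u - d)

# Plugging in the footnote's numbers (with k = 0.07, T = 3 assumed):
q = up_probability(k=0.07, T=3, u=1.30, d=0.77)
```

With these inputs the formula gives roughly 0.86, matching the value printed in the footnote.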