backtesting

40 results


Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernie Chan

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, automated trading system, backtesting, Black Swan, Brownian motion, business continuity plan, compound rate of return, Elliott wave, endowment effect, general-purpose programming language, index fund, Long Term Capital Management, loss aversion, p-value, paper trading, price discovery process, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Ray Kurzweil, Renaissance Technologies, risk-adjusted returns, Sharpe ratio, short selling, statistical arbitrage, statistical model, systematic trading, transaction costs

But more than just performing due diligence, doing the backtest yourself allows you to experiment with variations of the original strategy, thereby refining and improving the strategy. In this chapter, I will describe the common platforms that can be used for backtesting, various sources of historical data useful for backtesting, a minimal set of standard performance measures that a backtest should provide, common pitfalls to avoid, and simple refinements and improvements to strategies. A few fully developed backtesting examples will also be presented to illustrate the principles and techniques described. COMMON BACKTESTING PLATFORMS There are numerous commercial platforms that are designed for backtesting, some of them costing tens of thousands of dollars.

You can see where the maximum drawdown and maximum drawdown duration occurred in this plot of the cumulative returns in Figure 3.1. (FIGURE 3.1 Maximum drawdown and maximum drawdown duration for Example 3.4: cumulative returns versus days, showing a 10.53% max drawdown and a 497-day max drawdown duration.) COMMON BACKTESTING PITFALLS TO AVOID Backtesting is the process of creating the historical trades given the historical information available at that time, and then finding out what the subsequent performance of those trades is. This process seems easy given that the trades were made using a computer algorithm in our case, but there are numerous ways in which it can go wrong. Usually, an erroneous backtest would produce a historical performance that is better than what we would have obtained in actual trading. We have already seen how survivorship bias in the data used for backtesting can result in inflated performance. There are, however, other common pitfalls related to how the backtest program is written, or more fundamentally, to how you construct your trading strategy.
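The two statistics highlighted in Figure 3.1 are easy to compute from a daily cumulative-returns series. Below is a minimal sketch, not Chan's own code; the function name and the use of NumPy are illustrative assumptions, and drawdown is measured relative to the running high-water mark.

```python
import numpy as np

def drawdown_stats(cum_returns):
    """Compute maximum drawdown and maximum drawdown duration.

    cum_returns: 1-D array of cumulative returns (e.g., 0.25 means +25%),
    one value per trading day.
    """
    cum = np.asarray(cum_returns, dtype=float)
    high_watermark = np.maximum.accumulate(cum)              # running peak of cumulative return
    drawdown = (1.0 + high_watermark) / (1.0 + cum) - 1.0    # drop from the peak, in return space
    max_drawdown = drawdown.max()

    # Drawdown duration: days elapsed since the last time a new high was set.
    duration = np.zeros(len(cum), dtype=int)
    for t in range(1, len(cum)):
        duration[t] = 0 if cum[t] >= high_watermark[t - 1] else duration[t - 1] + 1
    max_duration = duration.max()

    return max_drawdown, max_duration
```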

Contents (excerpt): Does the Strategy Suffer from Data-Snooping Bias? / Does the Strategy "Fly under the Radar" of Institutional Money Managers? / Summary
CHAPTER 3 Backtesting: Common Backtesting Platforms (Excel; MATLAB; TradeStation; High-End Backtesting Platforms) / Finding and Using Historical Databases (Are the Data Split and Dividend Adjusted?; Are the Data Survivorship Bias Free?; Does Your Strategy Use High and Low Data?) / Performance Measurement / Common Backtesting Pitfalls to Avoid (Look-Ahead Bias; Data-Snooping Bias; Transaction Costs) / Strategy Refinement / Summary
CHAPTER 4 Setting Up Your Business: Business Structure: Retail or Proprietary? / Choosing a Brokerage or Proprietary Trading Firm / Physical Infrastructure / Summary
CHAPTER 5 Execution Systems: What an Automated Trading System Can Do for You / Building a Semiautomated Trading System / Building a Fully Automated Trading System / Minimizing Transaction Costs / Testing Your System by Paper Trading / Why Does Actual Performance Diverge from Expectations?

 

pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, continuous integration, corporate governance, discrete time, distributed generation, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

Published 2012 by John Wiley & Sons, Inc. 421 422 Augmented log likelihood, 172 Autocorrelation, of GARCH filtering, 202 Autocorrelation function (ACF), 177, 221 for minute data, 202–203 Automated trading platforms, 235 Automated trading systems, 63–64, 68 Autoregressive conditional duration (ACD) model, 27–28 Autoregressive conditionally heteroskedastic (ARCH) models, 272 Average daily volume (ADV), 34 classification of equity based on, 45 Average estimator, 279 BAC data series, DFA and Hurst methods applied to, 155 Backtest, evaluating results of, 192 Backtest algorithm, 189 Backtest failure ratio, 192 Backtesting, 188–203 Backtest null hypothesis, 202 Backtest results, using GARCH, 204–205 Backtest result tables, 192–195, 199–200 Backtest variant, 195–196 Balanced capital structure, 59 Balanced scorecards (BSCs), 48, 52–53, 69. See also Board balanced scorecards (BSCs); BSC entries; Enterprise BSC; Executive BSC Ball solution, 391–399 Banach spaces, 349, 350, 351, 386, 387–388, 389 Bandwidth choices, 269 Barany, Ernest, xiii, 119, 327 Bartlett-type kernels, 261, 263 Base learner, 48 Bear Stearns crash, high-frequency data corresponding to, 121, 131–132 Bear Stearns crash week, high-frequency data from, 148–160 Beccar Varela, Maria Pia, xiii, 119, 327 Bernoulli LRT, 191.

As a result, the number of tests that can be done on a fixed amount of daily data will shrink substantially when the time horizon increases. To extract more information on the violations, we can implement the backtest algorithm n times, each with a different starting point in the time index (i.e., t = C, C + 1, . . . , C + n − 1). Each of the n backtests will contain the same total number of tests Y , but a different number of violations y1 , . . . , yn . 7.5.4 n-DAY HORIZON We list the actual violation ratios and the corresponding p-values of the likelihood ratio test (LRT) in Tables 7.4–7.13. All VaR backtesting is based on S&P500 daily close prices from January 1, 1991, to December 31, 2009. A thousand samples are used to calibrate each skewed t distribution. Depending on the length of the time horizon, the total number of backtests ranges between 500 and 1900. For an n-day horizon, we have n groups of results representing different starting points in the time index.
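As a rough sketch of that procedure (illustrative only; the array names and NumPy layout are assumptions, not the chapter's code), the n offset groups can be formed and their violation counts tallied as follows:

```python
import numpy as np

def violation_counts(losses, var_forecasts, horizon_n):
    """Tally VaR violations for each of the n possible starting offsets.

    losses[t] is the realized n-day loss of the window starting at time t, and
    var_forecasts[t] is the VaR forecast for that same window; the two arrays
    are aligned in time. For each offset (t = C, C+1, ..., C+n-1) only
    non-overlapping windows are used, so each group contains roughly the same
    number of tests Y but its own number of violations y_i.
    """
    losses = np.asarray(losses, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    results = []
    for offset in range(horizon_n):
        idx = np.arange(offset, len(losses), horizon_n)   # non-overlapping windows
        tests = len(idx)
        violations = int(np.sum(losses[idx] > var_forecasts[idx]))
        results.append((tests, violations, violations / tests))
    return results
```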

Setting the confidence level of the LRT to be 95% implies that even if the model is perfect, we will still have a 5% chance of observing LRT failures (i.e., type I errors). Since the failure ratio of the backtests, 5/216 ≈ 2.3%, is much lower than 5%, we consider our model performance is satisfactory.

TABLE 7.4 Backtest Results: Two Days. Violation ratios at q = 0.05, 0.025, 0.01, 0.005: Group 1: 0.059, 0.026, 0.010, 0.005; Group 2: 0.050, 0.027, 0.010, 0.007. Corresponding p-values: Group 1: 0.076, 0.809, 0.827, 0.864; Group 2: 0.975, 0.596, 0.827, 0.277.

(Tables 7.5 through 7.13 present the analogous violation ratios and LRT p-values for the three- to fifteen-day horizons, with one group of results per starting point in the time index.)

Footnote 12: Since the sum of i.i.d. Bernoulli r.v. is a binomial r.v., another alternative is a standard two-sided binomial test, as described by Casella and Berger (2002). Footnote 13: We reject the null hypothesis when the p-value is <0.05.

Still, the high–low frequency method does have its limits.

 

pages: 504 words: 139,137

Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined by Lasse Heje Pedersen

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, Andrei Shleifer, asset allocation, backtesting, bank run, banking crisis, barriers to entry, Black-Scholes formula, Brownian motion, capital asset pricing model, commodity trading advisor, conceptual framework, corporate governance, credit crunch, Credit Default Swap, currency peg, David Ricardo: comparative advantage, discounted cash flows, diversification, diversified portfolio, Emanuel Derman, equity premium, Eugene Fama: efficient market hypothesis, Flash crash, floating exchange rates, frictionless, frictionless market, Gordon Gekko, implied volatility, index arbitrage, index fund, interest rate swap, late capitalism, law of one price, Long Term Capital Management, margin call, market clearing, market design, market friction, merger arbitrage, mortgage debt, New Journalism, paper trading, passive investing, price discovery process, price stability, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, Renaissance Technologies, Richard Thaler, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical arbitrage, statistical model, systematic trading, technology bubble, total factor productivity, transaction costs, value at risk, Vanguard fund, yield curve, zero-coupon bond

You should always keep in mind that the goal is to find a strategy that works in the future and not to have the best possible backtest. You should strive for a robust process that works even if you adjust it a little. Adjusting Backtests for Trading Costs: Transaction costs reduce the returns of a trading strategy. A backtest is therefore much more realistic if it accounts for transaction costs. To adjust a backtest, we first need to have an estimate of the expected transaction costs for all securities and trading sizes. You can often obtain such estimates from brokers, or you can estimate the expected transaction costs, as discussed in section 5.3. Given these expected transaction costs, we can adjust the backtest in the following simple way. Each time a trade takes place in our backtest, we compute the expected transaction cost and subtract this cost from the backtest returns. For instance, if we have a monthly portfolio rebalance rule, then each month of the backtest, we do the following:
• Compute the return on the portfolio,
• Compute the new security positions and the implied trades,
• Compute the expected trading costs for every security and add them up, and
• Subtract the total expected trading cost from the portfolio return.
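A minimal sketch of that monthly loop follows; the function name, the NumPy layout, and the simple weight-drift bookkeeping are illustrative assumptions rather than the book's own implementation.

```python
import numpy as np

def backtest_with_costs(returns, target_weights, cost_per_dollar):
    """Monthly-rebalanced backtest with expected transaction costs subtracted.

    returns: (T, N) array of monthly security returns.
    target_weights: (T, N) array of portfolio weights chosen each month.
    cost_per_dollar: (N,) expected cost per dollar traded in each security
                     (e.g., estimated as discussed in section 5.3).
    Returns the series of net (after-cost) portfolio returns.
    """
    T, N = returns.shape
    net_returns = np.zeros(T)
    prev_weights = np.zeros(N)
    for t in range(T):
        gross = target_weights[t] @ returns[t]             # return on the portfolio
        trades = np.abs(target_weights[t] - prev_weights)  # new positions imply these trades
        cost = trades @ cost_per_dollar                    # expected trading costs, summed
        net_returns[t] = gross - cost                      # subtract costs from the return
        # Let weights drift with returns until the next rebalance (a bookkeeping detail).
        drifted = target_weights[t] * (1.0 + returns[t])
        prev_weights = drifted / drifted.sum() if drifted.sum() != 0 else drifted
    return net_returns
```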

In summary, you could find trading strategies by getting an edge in trading and financing illiquid securities or by trading against demand pressures. 3.3. HOW TO BACKTEST A TRADING STRATEGY Once you have a trading idea, backtesting it can be a powerful tool. To backtest a trading strategy means to simulate how it would have done historically. Of course, historical performance does not necessarily predict future performance, but a backtest is very useful nevertheless. For instance, many trading ideas are simply born bad, and this can be discovered through a backtest. For instance, suppose you have a trading idea, simulate how it would have performed over the past 20 years, and find that the strategy would never have worked in the past. Would you want to know this before you start trading? Surely, yes. Knowing this, you would be unlikely to put the trade on, and not doing so could save you a lot of money. A backtest can teach you about the risk of a strategy, and it can give you ideas about how to improve it.

Furthermore, some version will have worked the best in the past, perhaps just by chance, but, if this is by chance, it probably will not work well in the future, when you are actually trading on it. Or you tried the backtest because you heard someone made money on this trade, but, in this case, the backtest is biased to look good (your friend already told you!), even if this is by pure chance. These unavoidable biases mean that we should discount backtest returns and place more weight on realized returns. Furthermore, we should discount backtests more if they have more inputs and have been tweaked or optimized more. While unavoidable biases should simply affect how we should regard backtests, there are many avoidable biases that experienced traders and researchers fight hard to eliminate. For one, it is important to have an unbiased universe of securities.

 

Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals by David Aronson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Andrew Wiles, asset allocation, availability heuristic, backtesting, Black Swan, capital asset pricing model, cognitive dissonance, compound rate of return, Daniel Kahneman / Amos Tversky, distributed generation, Elliott wave, en.wikipedia.org, feminist movement, hindsight bias, index fund, invention of the telescope, invisible hand, Long Term Capital Management, meta-analysis, p-value, pattern recognition, Ponzi scheme, price anchoring, price stability, quantitative trading / quantitative finance, random walk, retrograde motion, revision control, risk tolerance, risk-adjusted returns, riskless arbitrage, Robert Shiller, Robert Shiller, Sharpe ratio, short selling, statistical model, systematic trading, the scientific method, transfer pricing, unbiased observer, yield curve

This is so because the performance of a rule can be profoundly affected by factors that have nothing to do with its predictive power. The Conjoint Effect of Position Bias and Market Trend on Back-Test Performance In reality, a rule’s back-tested performance is comprised of two independent components. One component is attributable to the rule’s predictive power, if it has any. This is the component of interest. The second, and unwanted, component of performance is the result of two factors that have nothing to do with the rule’s predictive power: (1) the rule’s long/short position bias, and (2) the market’s net trend during the back-test period. This undesirable component of performance can dramatically influence back-test results and make rule evaluation difficult. It can cause a rule with no predictive power to generate a positive average return or it can cause a rule with genuine predictive power to produce a negative average return.

This is illustrated in Figure 6.4. (FIGURE 6.4 Expected performance for single rule back test; the chart plots cumulative gains over in-sample and future time, with observed back-test performance equal to expected performance plus or minus random variation.) In data mining, the back-test performance statistic plays a very different role than it does in single-rule back testing. In data mining, back-tested performance serves as a selection criterion. That is to say, it is used to identify the best rule. The mean returns of all back-tested rules are compared and the one with the highest return is selected. This, too, is a perfectly legitimate use of the back test (observed) performance statistic. It is legitimate in the sense that the rule with the highest back-tested mean return is in fact the rule that is most likely to perform the best in the future.

Simply by knowing their historical position bias, 90 percent long for rule 1 and 60 percent for rule 2, and knowing the market’s average daily return over the back-test period, we would be able to compute the expected returns for rules with no predictive power having these position biases using the equation for the expected return already shown. The expected returns for each rule and would then be subtracted from each rule’s observed performance. Therefore, from rule 1’s backtested return, which was 7.31 percent, we would subtract 7.31 percent, giving a result of zero. The result properly reflects rule 1’s lack of predictive power. From rule 2’s return of 1.78 percent, we would subtract a value of 1.78 percent, also giving a value of zero, also revealing its lack of predictive power. The bottom line is this: by adjusting the back-tested (observed) performance by the expected return of a rule with no predictive power having an equivalent position bias, the deceptive component of performance can be removed.
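A small illustration of the adjustment being described, with heavy caveats: the exact expected-return equation appears earlier in Aronson's text and is not reproduced in this excerpt, so the simplified form below (position bias times average market return) and all names and numbers are assumptions for illustration only.

```python
# Hypothetical sketch of the bias adjustment. A commonly used simplified form is
#   E[no-skill rule return] = (fraction of time long - fraction of time short) * average market return
# over the back-test period; Aronson's exact equation may differ.
def expected_no_skill_return(frac_long, avg_market_return):
    frac_short = 1.0 - frac_long
    return (frac_long - frac_short) * avg_market_return

def adjusted_return(observed_return, frac_long, avg_market_return):
    # Subtract the positional component; what remains is attributable to predictive power.
    return observed_return - expected_no_skill_return(frac_long, avg_market_return)

# In the book's example both rules lack predictive power, so the observed returns
# (7.31% for rule 1 at 90% long, 1.78% for rule 2 at 60% long) equal their expected
# no-skill returns and the adjusted returns come out to zero.
```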

 

pages: 354 words: 26,550

High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems by Irene Aldridge

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, automated trading system, backtesting, Black Swan, Brownian motion, business process, capital asset pricing model, centralized clearinghouse, collateralized debt obligation, collective bargaining, diversification, equity premium, financial intermediation, high net worth, implied volatility, index arbitrage, interest rate swap, inventory management, law of one price, Long Term Capital Management, Louis Bachelier, margin call, market friction, market microstructure, martingale, New Journalism, p-value, paper trading, performance metric, profit motive, purchasing power parity, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, Small Order Execution System, statistical arbitrage, statistical model, stochastic process, stochastic volatility, systematic trading, trade route, transaction costs, value at risk, yield curve

The same code should be used in both, and the back-testing engine should run on tick-by-tick data to reenact past market conditions. The main functionality code from the back-testing modules should then be reused in the live system. To ensure statistically significant inferences, the model “training” period T should be sufficiently large; according to the central limit theorem (CLT), 30 observations is the bare minimum for any statistical significance, and 200 observations is considered a reasonable number. Given strong seasonality in intra-day data (recurrent price and volatility changes at specific times throughout the day), benchmark high-frequency models are backtested on several years of tick-by-tick data. The main difference between the live trading model and the back-test model should be the origin of the quote data; the back-test system includes a historical quote-streaming module that reads historical tick data from archives and feeds it sequentially to the module that has the main functionality.

Aldridge (2009a) develops a quantitative methodology of applying hit and miss ratio analyses to enhance the accuracy of predictions of trading models. CONCLUSION Various back-test procedures illuminate different aspects of strategy performance on historical data and are performed before the trading strategy is applied to live capital. Observing parameters of strategy performance in back tests allows high-frequency managers to identify the best strategies to include in their portfolio. The same parameters allow modelers to tweak their strategies to obtain even more robust models. Care should be taken to avoid “overfitting”—using the same data sample in repeated testing of the model. CHAPTER 16 Implementing High-Frequency Trading Systems Once high-frequency trading models have been identified, the models are back-tested to ensure their viability. The back-testing software should be a “paper”-based prototype of the eventual live system.

The main difference between the live trading model and the back-test model should be the origin of the quote data; the back-test system includes a historical quote-streaming module that reads historical tick data from archives and feeds it sequentially to the module that has the main functionality. In the live trading system, a different quote module receives real-time tick data originating at the broker-dealers. Except for differences in receiving quotes, both live and back-test systems should be identical; they can be built simultaneously and, ideally, can use the same code samples for core functionality. This chapter reviews the systems implementation process under the assumption that both backtesting and live engines are built and tested in parallel. MODEL DEVELOPMENT LIFE CYCLE High-frequency trading systems, by their nature, require rapid hesitation-free decision making and execution.
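One way to realize the shared-code design described above is to hide the quote source behind a common interface, so the strategy module cannot tell a historical replay from a live feed. The sketch below is a generic Python illustration of that idea; the class names and CSV tick format are hypothetical, not Aldridge's implementation.

```python
from abc import ABC, abstractmethod
import csv

class QuoteFeed(ABC):
    """Source of tick quotes; the strategy never knows which implementation it is wired to."""
    @abstractmethod
    def stream(self):
        """Yield ticks one at a time, e.g., dicts with timestamp, bid, ask."""

class HistoricalQuoteFeed(QuoteFeed):
    """Back-test feed: reads archived tick data and replays it sequentially."""
    def __init__(self, path):
        self.path = path
    def stream(self):
        with open(self.path, newline="") as f:
            for row in csv.DictReader(f):
                yield row

class Strategy:
    """Main functionality module, reused unchanged in back-testing and live trading."""
    def on_tick(self, tick):
        # signal generation, position sizing, and order logic live here,
        # identical in the back-test and in the live system
        pass

def run(feed: QuoteFeed, strategy: Strategy):
    for tick in feed.stream():
        strategy.on_tick(tick)
```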

 

pages: 263 words: 75,455

Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors by Wesley R. Gray, Tobias E. Carlisle

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Andrei Shleifer, asset allocation, Atul Gawande, backtesting, Black Swan, capital asset pricing model, Checklist Manifesto, cognitive bias, compound rate of return, corporate governance, correlation coefficient, credit crunch, Daniel Kahneman / Amos Tversky, discounted cash flows, Eugene Fama: efficient market hypothesis, forensic accounting, hindsight bias, Louis Bachelier, p-value, passive investing, performance metric, quantitative hedge fund, random walk, Richard Thaler, risk-adjusted returns, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical model, systematic trading, transaction costs

If we test a strategy that rebalances annually on January 1, we introduce look-ahead bias if we use the preceding year's annual results because they would not have been available on January 1 of this year. Companies often restate financial statements after the fact, and this can introduce another form of look-ahead bias that can have a huge impact on back-tested results. Marcus Bogue and Morris Bailey in their white paper, “The Advantages of Using as First Reported Data with Current Compustat Data for Historical Research,”16 highlight how restated financial statements impact back-test results for a simple price-to earnings ratio strategy. If the back-test fails to account for the difference in financial statement data as the data are first reported and then as they are subsequently restated, the back-test results vary dramatically. For example, from June 1987 through June 2001, failing to account for look-ahead bias caused by restatement of financial results led to an overstatement of returns achievable with the price-to-earnings ratio strategy by an incredible 28 percent.
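A common way to avoid this form of look-ahead bias is to join fundamentals to prices on the date the figures were first publicly reported (as-first-reported, point-in-time data) rather than on the fiscal period end or a later restatement. The snippet below is a hedged sketch using pandas; the column names are assumptions for illustration.

```python
import pandas as pd

def point_in_time_merge(prices, fundamentals):
    """Attach to each rebalance date only the fundamentals already released by then.

    prices: DataFrame with columns ['date', 'ticker', ...], one row per rebalance date.
    fundamentals: DataFrame with ['ticker', 'report_date', 'eps', ...], where
    'report_date' is when the figures first became publicly available.
    """
    prices = prices.sort_values("date")
    fundamentals = fundamentals.sort_values("report_date")
    return pd.merge_asof(
        prices, fundamentals,
        left_on="date", right_on="report_date",
        by="ticker",
        direction="backward",   # use the latest report dated on or before the rebalance date
    )
```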

In this chapter, we discuss our philosophy for conducting investment simulations, and survey the potential pitfalls in interpreting back-test results. We cast a suspicious eye on back-tested, and real, historical results, closely scrutinizing the steps we can take to ensure that results are genuine, and replicable. In Chapter 11, we study the best way to combine the research we've already considered into a cohesive strategy. We examine the Magic Formula and the F_SCORE to see if we can find a better structure for our valuation model. Our process leads us to identify some potential structural issues with the Magic Formula. In Chapter 12, the final chapter, we back-test the quantitative value model we created in Chapter 11. We take a comprehensive look at its raw results and its risk- and opportunity-cost-adjusted performance.

Part Five sets out a variety of signals sent by other market participants. There we look at the impact of buybacks, insider purchases, short selling, and buying and selling from institutional investment managers like activists and other fund managers. Finally, in Part Six we build and test our quantitative value model. We study the best way to combine the research we've considered into a cohesive strategy, and then back-test the resulting quantitative value model. CHAPTER 1 The Paradox of Dumb Money “As they say in poker, ‘If you've been in the game 30 minutes and you don't know who the patsy is, you're the patsy.'” —Warren Buffett (1987) In the summer of 1968, Ed Thorp, a young math professor at the University of California, Irvine (UCI), and author of Beat the Market: A Scientific Stock Market System (1967), accepted an invitation to spend the afternoon playing bridge with Warren Buffett, the not-yet-famous “value” investor.

 

pages: 327 words: 91,351

Traders at Work: How the World's Most Successful Traders Make Their Living in the Markets by Tim Bourquin, Nicholas Mango

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, automated trading system, backtesting, commodity trading advisor, Credit Default Swap, Elliott wave, Long Term Capital Management, paper trading, pattern recognition, prediction markets, risk tolerance, Small Order Execution System, statistical arbitrage, The Wisdom of Crowds, transaction costs

With regards to breakout strategies, when a market breaks out of a range—and it could be a five-day range or a five-hundred-day range—you enter your position in the direction of that breakout, and you fine-tune your stop based on your testing. Bourquin: Are you constantly backtesting markets to find where certain trends work best? Or have you backtested trades in the past so that you now know what approach works best in a given market and can apply it going forward? German: Initially, trend following involves a lot of backtesting and thousands and thousands of tests, including millions of iterations on all kinds of different markets, with all kinds of different trend-following strategies and approaches to stops and profit targets. There is an initial period where you are backtesting for ten hours a day, but then you get into a rhythm where you determine what works, based on your backtesting. I did a great deal of backtesting over a couple of years, which solidified the markets that I wanted to trade, the program that I wanted to follow, and what does and does not work for me.

Now, every time we experience a drawdown or every time I want to question or test myself, I will do some backtesting. But at the end of the day, I always go back to my original set of tests, and that is what I have been trading off of for years. Bourquin: Have you had to change your strategy at all? It sounds like you came across trend-following strategies that work for you, and they have ­continued to do well for years. Do you think that’s the case with most ­backtested strategies? German: Whether or not a backtested strategy does well over the long term depends on the way in which that particular strategy was backtested. A backtested strategy might look great on paper but not make any money in real life. That said, I will not trade anything that hasn’t been tested. Maybe I’m just wired this way now, because I have been backtesting strategies for so long, and it’s kind of engrained in me, but when somebody says they are doing this or that in the market, I always ask them, “How do you know your strategy works?

As a trade moves higher, the stop trails behind it, and there are dozens of different trailing stops you can use. That’s what I use to get out of a profitable trade. Bourquin: Can I ask what software you use in your trend following and your backtesting? German: I use a bunch of different software. I don’t want to go through all the different software that I use, but I can say that there are several inexpensive options for doing basic backtesting. It gets tricky, however, when you start to think about the cleanliness of your data and how to fuse different contract months and rollover periods for longer-term backtesting. That said, when I started, TradeStation was the easiest to learn. That’s really all you need to get started. Bourquin: Once you’re into a green trade and the trend continues to rise, do you allow for scaling in more or building up size in an existing trade?

 

pages: 483 words: 141,836

Red-Blooded Risk: The Secret History of Wall Street by Aaron Brown, Eric Kim

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, algorithmic trading, Asian financial crisis, Atul Gawande, backtesting, Basel III, Benoit Mandelbrot, Bernie Madoff, Bernie Madoff, Black Swan, capital asset pricing model, central bank independence, Checklist Manifesto, corporate governance, credit crunch, Credit Default Swap, disintermediation, distributed generation, diversification, diversified portfolio, Emanuel Derman, Eugene Fama: efficient market hypothesis, experimental subject, financial innovation, illegal immigration, implied volatility, index fund, Long Term Capital Management, loss aversion, margin call, market clearing, market fundamentalism, market microstructure, money: store of value / unit of account / medium of exchange, moral hazard, natural language processing, open economy, pre–internet, quantitative trading / quantitative finance, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, road to serfdom, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, special drawing rights, statistical arbitrage, stochastic volatility, too big to fail, transaction costs, value at risk, yield curve

Since a constant VaR estimate gives no information at all, and your estimate is worse than that, it’s clear why historical simulation VaR is not a VaR. Nevertheless, historical simulation VaR has become the most popular VaR number for risk reporting and regulatory purposes. Why? Because it’s easy to compute and it’s objective. It never surprises you; in fact, it’s always pretty close to yesterday’s value. The fact that it can’t pass a back-test doesn’t matter to people who never look at back-tests. The fact that it is actively misleading, telling you it’s safe when it’s dangerous and telling you it’s dangerous when it’s safe, doesn’t matter to people who only report and regulate. That only matters to people who manage risk. One fix that might occur to you is to set the VaR halfway between the fifth and sixth worst losses instead of between the 10th and 11th worst.
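For concreteness, here is a minimal sketch of the historical-simulation calculation being discussed, using the midpoint-between-ordered-losses convention mentioned in the text (e.g., between the 10th and 11th worst losses for a 99% VaR on roughly 1,000 days). The function name and NumPy usage are illustrative assumptions.

```python
import numpy as np

def historical_simulation_var(pnl, confidence=0.99):
    """Historical-simulation VaR: a loss quantile read directly off past P&L.

    pnl: 1-D array of daily profit-and-loss figures (positive = gain); the sample
    should be large enough that len(pnl) * (1 - confidence) >= 1.
    Returns VaR as a positive loss threshold.
    """
    losses = np.sort(-np.asarray(pnl, dtype=float))[::-1]    # losses, worst first
    k = int(len(losses) * (1.0 - confidence))                # e.g., 10 for 1,000 days at 99%
    # Midpoint convention from the text: halfway between the k-th and (k+1)-th worst losses.
    return 0.5 * (losses[k - 1] + losses[k])
```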

There are many problems with this approach; the biggest theoretical one is that the average prediction, made assuming an average value is exactly right, is exactly wrong. The biggest practical one is that it never back-tests well. Moreover, variance-covariance VaR tells you even less about the tails than other VaRs. However, it was the form in which JPMorgan introduced VaR to the world. Some people still think of it as the definition of VaR. JPMorgan needed it to produce a report within 15 minutes of market close, using 1990-era technology and data systems. Variance-covariance was the only practical option. For Basel II, however, many flavors of VaR were easily available and we should have insisted on one that can pass a back-test. But the most momentous decision, which seemed innocuous at the time, was to promise banks capital relief for spending all the money to create Basel II systems.

The statistical properties of market price changes, even in normal times, are erratic, evolving rapidly and unpredictably. We ended up stealing methods from the people who set sports betting point spreads, and adding stuff we made up as we went along. We had to delve deeply into the back office and study methods developed by controllers and auditors. Only after years of intensive, cooperative work did we develop VaRs that could pass rigorous statistical back-tests, and on which we were willing to bet with traders. The only way you got VaR accepted on the trading floor in the early 1990s was to bet; you can imagine what traders think of a risk manager who tells them how to run their billion-dollar portfolios but won’t risk $10,000 of his own money on his analysis. One major result is we learned how little we had understood about the risk in the well-behaved center of the probability distribution on normal trading days.

 

pages: 120 words: 39,637

The Little Book That Still Beats the Market by Joel Greenblatt

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

backtesting, index fund, random walk, transaction costs

Another question often asked during the last five years is whether the magic formula would work outside the United States. After I wrote the original edition, a number of Wall Street firms did conduct some research into this question (showing that the formula worked in pretty much all foreign markets tested), but we did not conduct any of our own backtests, for two reasons. First, much of the available historical stock market data from outside the United States is seriously flawed, and backtest results would not be reliable. It is helpful to know, however, that most historical studies over the last several decades involving classic (and less problematic to test) value characteristics, such as low price to earnings, low price to book value, and low price to sales have proved equally effective in both the United States and international markets.

Despite all its flaws, the formula certainly seems to have worked well over the long-term (fortunately, I received many nice e-mails about this, too). But over the last 10 years, the results from our test of roughly the largest 1,000 companies in the United States (with market capitalizations over $1 billion) tell an interesting tale. This is one of those rare 10-year periods over which the S&P 500 index was actually down. According to our backtests, on the other hand, the formula managed to earn 255 percent during this same period (more than tripling our money!). That’s a 13.5 percent annualized return during a 10-year period when the S&P index was actually down 0.9 percent per year. TABLE A.1 Updated Magic Formula Results Through 2009 But here’s the thing. Even during this great 10-year period of outperformance by the formula, investors would still have had to suffer through plenty of poor performance.
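A quick check of the annualization arithmetic quoted above (a 255 percent total gain over ten years):

```latex
(1 + 2.55)^{1/10} - 1 \approx 0.135
```

so a 255 percent cumulative gain over ten years does correspond to roughly a 13.5 percent compound annual return.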

Long term, then, being uncooperative over the short term is likely a good characteristic. It is not easy to find an effective short-term hedging strategy for our favorite magic formula stocks. As a result, most of the benefits of the formula will continue to go to the much smaller group of investors who can maintain a true long-term horizon. One additional characteristic of the magic formula strategy is not necessarily good or bad. However, based on our updated backtests, it’s probably helpful to keep this one in mind. Over the last 22 years, when comparing the performance of the magic formula portfolios during up months for the S&P 500 and down months for the same index, it turns out that much of the outperformance of our portfolios comes during the up months. On average during this 22-year period, the magic formula portfolios “captured” 95 percent of the S&P 500’s performance during down months and 140 percent of its performance during up months.

 

pages: 447 words: 104,258

Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues by Alain Ruttiens

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, backtesting, banking crisis, Black Swan, Black-Scholes formula, Brownian motion, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discounted cash flows, discrete time, diversification, implied volatility, interest rate derivative, interest rate swap, margin call, market microstructure, martingale, p-value, passive investing, quantitative trading / quantitative finance, random walk, risk/return, Sharpe ratio, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-coupon bond

If the fund strategy is stable enough over time, the VaR calculation can be more accurate than through the traded instruments, first by avoiding the correlations problem, and second, if the composition of the fund portfolio is often modified. Backtesting of the VaR VaR methods presenting several weaknesses – starting with the adequate selection among several VaR methodologies – a VaR estimate needs to be tested a posteriori (“backtested”), to check to what extent it fits with actually observed losses larger than the VaR amount. As said by A. Brown11, “VaR is only as good as its backtest. When someone shows me a VaR number, I don't ask how it is computed. I ask to see the backtest”. The simple way to check it consists in counting the number N of times a portfolio presents losses that exceed the VaR number on a series of n successive VaR calculations. Depending on N/n, – called the “failure rate” – being higher or lower than the confidence level c associated with the VaR measures, the used VaR model is over- or underestimating the risk (the equality between N/n and c being obviously very unlikely).

The most popular backtest is Kupiec’s, also called the “POF (for Proportion of Failures) test”. In this test, the losses exceeding the VaR number are considered to be independently and identically distributed, so that N follows a binomial distribution f(N) (that a loss may exceed or not the VaR number). For a confidence level c, the corresponding frequency of losses p is, repeating Eq. 14.5, p = 1 − c (14.5), so that the failure rate N/n could be used as an unbiased measure of p, one that would converge to 1 − c as n grows. f(N) is therefore described by the binomial distribution f(N) = CnN p^N (1 − p)^(n−N), where CnN denotes the number of possible combinations of N failures and (n − N) “non-failures” on a total of n events.
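A minimal sketch of the Kupiec POF likelihood-ratio test described here, assuming the usual chi-squared(1) asymptotics; the function name and the use of SciPy are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof_test(n_failures, n_obs, confidence=0.99):
    """Kupiec proportion-of-failures (POF) test for a VaR model.

    n_failures: number N of observations on which the loss exceeded the VaR number.
    n_obs: total number n of VaR observations (assumes 0 < n_failures < n_obs).
    confidence: VaR confidence level c, so the expected failure probability is p = 1 - c.
    Returns the likelihood-ratio statistic and its p-value (chi-squared, 1 d.o.f.).
    """
    p = 1.0 - confidence
    p_hat = n_failures / n_obs
    # Log-likelihoods under H0 (failure probability = p) and under the observed rate p_hat.
    ll_null = (n_obs - n_failures) * np.log(1 - p) + n_failures * np.log(p)
    ll_alt = (n_obs - n_failures) * np.log(1 - p_hat) + n_failures * np.log(p_hat)
    lr = -2.0 * (ll_null - ll_alt)
    return lr, 1.0 - chi2.cdf(lr, df=1)
```

For example, kupiec_pof_test(12, 1000, confidence=0.99) asks whether 12 exceedances in 1,000 observations are consistent with a 99% VaR.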

., 2006, John Wiley & Sons, Ltd, Chichester.
8. To make a more precise calculation, the width of the bins should be narrower than 0.5%, as used here.
9. The 2510 returns used for the example present a kurtosis of 7.81 and a skewness of −0.10.
10. In the initial basic example, the only risk factor was the price change of the exposure in S&P 500.
11. A. Brown, Private Profits and Socialized Risk – Counterpoint: Capital Inadequacy, Global Association of Risk Professionals, June/July 2008 issue. Cited by O. Nieppola in his master’s thesis, Backtesting Value-at-Risk Models, Helsinki School of Economics, 2008.
12. See any book of statistics. This ratio is a log ratio of the likelihood that p = , divided by the likelihood that p is not = . To verify this, one must use the values of a χ2 distribution, if p is asymptotically Gaussian, which is the case of a binomial distribution.
13. See Peter F. Christoffersen, Evaluating Interval Forecasts, International Economic Review, vol. 39, no. 4, November 1998.
14. See for example, L.

 

pages: 464 words: 117,495

The New Trading for a Living: Psychology, Discipline, Trading Tools and Systems, Risk Control, Trade Management by Alexander Elder

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

additive manufacturing, Atul Gawande, backtesting, Benoit Mandelbrot, Checklist Manifesto, deliberate practice, diversification, Elliott wave, endowment effect, loss aversion, mandelbrot fractal, margin call, offshore financial centre, paper trading, Ponzi scheme, price stability, psychological pricing, quantitative easing, random walk, risk tolerance, short selling, South Sea Bubble, systematic trading, The Wisdom of Crowds, transaction costs, transfer pricing, traveling salesman, tulip mania

I wrote this book to help both types of traders. ■ 38. System Testing, Paper Trading, and the Three Key Demands for Every Trade Before trading real money with a system, you need to test it, whether you developed it yourself or bought it from a vendor. This can be done in one of two ways. One is backtesting: apply your system's rules to a stretch of historical data, usually several years' worth. The other is forward-testing: trade small positions with real money. Serious traders begin with backtesting, and if its results look good, switch to forward-testing; if that works well, they gradually increase position size. Looking at printouts of historical results is a nice start, but don't let good numbers lull you into a false sense of security. The profit-loss ratio, the longest winning and losing streaks, the maximum drawdown, and other parameters may appear objective, but past results don't guarantee the system will hold up in the real world of trading.

You grit your teeth and put on another trade. Another loss. Your drawdown is deepening, and then the system flashes a new signal. Will you put on the next trade? Suddenly, an impressive printout looks like a very thin reed on which to hang the future of your account. There is a cottage industry of programmers who back-test systems for a fee. Some traders, too suspicious to disclose their “sure-fire methods,” spend months learning to use testing software. In the end, only one kind of backtesting prepares you to trade—manual testing. It is slow, time-consuming, and cannot be automated, but it's the only method that comes close to modeling real decision making. It consists of going through historical data one day at a time, scrupulously writing down your trading signals for the day ahead, and then clicking one bar forward and recording new signals and trades for the next day.

The Brain Myth Losers who suffer from the “brain myth” will tell you, “I lost because I didn't know trading secrets.” Many have a fantasy that successful traders have some secret knowledge. That fantasy helps support a lively market in advisory services and ready-made trading systems. A demoralized trader may whip out his credit card to buy access to “trading secrets.” He may send money to a charlatan for a $3,000 “can't miss,” backtested, computerized trading system. When that system self-destructs, he'll pull out his almost-maxed-out credit card again for a “scientific manual” that explains how he can stop losing and begin winning by contemplating the moon, the stars, or even Uranus. At an investment club we used to have in New York, I often ran into a famous financial astrologer. He often asked for free admission because he couldn't afford to pay a modest fee for the meeting and a meal.

 

The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk by William J. Bernstein

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset allocation, backtesting, capital asset pricing model, computer age, correlation coefficient, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, index arbitrage, index fund, Long Term Capital Management, p-value, passive investing, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, South Sea Bubble, the scientific method, transaction costs, Vanguard fund, zero-coupon bond

For example, a “simpleton’s portfolio” consisting of one quarter each U.S. large stocks, U.S. small stocks, foreign stocks, and U.S. high-quality bonds had a higher return, with much lower risk, than large U.S. stocks alone (represented by the S&P 500 index). The S&P 500, in turn, performed better than 75% of professional money managers over the same period. I was fascinated by the T. Rowe Price data; here was a simple tool for ascertaining historical asset allocation performance—collect data on the prior performance of various asset classes, and “backtest” returns and risks. To my disappointment, I could find no readily available software which accomplished this; I would have to write my own spreadsheet files. I began to buy, beg, steal or borrow data on a wide variety of assets over several different historical epochs and build portfolio models going back as far as 1926. The calculations performed by T. Rowe Price and myself contained an important implicit assumption: that the portfolios were “rebalanced” periodically.
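A tiny sketch of the kind of backtest described here (not the author's spreadsheet, and the function name is illustrative): with per-period rebalancing back to fixed weights, the portfolio return each period is simply the weighted average of the asset-class returns.

```python
import numpy as np

def rebalanced_backtest(asset_returns, weights):
    """Back-test a fixed-weight portfolio that is rebalanced every period.

    asset_returns: (T, N) array of periodic returns for N asset classes, e.g.,
    U.S. large stocks, U.S. small stocks, foreign stocks, and high-quality bonds.
    weights: (N,) target allocation, e.g., np.array([0.25, 0.25, 0.25, 0.25]).
    Returns the mean and standard deviation of the per-period portfolio returns.
    """
    weights = np.asarray(weights, dtype=float)
    port_returns = np.asarray(asset_returns, dtype=float) @ weights
    return port_returns.mean(), port_returns.std(ddof=1)
```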

Next assume that you can tolerate only 10% SD of risk. Clearly, at this level the use of 5-year notes is superior to the other two bond choices; over most of its extent it lies above the other two curves, indicating that for each degree of risk the 5-year notes and stock mix yields more return. Only at low risk levels is the use of T-bills desirable. Portfolio simulations with other databases using both backtesting and another technique called mean-variance analysis also suggest the superiority of short-term bonds. (Figure 4-2. Stock/bond mixes, 1926–1998.) On occasion it may be advantageous to use long-term bonds or T-bills in small amounts. In general, however, you will not go far wrong by sticking to bond maturities of six months to five years for the risk-diluting portion of your portfolio.

And, the reversal in fortunes in the foreign-versus-domestic pony race of the past 20 years may turn out to be equally anomalous. Who knows whether foreign or domestic stocks will have the higher return over the next 20, 30, or even 50 years? However, it seems highly likely that a 50/50 mix will not be too far from the best foreign-versus-domestic allocation. The real purpose of portfolio backtesting, mean-variance analysis, or any other kind of portfolio analysis is not to find the “best” asset mix. Rather, it is to find a portfolio mix that will not be too far off the mark under a wide variety of circumstances. Small Stocks versus Large Stocks It’s important to realize how large and small stocks behave relative to each other. Until recently it was generally accepted that small stocks had higher returns than large stocks.

 

pages: 467 words: 154,960

Trend Following: How Great Traders Make Millions in Up or Down Markets by Michael W. Covel

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, asset allocation, Atul Gawande, backtesting, Bernie Madoff, Black Swan, capital asset pricing model, Clayton Christensen, commodity trading advisor, correlation coefficient, Daniel Kahneman / Amos Tversky, delayed gratification, deliberate practice, diversification, diversified portfolio, Elliott wave, Emanuel Derman, Eugene Fama: efficient market hypothesis, fiat currency, game design, hindsight bias, housing crisis, index fund, John Nash: game theory, linear programming, Long Term Capital Management, mandelbrot fractal, margin call, market bubble, market fundamentalism, market microstructure, Nash equilibrium, new economy, Nick Leeson, Ponzi scheme, prediction markets, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, South Sea Bubble, Stephen Hawking, systematic trading, the scientific method, Thomas L Friedman, too big to fail, transaction costs, upwardly mobile, value at risk, Vanguard fund, volatility arbitrage, William of Occam

Trading System Example from Mechanica “Part of back-testing is to determine position sizing and risk management strategies that fit within your drawdown tolerance envelope.” —Ed Seykota1 In this appendix, Bob Spear shows how a trader might construct a simple, mechanical trend following system on Trading Recipes Portfolio Engineering Software. His newest software, surpassing Trading Recipes, is called Mechanica (www.mechanicasoftware.com). For this example we start with a broad look at the system’s trading ideas, which echo many of the ideas discussed in this book. We construct a hypothetical portfolio and run a backtest up to a certain point in time. Then, we examine in detail how the software enters, sizes, and manages a trade. Afterwards, we run our backtest to the end of our data and examine the results without and with money management.
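For readers who want the flavor of such a system without Trading Recipes or Mechanica, here is a generic Python sketch of a channel-breakout rule with simple risk-based position sizing; the parameter values, function name, and long-only simplification are illustrative assumptions, not Bob Spear's system.

```python
import numpy as np

def breakout_backtest(prices, entry_lookback=50, exit_lookback=25,
                      risk_fraction=0.01, equity=1_000_000):
    """Generic long-only channel-breakout back-test (illustrative only).

    Enter long when today's price exceeds the highest price of the prior
    entry_lookback days; exit when it falls below the lowest price of the prior
    exit_lookback days. Position size risks risk_fraction of a fixed equity
    amount per trade, using the distance to the exit channel as the per-unit
    risk estimate (no compounding, costs, or short positions).
    """
    prices = np.asarray(prices, dtype=float)
    position = 0.0            # units held
    daily_pnl = []
    for t in range(entry_lookback, len(prices)):
        # P&L accrues on the position held coming into day t
        daily_pnl.append(position * (prices[t] - prices[t - 1]))
        entry_high = prices[t - entry_lookback:t].max()
        exit_low = prices[t - exit_lookback:t].min()
        if position == 0.0 and prices[t] > entry_high:
            risk_per_unit = max(prices[t] - exit_low, 1e-9)
            position = (risk_fraction * equity) / risk_per_unit   # money-management sizing
        elif position > 0.0 and prices[t] < exit_low:
            position = 0.0
    return np.array(daily_pnl)
```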

Risk management is to direct and control the possibility of loss. The activities of a risk manager are to measure risk and to increase and decrease risk by buying and selling stock. In general, good risk management combines several elements:
1. Clarifying trading and risk management systems until they can translate to computer code.
2. Inclusion of diversification and instrument selection into the back-testing process.
3. Back-testing and stress-testing to determine trading parameter sensitivity and optimal values.
4. Clear agreement of all parties on expectation of volatility and return.
5. Maintenance of supportive relationships between investors and managers.
6. Above all, stick to the system.
7. See #6, above.
As you navigate this chapter, keep in mind Seykota’s wisdom. Five Questions for a Trading System: Answer the following five questions and you have the core components of a trend following trading system and you are on your way to having your edge: 1.

He was one of the only people at the time who was doing simulation of any kind. He was generous with his ideas, making a point to share what he knew; it delighted him to get others to try systems. He inspired a great many people and spawned a whole generation of traders, providing courage and a road map. Ed Seykota97 We started our database using punch cards in 1968, and we collected commodity price data back to July 1959. We back-tested the 5 and 20 and the weekly rules for Dick. I think the weekly method was the best thing that anyone had ever done. Of all Dick’s contributions, the weekly rules helped identify the trend and helped you act on it. Dick is one of those people who today likes to beat the computer—only he did it by hand. He enjoyed the academics of the process, the excitement of exploring new ideas and running the numbers.

 

pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process, yield curve

For instance, in Section II.5.5.3 we describe a pairs trade between the volatility index futures that have recently started trading on the CBOE and Eurex. Whenever a regression model is used to develop a trading strategy it is very important to backtest the model. Backtesting – which is termed out-of-sample testing or post-sample prediction by academics – is particularly important when considerable sums of money are placed on the output of a regression model. It is absolutely imperative to put the model through the rigorous testing procedure using a long period of historical data. A simple backtest proceeds as follows:28
1. Estimate the regression model on an historical sample of data on the variables, saving subsequent historical data for testing the model.
2. Make an investment that is determined by the estimated model.
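A toy illustration of that out-of-sample procedure (an assumption-laden sketch, not the pairs trade of Section II.5.5.3): fit the regression on the training sample, then trade only on the held-back data.

```python
import numpy as np

def out_of_sample_regression_backtest(x, y, train_size):
    """Simple out-of-sample (post-sample) test of a one-factor regression model.

    x, y: aligned 1-D arrays (e.g., returns or prices of two related instruments).
    train_size: number of observations used to estimate the model; the remaining
    observations are held back for testing, as in step 1 of the procedure above.
    Returns the out-of-sample P&L of a toy strategy that trades the regression residual.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    beta, alpha = np.polyfit(x[:train_size], y[:train_size], 1)   # OLS fit on the historical sample

    pnl = []
    for t in range(train_size, len(x) - 1):
        residual = y[t] - (alpha + beta * x[t])       # model-implied mispricing today
        signal = -np.sign(residual)                   # bet on the residual reverting
        pnl.append(signal * ((y[t + 1] - y[t]) - beta * (x[t + 1] - x[t])))
    return np.array(pnl)
```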

What are the characteristics of P&L that we desire? For trading strategies we may look for strategies that produce a high Sharpe ratio, or that maximize some other risk adjusted performance measure, as outlined in Section I.6.5. But for a pure hedging strategy we should seek a hedge ratio that reduces risk alone, for instance one that minimizes the variance of the P&L. (Footnote 28: This is an example of the general backtests described in Section II.8.5.) I.4.7 SUMMARY AND CONCLUSIONS This chapter has laid the foundations of regression analysis in the simple framework of linear models. We focus on the ordinary least squares (OLS) estimation criterion since this is optimal under fairly general circumstances when we have large samples of data, as is often the case in market risk analysis.

[Back-matter excerpt: statistical tables followed by the book's index, whose entries include "Backtesting 183".]

 

How I Became a Quant: Insights From 25 of Wall Street's Elite by Richard R. Lindsey, Barry Schachter

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, algorithmic trading, Andrew Wiles, Antoine Gombaud: Chevalier de Méré, asset allocation, backtesting, bank run, banking crisis, Black-Scholes formula, Bretton Woods, Brownian motion, business process, capital asset pricing model, collateralized debt obligation, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, currency manipulation / currency intervention, discounted cash flows, disintermediation, diversification, Emanuel Derman, en.wikipedia.org, Eugene Fama: efficient market hypothesis, financial innovation, full employment, George Akerlof, Gordon Gekko, hiring and firing, implied volatility, index fund, interest rate derivative, interest rate swap, John von Neumann, linear programming, Loma Prieta earthquake, Long Term Capital Management, margin call, market friction, market microstructure, martingale, merger arbitrage, Nick Leeson, P = NP, pattern recognition, pensions crisis, performance metric, prediction markets, profit maximization, purchasing power parity, quantitative trading / quantitative finance, QWERTY keyboard, RAND corporation, random walk, Ray Kurzweil, Richard Stallman, risk-adjusted returns, risk/return, shareholder value, Sharpe ratio, short selling, Silicon Valley, six sigma, sorting algorithm, statistical arbitrage, statistical model, stem cell, Steven Levy, stochastic process, systematic trading, technology bubble, The Great Moderation, the scientific method, too big to fail, trade route, transaction costs, transfer pricing, value at risk, volatility smile, Wiener process, yield curve, young professional

In a key lucky break, Goldman Sachs decided (at our prodding) to seed with partner capital a very aggressive market-neutral hedge fund utilizing our new investment process.14 Although we had very strong results in general across many products (both long only and absolute return), over the next few years our results for this hedge fund were off the charts. These results were not just great, but much better than our own backtests, a key sign you’re getting at least somewhat lucky, since an iron-clad rule is to expect results worse than your backtest. Don’t get me wrong, I think we created some great models, but getting a lucky draw on top of a great model is a pretty wonderful thing to happen early in your career. (As they say in the novel Dune, “Beginnings are delicate times.”)15 A few years down the road, we were managing $7 billion, about $6 billion in long-only assets, and close to a billion in hedge fund assets, all with strong-to-stellar results.

My work during that next year was incredibly rewarding. The focus of the fund was to create automated trading strategies and apply them to global futures markets, including commodities, equities, and fixed income. As long as it was a valid futures market, we traded it, regardless of whether the prices represented Eurodollar contracts or Red Azuki Beans. I spent a lot of time writing very complex code to create and backtest different types of trading strategies using daily futures data back to the 1940s. Oodles of data, challenging analyses, and lots of programming— this is exactly what I had been doing in physics for a dozen years, and I was groovin’. But alas, I quickly came to realize that finance is not rocket science. After all, I was a rocket scientist and I knew the difference. This is because in physics statistical distributions arise from fundamental physical processes that can usually be modeled, and therefore future distributions can be predicted with amazing accuracy.

As a two-person operation, my first task was simple—recreate, from scratch, everything that the previous 30-person fund had done, but in a way that could be wholly automated and required no additional staff. Over that next year I coded day and night, and even purchased a $20,000 Sun SparcStation laptop (that’s right, a laptop) so I could code during my two-hour-per-day train commute. I was in heaven. I created my own futures backtesting language, a byte-code compiler, and an automated web-based trading system. With these tools I developed many new styles of trend following that had never been done before at the previous fund. Each night the system would upload the latest closing prices for each futures market, rerun my simulation routines, generate signals, and auto-fax trades to our London brokers for execution the next morning.

 

pages: 502 words: 107,657

Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, algorithmic trading, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, backtesting, Black Swan, book scanning, bounce rate, business intelligence, business process, call centre, computer age, conceptual framework, correlation does not imply causation, crowdsourcing, dark matter, data is the new oil, en.wikipedia.org, Erik Brynjolfsson, experimental subject, Google Glasses, happiness index / gross national happiness, job satisfaction, Johann Wolfgang von Goethe, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, Moneyball by Michael Lewis explains big data, Nate Silver, natural language processing, Netflix Prize, Network effects, placebo effect, prediction markets, Ray Kurzweil, recommendation engine, risk-adjusted returns, Search for Extraterrestrial Intelligence, self-driving car, sentiment analysis, software as a service, speech recognition, statistical model, Steven Levy, text mining, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, Turing test, Watson beat the top human players on Jeopardy!, X Prize

But he was not granted access to the predictive model. With secrecy reigning supreme, the protocol for this type of audit dictated that John receive only the numerical results, along with a few adjectives that described its design: new, unique, powerful! With meager evidence, John sought to prove a crime he couldn’t even be sure had been committed. Before each launch, organizations establish confidence in PA by “predicting the past” (aka backtesting). The predictive model must prove itself on historical data before its deployment. Conducting a kind of simulated prediction, the model is evaluated across data from last week, last month, or last year. Feeding on input that could only have been known at a given time, the model spits out its prediction, which is then matched against what we now already know took place thereafter. Would the S&P 500 go down or up on March 21, 1991?
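A minimal sketch of this "predict the past" check, assuming a fitted model object with a scikit-learn-style predict method and a pandas DataFrame of historical rows (the column and variable names are hypothetical):

    import pandas as pd

    def backtest_hit_rate(model, history: pd.DataFrame, cutoff,
                          feature_cols, outcome_col):
        """Score the model only on rows dated after `cutoff`, feeding it
        features that were knowable at each row's date, then compare its
        predictions with the outcomes we now know occurred."""
        test = history[history['date'] > cutoff]
        predictions = model.predict(test[feature_cols])
        hits = predictions == test[outcome_col].to_numpy()
        return hits.mean()      # fraction of past outcomes correctly "predicted"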

On a hunch, he hand-crafted a method with the same type of bug, and showed that its predictions closely matched those of the trading system. A predictive model will sink faster than the Titanic if you don’t seal all its “time leaks” before launch. But this kind of “leak from the future” is common, if mundane. Although core to the very integrity of prediction, it’s an easy mistake to make, given that each model is backtested over historical data for which prediction is not, strictly speaking, possible. The relative future is always readily available in the testing data, easy to inadvertently incorporate into the very model trying to predict it. Such temporal leaks achieve status as a commonly known gotcha among PA practitioners. If this were an episode of Star Trek, our beloved, hypomanic engineer Scotty would be screaming, “Captain, we’re losing our temporal integrity!”
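One common way the leak creeps in, sketched below with a hypothetical feature: statistics computed over the full history quietly include days that lie in the future relative to most rows, whereas the point-in-time version sees only trailing data.

    import pandas as pd

    def zscore_leaky(prices: pd.Series) -> pd.Series:
        # LEAK: the full-sample mean and std are computed over the entire
        # series, so every row "knows" about prices that come after it.
        return (prices - prices.mean()) / prices.std()

    def zscore_point_in_time(prices: pd.Series, window: int = 60) -> pd.Series:
        # Leak-free: each row uses only the trailing window up to its own date.
        rolling = prices.rolling(window)
        return (prices - rolling.mean()) / rolling.std()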

He had also taken on predicting the species of a bat from its echolocation signals (the chirps bats make for their radar). And in the commercial world, John’s pregrad positions had dropped him right into the thick of machine learning systems that steer for aerospace flight and that detect cooling pipe cracks in nuclear reactors, not to mention projects for Delta Financial looking over the shoulders of other black box quants. And now John’s latest creation absolutely itched to be deployed. Backtesting against historical data, all indications whispered confident promises for what this thing could do once set in motion. As John puts it, “A slight pattern emerged from the overwhelming noise; we had stumbled across a persistent pricing inefficiency in a corner of the market, a small edge over the average investor, which appeared repeatable.” Inefficiencies are what traders live for. A perfectly efficient market can’t be played, but if you can identify the right imperfection, it’s payday.

 

pages: 1,088 words: 228,743

Expected Returns: An Investor's Guide to Harvesting Market Rewards by Antti Ilmanen

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Andrei Shleifer, asset allocation, availability heuristic, backtesting, balance sheet recession, bank run, banking crisis, barriers to entry, Bernie Madoff, Bernie Madoff, Black Swan, Bretton Woods, capital asset pricing model, capital controls, Carmen Reinhart, central bank independence, collateralized debt obligation, commodity trading advisor, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, debt deflation, deglobalization, delta neutral, demand response, discounted cash flows, disintermediation, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, fiat currency, financial deregulation, financial innovation, financial intermediation, Flash crash, framing effect, frictionless, frictionless market, George Akerlof, global reserve currency, Google Earth, high net worth, hindsight bias, Hyman Minsky, implied volatility, income inequality, incomplete markets, index fund, inflation targeting, interest rate swap, invisible hand, Kenneth Rogoff, laissez-faire capitalism, law of one price, Long Term Capital Management, loss aversion, margin call, market bubble, market clearing, market friction, market fundamentalism, market microstructure, merger arbitrage, mittelstand, moral hazard, New Journalism, oil shock, p-value, passive investing, performance metric, Ponzi scheme, prediction markets, price anchoring, price stability, principal–agent problem, private sector deleveraging, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Robert Shiller, Robert Shiller, savings glut, Sharpe ratio, short selling, statistical arbitrage, statistical model, stochastic volatility, systematic trading, The Great Moderation, too big to fail, transaction costs, tulip mania, value at risk, volatility arbitrage, volatility smile, working-age population, Y2K, yield curve, zero-coupon bond

January excess return has been negative in only four years since 1976: in 1992, 1995, 1998, and 2008—each of which saw much worse drawdowns from the carry strategy later in the year. This outcome may be a coincidence but it echoes the finding that January equity market performance has some predictive ability for rest-of-the-year returns (further details in Chapter 25). Incorporating these two seasonal biases would easily improve backtested FX carry strategy performance—for example, doubling position sizes for January and halving sizes for the rest of the year if the January return had been negative would have boosted the Sharpe ratio since 1983 from 0.6 to 0.8. Any such backtest improvements are subject to data-mining bias, so some skepticism is warranted. Conditioners (regime indicators) As we have seen, ex ante opportunities and seasonal effects have some ability to predict carry returns. However, the jackpot question of carry “timing” relates to carry crashes.
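A minimal sketch of that conditional sizing rule (assuming monthly carry returns in a pandas Series with a month-end DatetimeIndex; scaling realized returns stands in for scaling positions, which ignores compounding):

    import pandas as pd

    def seasonal_carry_overlay(monthly_rets: pd.Series) -> pd.Series:
        """Double the January position; if that January was negative,
        halve the position for the remaining months of the year."""
        scaled = monthly_rets.copy()
        for year, year_rets in monthly_rets.groupby(monthly_rets.index.year):
            jan = year_rets[year_rets.index.month == 1]
            scaled.loc[jan.index] = 2.0 * jan                    # double January
            if not jan.empty and jan.iloc[0] < 0:
                rest = year_rets.index[year_rets.index.month > 1]
                scaled.loc[rest] = 0.5 * year_rets.loc[rest]     # halve Feb-Dec
        return scaled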

Periods of high realized returns and rising asset valuations—think stock markets in the 1990s—are often associated with falling forward-looking returns. • For specific funds and strategies, the historical performance data that investors get to see are often upward biased. This bias is due to the voluntary nature of performance reporting and survivorship bias (so that poor performers are left out of databases or are not marketed by the fund manager). A similar caveat applies to simulated “paper” portfolios because backtests may be overfitted and trading costs ignored or understated. These concerns notwithstanding, this book presents extensive evidence of long-run realized returns, when possible covering 50-to-100-year histories. Several main findings are familiar to most readers:• Stock markets have outperformed fixed income markets during the past century in all countries studied. The compound average real return for global equities between 1900 and 2009 is 5.4%, which is 3.7% (4.4%) higher than that of long-term government bonds (short-dated Treasury bills).

Pension funds match their liabilities best by buying long-dated real or nominal bonds. 4.6 BIASED RETURNS For many asset classes, returns may be positively or negatively biased over a given historical sample. For active asset managers with voluntary reporting, published returns are almost certainly upward biased. Section 11.4 reviews a host of selection biases such as survivorship bias and backfill bias in the context of hedge fund return databases, but similar caveats apply to the reported performance of other managers. Backtested results of active strategies also suffer from overfitting and data-mining biases, which also overstate published returns. Whenever we observe exceptionally attractive historical returns, it is healthy to adopt a skeptical approach. The financial industry has limited incentives to emphasize this needed skepticism beyond printing required disclaimers, while our innate tendencies for extrapolation and optimism make most of us too easy prey for the upbeat marketing of past performance. 4.7 NOTES [1] The distinction between realized (ex post) and expected (ex ante) returns should be crystal clear.

 

pages: 512 words: 162,977

New Market Wizards: Conversations With America's Top Traders by Jack D. Schwager

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

backtesting, Benoit Mandelbrot, Berlin Wall, Black-Scholes formula, butterfly effect, commodity trading advisor, Elliott wave, full employment, implied volatility, interest rate swap, Louis Bachelier, margin call, market clearing, market fundamentalism, paper trading, pattern recognition, placebo effect, prediction markets, random walk, risk tolerance, risk/return, Saturday Night Live, Sharpe ratio, transaction costs, War on Poverty

If the person is really hypnotized, you won’t be able to push the arm down, regardless of the force applied—even if the subject is a physically weak person. Was there anything memorable about your first trading client? I would like to say that the procedure was immensely successful, but the truth is that the person didn’t experience any overnight transformation. It took many years before I realized why hypnosis was very effective with some traders but not others. What is the reason? Some traders have a valid methodology that they have adequately backtested and that their conscious mind is happy with. These are the traders who can usually be helped through hypnosis. The only thing hypnosis can do is to inform the subconscious mind that the person now has a valid methodology that the conscious mind has already accepted. But you must first be at that point. Absolutely. For a novice trader to try to become an expert trader through hypnosis is like a novice chess player seeking to become a master through hypnosis.

No. Some people lose because they feel they don’t deserve to win, but more people lose because they never perform the basic tasks necessary to become a winning trader. What are those tasks?

1. Develop a competent analytical methodology.
2. Extract a reasonable trading plan from this methodology.
3. Formulate rules for this plan that incorporate money management techniques.
4. Back-test the plan over a sufficiently long period.
5. Exercise self-management so that you adhere to the plan.

The best plan in the world cannot work if you don’t act on it. Typically, how do you work with someone who comes to you for help in improving his or her trading? The first thing I do is go through a series of about thirty questions that have only one purpose: finding out if the person has a methodology.

However, the losses from your previous Methodology A are so ingrained in your subconscious that whenever you contemplate making a trade, the adrenaline starts to flow, and the fear of executing a trade arises. Some traders are literally immobilized by this fear at the moment when they need to act. This is the “freeze” that I encountered when I returned to trading years after my first painful experience. If you have truly back-tested a methodology and are employing an effective trading plan, your conscious mind is already aware of its validity. It’s your subconscious mind that prevents you from taking correct action in the market. The problem will persist until you convince the subconscious in a very direct manner that the new methodology is valid and that it has to forget about the old methodology. How is this transformation achieved?

 

pages: 272 words: 19,172

Hedge Fund Market Wizards by Jack D. Schwager

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

backtesting, banking crisis, barriers to entry, Bernie Madoff, Bernie Madoff, Black-Scholes formula, British Empire, Claude Shannon: information theory, cloud computing, collateralized debt obligation, commodity trading advisor, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, diversification, diversified portfolio, financial independence, Flash crash, hindsight bias, implied volatility, index fund, James Dyson, Long Term Capital Management, margin call, market bubble, market fundamentalism, merger arbitrage, oil shock, pattern recognition, pets.com, Ponzi scheme, private sector deleveraging, quantitative easing, quantitative trading / quantitative finance, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Sharpe ratio, short selling, statistical arbitrage, Steve Jobs, systematic trading, technology bubble, transaction costs, value at risk, yield curve

Beginning around 1980, I developed a discipline that whenever I put on a trade, I would write down the reasons on a pad. When I liquidated the trade, I would look at what actually happened and compare it with my reasoning and expectations when I put on the trade. Learning solely from actual experience, however, is inadequate because it takes too much time to get a representative sample to determine whether a decision rule works. I discovered that I could backtest the criteria that I wrote down to get a good perspective of how they would have performed and to refine them. The next step was to define decision rules based on the criteria. I required the decision rules to be logically based and was careful to avoid data mining. That’s how the Bridgewater system began and developed in the early years. That same process continued and was improved with the help of many others over the years.

But then the lab got crowded, and I had to give up one. I started thinking, This place is going to empty out sometime tonight. I decided to get all my data prepared so that I could simultaneously use many of the lab’s computers that night. I was very excited about the idea. People started to leave, and then I had two computers, then four, and eventually I was jumping between 20 computers running my backtests. Were you testing your system on one market on each computer? That is exactly what I was doing. I was so excited about the results I was getting that I worked all night and continued through the next day. It was going so well that I pulled a second all-nighter. I worked for nearly 40 hours straight, keeping myself awake with the caffeine from drinking a Pepsi every hour. I was still living on the farm at the time.

But as assets under management increased, and I realized it was best to use the same models across all markets, I added substantially more markets to the portfolio. The transition to greater diversification also helped improve performance. By 1994, I was trading about 20 markets, and I was no longer using market-specific models. Those changes made a big difference. When you were only trading two or three markets, how did you decide which markets to trade? That was part of the problem. I was cherry-picking the markets that looked best in backtesting. It sounds like you were still making some rookie curve-fitting mistakes at that time. Absolutely. I was still making some very bad data mining errors in those initial years. Was the system you were using at Blue Ridge after you switched to using the same models on all markets an early version of what you ended up doing at QIM? It was similar, but much less sophisticated—fewer models generated with far less computing power.

 

pages: 312 words: 91,538

The Fear Index by Robert Harris

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

algorithmic trading, backtesting, banking crisis, dark matter, Fellow of the Royal Society, Flash crash, high net worth, implied volatility, mutually assured destruction, Renaissance Technologies, speech recognition

There is also what we call a “clinging” effect, when a stock is held in defiance of reason, and an “adrenalin” effect, when a stock rises strongly in value. We’re still researching all these various categories to determine market impact and refine our model.’ Easterbrook raised his hand. ‘Yes, Bill?’ ‘Is this algorithm already operational?’ ‘Why don’t I let Hugo answer that, as it’s practical rather than theoretical?’ Quarry said, ‘Incubation started back-testing VIXAL-1 almost two years ago, although naturally that was just a simulation, without any actual exposure to the market. We went live with VIXAL-2 in May 2009, with play money of one hundred million dollars. When we overcame the early teething problems we moved on to VIXAL-3 in November and gave it access to one billion. That was so successful we decided to allow VIXAL-4 to take control of the entire fund one week ago.’

Then Quarry had hit the road of investors’ conferences, moving from city to city in the US and across Europe, pulling his wheeled suitcase through fifty different airports. He had loved this part – loved being a salesman, he who travels alone, walking in cold to an air-conditioned conference room in a strange hotel overlooking some sweltering freeway and charming a sceptical audience. His method was to show them the independently back-tested results of Hoffmann’s algorithm and the mouth-watering projections of future returns, then break it to them that the fund was already closed: he had only fulfilled his engagement to speak in order to be polite but they didn’t need any more money, sorry. Afterwards the investors would come looking for him in the hotel bar; it worked nearly every time. Quarry had hired a guy from BNP Paribas to oversee the back office, a receptionist, a secretary, and a French fixed-income trader from AmCor who had run into some regulatory issues and needed to get out of London fast.

 

pages: 752 words: 131,533

Python for Data Analysis by Wes McKinney

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

backtesting, cognitive dissonance, crowdsourcing, Debian, Firefox, Google Chrome, index card, random walk, recommendation engine, revision control, sentiment analysis, Sharpe ratio, side project, sorting algorithm, statistical model, type inference

First, I’ll load historical prices for a portfolio of financial and technology stocks:

    names = ['AAPL', 'GOOG', 'MSFT', 'DELL', 'GS', 'MS', 'BAC', 'C']

    def get_px(stock, start, end):
        return web.get_data_yahoo(stock, start, end)['Adj Close']

    px = DataFrame({n: get_px(n, '1/1/2009', '6/1/2012') for n in names})

We can easily plot the cumulative returns of each stock (see Figure 11-2):

    In [117]: px = px.asfreq('B').fillna(method='pad')
    In [118]: rets = px.pct_change()
    In [119]: ((1 + rets).cumprod() - 1).plot()

For the portfolio construction, we’ll compute momentum over a certain lookback, then rank in descending order and standardize:

    def calc_mom(price, lookback, lag):
        mom_ret = price.shift(lag).pct_change(lookback)
        ranks = mom_ret.rank(axis=1, ascending=False)
        demeaned = ranks - ranks.mean(axis=1)
        return demeaned / demeaned.std(axis=1)

With this transform function in hand, we can set up a strategy backtesting function that computes a portfolio for a particular lookback and holding period (days between trading), returning the overall Sharpe ratio:

    compound = lambda x: (1 + x).prod() - 1
    daily_sr = lambda x: x.mean() / x.std()

    def strat_sr(prices, lb, hold):
        # Compute portfolio weights
        freq = '%dB' % hold
        port = calc_mom(prices, lb, lag=1)

        daily_rets = prices.pct_change()

        # Compute portfolio returns
        port = port.shift(1).resample(freq, how='first')
        returns = daily_rets.resample(freq, how=compound)
        port_rets = (port * returns).sum(axis=1)

        return daily_sr(port_rets) * np.sqrt(252 / hold)
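A note in passing: these calls reflect the pandas API of the book's era. In current pandas the same steps would look roughly like the sketch below, and web.get_data_yahoo has since moved to the separate pandas-datareader package (whose Yahoo endpoint may require an alternative data source):

    # Rough modern-pandas equivalents of the older calls used above
    px = px.asfreq('B').ffill()                           # was fillna(method='pad')
    port = port.shift(1).resample(freq).first()           # was resample(freq, how='first')
    returns = daily_rets.resample(freq).apply(compound)   # was resample(freq, how=compound)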

Figure 11-2. Cumulative returns for each of the stocks

When called with the prices and a parameter combination, this function returns a scalar value:

    In [122]: strat_sr(px, 70, 30)
    Out[122]: 0.27421582756800583

From there, you can evaluate the strat_sr function over a grid of parameters, storing them as you go in a defaultdict and finally putting the results in a DataFrame:

    from collections import defaultdict

    lookbacks = range(20, 90, 5)
    holdings = range(20, 90, 5)
    dd = defaultdict(dict)
    for lb in lookbacks:
        for hold in holdings:
            dd[lb][hold] = strat_sr(px, lb, hold)

    ddf = DataFrame(dd)
    ddf.index.name = 'Holding Period'
    ddf.columns.name = 'Lookback Period'

To visualize the results and get an idea of what’s going on, here is a function that uses matplotlib to produce a heatmap with some adornments:

    import matplotlib.pyplot as plt

    def heatmap(df, cmap=plt.cm.gray_r):
        fig = plt.figure()
        ax = fig.add_subplot(111)
        axim = ax.imshow(df.values, cmap=cmap, interpolation='nearest')
        ax.set_xlabel(df.columns.name)
        ax.set_xticks(np.arange(len(df.columns)))
        ax.set_xticklabels(list(df.columns))
        ax.set_ylabel(df.index.name)
        ax.set_yticks(np.arange(len(df.index)))
        ax.set_yticklabels(list(df.index))
        plt.colorbar(axim)

Calling this function on the backtest results, we get Figure 11-3:

    In [125]: heatmap(ddf)

Figure 11-3. Heatmap of momentum strategy Sharpe ratio (higher is better) over various lookbacks and holding periods

Future Contract Rolling

A future is a ubiquitous form of derivative contract; it is an agreement to take delivery of a certain asset (such as oil, gold, or shares of the FTSE 100 index) on a particular date. In practice, modeling and trading futures contracts on equities, currencies, commodities, bonds, and other asset classes is complicated by the time-limited nature of each contract.

 

Stocks for the Long Run, 4th Edition: The Definitive Guide to Financial Market Returns & Long Term Investment Strategies by Jeremy J. Siegel

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset allocation, backtesting, Black-Scholes formula, Bretton Woods, California gold rush, capital asset pricing model, cognitive dissonance, compound rate of return, correlation coefficient, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, German hyperinflation, implied volatility, index arbitrage, index fund, joint-stock company, Long Term Capital Management, loss aversion, market bubble, new economy, oil shock, passive investing, prediction markets, price anchoring, price stability, purchasing power parity, random walk, Richard Thaler, risk tolerance, risk/return, Robert Shiller, Robert Shiller, Ronald Reagan, shareholder value, short selling, South Sea Bubble, technology bubble, The Great Moderation, The Wisdom of Crowds, transaction costs, tulip mania, Vanguard fund

Index Options 264; Buying Index Options 266; Selling Index Options 267; The Importance of Indexed Products 267

Chapter 16. Market Volatility 269: The Stock Market Crash of October 1987 271; The Causes of the October 1987 Crash 273; Exchange-Rate Policies 274; The Futures Market 275; Circuit Breakers 276; The Nature of Market Volatility 277; Historical Trends of Stock Volatility 278; The Volatility Index (VIX) 281; Recent Low Volatility 283; The Distribution of Large Daily Changes 283; The Economics of Market Volatility 285; The Significance of Market Volatility 286

Chapter 17. Technical Analysis and Investing with the Trend 289: The Nature of Technical Analysis 289; Charles Dow, Technical Analyst 290; The Randomness of Stock Prices 291; Simulations of Random Stock Prices 292; Trending Markets and Price Reversals 294; Moving Averages 295; Testing the Dow Jones Moving-Average Strategy 296; Back-Testing the 200-Day Moving Average 297; The Nasdaq Moving-Average Strategy 300; Distribution of Gains and Losses 301; Momentum Investing 302; Conclusion 303

Chapter 18. Calendar Anomalies 305: Seasonal Anomalies 306; The January Effect 306; Causes of the January Effect 309; The January Effect Weakened in Recent Years 310; Large Monthly Returns 311; The September Effect 311; Other Seasonal Returns 315; Day-of-the-Week Effects 316; What’s an Investor to Do?

11. Historically, the daily high and low levels of stock averages were calculated on the basis of the highest or lowest price of each stock reached at any time during the day. This is called the theoretical high or low. The actual high is the highest level reached at any given time by the stocks in the average.

Back-Testing the 200-Day Moving Average: In Figure 17-2 are the daily and 200-day moving averages of the Dow Jones Industrial Average during two select periods: from 1924 to 1936 and from 1999 to 2006. The time periods when investors are out of the stock market are shaded; otherwise, investors are fully invested in stocks. Over the entire 120-year history of the Dow Jones average, the 200-day moving-average strategy had its greatest triumph during the boom and crash of the 1920s and early 1930s.
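A minimal sketch of the rule being back-tested here, assuming a pandas Series of daily closing levels (any trading band and transaction-cost details used in the book's test are omitted):

    import pandas as pd

    def ma_timing_returns(close: pd.Series, window: int = 200) -> pd.Series:
        """Daily strategy returns for the in-or-out moving-average rule:
        hold the index while it closes above its trailing average,
        stay in cash (0% return here) otherwise."""
        ma = close.rolling(window).mean()
        exposure = (close > ma).astype(float).shift(1).fillna(0.0)  # act on yesterday's signal
        return close.pct_change() * exposure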

 

pages: 236 words: 77,735

Rigged Money: Beating Wall Street at Its Own Game by Lee Munson

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

affirmative action, asset allocation, backtesting, barriers to entry, Bernie Madoff, Bernie Madoff, Bretton Woods, California gold rush, call centre, Credit Default Swap, diversification, diversified portfolio, estate planning, fiat currency, financial innovation, Flash crash, follow your passion, German hyperinflation, High speed trading, housing crisis, index fund, joint-stock company, moral hazard, passive investing, Ponzi scheme, price discovery process, random walk, risk tolerance, risk-adjusted returns, risk/return, too big to fail, trade route, Vanguard fund

It is 1990, and Cornelius is juggling debt, a mortgage, and still has to pay off his student loans to clown college. If it were 2010 he would be living at home begging you for the money. Only this time around, taxes can be avoided by investing in an IRA. By 1990, the wind is really at his back, because trades are cheaper from discount brokerage firms that didn’t previously exist, he can defer taxes because IRAs existed after 1974, and it’s easier to get the information to back-test investments, even before the Internet was widely used. 1990–2010 Results In 1990, Cornelius calculates that with his $100 he can buy 2.76 shares of Disney, 1.23 shares of Eastman Kodak, 1.18 shares of IBM, 3.45 shares of Coca-Cola, and 19.23 shares of Philip Morris (Table 1.3). Just like you, his broker doesn’t charge him any commissions. He doesn’t have to pay any taxes when he reinvests the dividends because he’s going to use an IRA.

 

pages: 192 words: 75,440

Getting a Job in Hedge Funds: An Inside Look at How Funds Hire by Adam Zoia, Aaron Finkel

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

backtesting, barriers to entry, collateralized debt obligation, commodity trading advisor, Credit Default Swap, credit default swaps / collateralized debt obligations, discounted cash flows, high net worth, interest rate derivative, interest rate swap, Long Term Capital Management, merger arbitrage, offshore financial centre, random walk, Renaissance Technologies, risk-adjusted returns, rolodex, short selling, side project, statistical arbitrage, systematic trading, unpaid internship, value at risk, yield curve, yield management

SAMPLE JOB SEARCHES

To further illustrate what hedge funds look for when hiring various types of risk managers, we thought it would be helpful to include some job specifications from actual searches.

Search 1: Hedge Fund Risk Analyst
Note: This fund has a director of risk management who is looking for an additional resource (risk analyst) to join his team and develop within the firm.

Description
• Responsible for periodic report production, including:
  • Value at risk (VaR) and volatility reporting by portfolio.
  • Back-testing and historical performance measurement.
  • Portfolio segmentation analysis.
  • Factor analysis reporting.
• Position level:
  • Expected return by position.
  • Risk analysis by position.
  • Marginal impact.
• Relative risk/reward performance:
  • Stress testing.
  • Correlation and concentration reporting by name, sector, and industry.
• Responsible for the development and maintenance of a risk management database:
  • Creation of a centralized risk management database repository.
  • Daily data extraction from trading systems (Eze Castle) and accounting systems (VPM).
  • Maintenance of a security master and entity master tables.
  • Sourcing and storage of market pricing information.
  • Data cleaning and standardization.
  • Automation of data feeds from the risk management database to other applications (e.g., RiskMetrics) or models.
• Supporting portfolio analysis:
  • Position and portfolio volatility analysis.
  • Correlation and factor model development.
  • Relative risk-adjusted performance measurement.
  • Historical and prospective analysis.
  • Analysis by position, portfolio, strategy, and so on.
  • Ad hoc analysis of portfolio.
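In this risk-reporting context, back-testing usually means comparing each day's realized P&L with the previous day's VaR forecast and counting exceptions; a minimal sketch with hypothetical inputs:

    import pandas as pd

    def var_exception_report(pnl: pd.Series, var_forecast: pd.Series,
                             confidence: float = 0.99) -> dict:
        """Count days on which the realized loss exceeded the (positive)
        VaR forecast made the previous day, versus the expected count."""
        breaches = pnl < -var_forecast.shift(1)     # loss worse than yesterday's VaR
        return {
            'exceptions': int(breaches.sum()),
            'expected': (1 - confidence) * len(pnl),
        }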

 

pages: 224 words: 13,238

Electronic and Algorithmic Trading Technology: The Complete Guide by Kendall Kim

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

algorithmic trading, automated trading system, backtesting, corporate governance, Credit Default Swap, diversification, en.wikipedia.org, financial innovation, index arbitrage, index fund, interest rate swap, linked data, natural language processing, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, short selling, statistical arbitrage, Steven Levy, transaction costs, yield curve

QSG currently provides three major services for its clients:

• Factor analyst. This stock selection research service leverages over 300 different stock selection indicators maintained and updated for portfolio construction and stock selection.
• Virtual research analyst. Portfolio managers can use this service to support any disciplined stock selection strategy. This research enables customization of candidate identification criteria, quick screening, backtesting, and quality control.
• T-Cost Pro. A Web-based transaction cost management product capable of producing detailed analysis of time-stamped executions on a T+1 basis.

QSG products are designed to help buy-side firms overcome the mediocrity associated with using simple benchmarks such as VWAP to conduct transaction cost analysis. QSG is currently in an ideal position to provide TCA services to buy-side firms and is also working on developing pre-trade analytics to provide additional structure to a growing algorithmic trading market.

 

pages: 317 words: 84,400

Automate This: How Algorithms Came to Rule Our World by Christopher Steiner

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

23andMe, Ada Lovelace, airport security, Al Roth, algorithmic trading, backtesting, big-box store, Black-Scholes formula, call centre, cloud computing, collateralized debt obligation, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, Donald Trump, Douglas Hofstadter, dumpster diving, Flash crash, Gödel, Escher, Bach, High speed trading, Howard Rheingold, index fund, John Maynard Keynes: technological unemployment, knowledge economy, late fees, Mark Zuckerberg, market bubble, medical residency, PageRank, pattern recognition, Paul Graham, prediction markets, quantitative hedge fund, Renaissance Technologies, ride hailing / ride sharing, risk tolerance, Sergey Aleynikov, side project, Silicon Valley, Skype, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, transaction costs, upwardly mobile, Watson beat the top human players on Jeopardy!, Y Combinator

Peterffy needed a way to express all of this in one elegant algorithm that rightly weighted each factor. It was a complicated math problem that he found nearly impossible to solve. He cycled through spurts of dejection and inspiration. After working on the problem for more than a year, Peterffy devised an algorithm of differential equations that cleverly weighted all of the ingredients. He back-tested the algorithm to see if it would have made money in the past, but the data sets for commodities options at that point in history were limited. This was before computers handled such things adeptly and, more important, before the options market had much history. So Mocatta did the only thing it could: it started trading with the algorithm. It made money. The options markets weren’t the giant realms they are today, so the algorithm wasn’t able to harvest billions of dollars, but it gave Mocatta’s traders a big edge.

 

pages: 302 words: 86,614

The Alpha Masters: Unlocking the Genius of the World's Top Hedge Funds by Maneet Ahuja, Myron Scholes, Mohamed El-Erian

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Asian financial crisis, asset allocation, backtesting, Bernie Madoff, Bretton Woods, business process, call centre, collateralized debt obligation, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, Donald Trump, en.wikipedia.org, high net worth, interest rate derivative, Long Term Capital Management, Mark Zuckerberg, merger arbitrage, oil shock, pattern recognition, Ponzi scheme, quantitative easing, quantitative trading / quantitative finance, Renaissance Technologies, risk-adjusted returns, risk/return, rolodex, short selling, Silicon Valley, South Sea Bubble, statistical model, Steve Jobs, systematic trading

In January 2008, Dalio forewarned of the dangers of overreliance on tools like historical models during an interview with the Financial Times. “What is the most common mistake of investors?” he warned. “It is believing that things that worked in the past will continue to work and leveraging up to bet on it. Nowadays, with the computer, it is easy to identify what would have worked and, with financial engineering, to create overoptimized strategies. I believe we are entering a period that will not be consistent with the back-testing, and problems will arise. When that dynamic exists and there’s close to zero interest rate, we knew that the ability of the central bank to ease monetary policy is limited.” When Dalio looks at the world today, he sees it divided into two parts—debtor-developed deficit countries and emerging market creditor countries. He further breaks it down into countries that have independent currency policies, and those whose currency and interest rate policies are linked.

 

pages: 317 words: 106,130

The New Science of Asset Allocation: Risk Management in a Multi-Asset World by Thomas Schneeweis, Garry B. Crowder, Hossein Kazemi

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset allocation, backtesting, Bernie Madoff, Black Swan, capital asset pricing model, collateralized debt obligation, commodity trading advisor, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, diversified portfolio, high net worth, implied volatility, index fund, interest rate swap, invisible hand, market microstructure, merger arbitrage, moral hazard, passive investing, Richard Feynman: Challenger O-ring, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, statistical model, systematic trading, technology bubble, the market place, Thomas Kuhn: the structure of scientific revolutions, transaction costs, value at risk, yield curve

It is rather like the story of the individual asking for help in finding his watch only to be asked: Where did you lose it? His response, that he lost it across the street but is looking under the lamp because the light is better here, directly illustrates the point.

Exhibit 2.1 (Array of Risk Determinants) pairs the risks to be governed (market risk, credit risk, derivatives risk, complexity risk, model risk, transparency risk, counterparty risk, reputational risk, key person risk, operational risk, leverage risk, concentration risk, regulatory risk, liquidity risk, and other risks) with the controls that address them (written policies and guidelines, written due diligence, valuation policies, stress testing, performance measures, acknowledged fiduciaries, adequate systems and procedures, risk limits, model review, backtesting, independent risk oversight, backup and disaster recovery, education and knowledge, clear organization structure, and compliance monitoring).

 

pages: 345 words: 87,745

The Power of Passive Investing: More Wealth With Less Work by Richard A. Ferri

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset allocation, backtesting, Bernie Madoff, Bernie Madoff, capital asset pricing model, cognitive dissonance, correlation coefficient, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, endowment effect, estate planning, Eugene Fama: efficient market hypothesis, implied volatility, index fund, Long Term Capital Management, passive investing, Ponzi scheme, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, too big to fail, transaction costs, Vanguard fund, yield curve

A benchmark index is also known in the industry as a plain-vanilla index and a beta-seeking index. What qualifies as an index has broadened over the years as more ETFs come to market that follow highly customized nonstandard index methods. Today, it seems as though anything can be called an index. An index provider merely creates a mechanical set of rules for security selection, security weighting, and trading, and publishes its back-tested results. For example, an index may be made up of only dividend-paying stocks with those stocks being weighted by dividend yield. Or, an index could include companies located west of the Mississippi that have female CEOs under the age of 50. Such an index doesn’t exist, but it would if a fund company thought it could sell an index fund or ETF to enough people based on that index.

Buy the Benchmarks: Benchmarks are the only type of index that passive investors should care about because they represent market returns and all subsections of a market.
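A minimal sketch of such a mechanical rule, assuming a DataFrame of candidate stocks with a hypothetical dividend_yield column:

    import pandas as pd

    def dividend_yield_weights(stocks: pd.DataFrame) -> pd.Series:
        """Select only dividend payers and weight them by dividend yield."""
        payers = stocks[stocks['dividend_yield'] > 0]
        weights = payers['dividend_yield'] / payers['dividend_yield'].sum()
        return weights   # indexed like `payers`, sums to 1.0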

 

pages: 394 words: 85,252

The New Sell and Sell Short: How to Take Profits, Cut Losses, and Benefit From Price Declines by Alexander Elder

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Atul Gawande, backtesting, Checklist Manifesto, impulse control, paper trading, short selling, systematic trading, The Wealth of Nations by Adam Smith

Still, it would be incorrect to expect a lower stress level. Riding a trend is like riding a bucking horse that tries to shake you off. Holding on to a trend-following trade requires a great deal of patience and self-assurance—a lot of mental work.

Question 7—System vs. Discretionary Trading, Answer 3: Selecting “A greater degree of freedom” is incorrect. System traders who have done a lot of backtesting can have a fairly high level of confidence knowing what profits or losses to expect down the road. If they have the discipline to follow all the signals of their system, they will lower their stress level, insulating themselves to a degree from uncertainty in the markets. What they give up is the freedom to make decisions as market conditions change, creating new threats or opportunities.

Question 8—Technical Toolbox, Answer 3: “Five bullets to a clip” allows you to use only five indicators.

 

pages: 416 words: 118,592

A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing by Burton G. Malkiel

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

accounting loophole / creative accounting, Albert Einstein, asset allocation, backtesting, Bernie Madoff, Bernie Madoff, BRICs, capital asset pricing model, compound rate of return, correlation coefficient, Credit Default Swap, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, Elliott wave, Eugene Fama: efficient market hypothesis, experimental subject, feminist movement, financial innovation, framing effect, hindsight bias, Home mortgage interest deduction, index fund, invisible hand, Long Term Capital Management, loss aversion, margin call, market bubble, mortgage tax deduction, new economy, Own Your Own Home, passive investing, pets.com, Ponzi scheme, price stability, profit maximization, purchasing power parity, RAND corporation, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, short selling, Silicon Valley, South Sea Bubble, The Wisdom of Crowds, transaction costs, Vanguard fund, zero-coupon bond

If there’s nothing investors can exploit in a systematic way, time in and time out, then it’s very hard to say that information is not being properly incorporated into stock prices…. Real money investment strategies don’t produce the results that academic papers say they should. Roll’s final point was underscored for me during an exchange I had with a portfolio manager who used the most modern quantitative methods to run his portfolio on the basis of all the statistical work done by academics and practitioners. He “back-tested” his technique with historical data over a twenty-year period and found that it outperformed the Standard & Poor’s 500-Stock Index by three percentage points per year. But when he started using those quantitative methods with real money, his results were quite different. Over the next twenty-year period, he barely managed to equal the S&P return after expenses. This was an extraordinary performance and ranked him in the top 10 percent of all money managers.
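The gap Malkiel describes between back-tested and live results is easy to reproduce on purely synthetic data. The sketch below is an illustration only: the return series is random noise with a small drift, and the moving-average rule and window grid are arbitrary choices. Searching many parameters in-sample reliably produces something that looks good; evaluated out-of-sample, the "best" rule typically gives back most of that apparent edge.

    # Data-snooping illustration: tune a rule on the first half of a synthetic
    # series, then see how the tuned rule fares on the second half.
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0003, 0.01, size=5000)   # synthetic daily returns
    prices = 100 * np.cumprod(1 + returns)

    in_r, out_r = returns[:2500], returns[2500:]
    in_p, out_p = prices[:2500], prices[2500:]

    def ma_signal(prices, window):
        # 1 when yesterday's price is above its trailing moving average, else 0
        ma = np.convolve(prices, np.ones(window) / window, mode="valid")
        raw = (prices[window - 1:] > ma).astype(float)
        return np.concatenate([np.zeros(window), raw[:-1]])  # lag one day

    def mean_return(prices, returns, window):
        return float(np.mean(ma_signal(prices, window) * returns))

    windows = range(5, 200, 5)
    best = max(windows, key=lambda w: mean_return(in_p, in_r, w))
    print("best in-sample window:", best)
    print("in-sample mean return:", mean_return(in_p, in_r, best))
    print("out-of-sample return :", mean_return(out_p, out_r, best))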

 

pages: 431 words: 132,416

No One Would Listen: A True Financial Thriller by Harry Markopolos

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

backtesting, barriers to entry, Bernie Madoff, call centre, centralized clearinghouse, correlation coefficient, diversified portfolio, Emanuel Derman, Eugene Fama: efficient market hypothesis, forensic accounting, high net worth, index card, Long Term Capital Management, Louis Bachelier, offshore financial centre, Ponzi scheme, price mechanism, quantitative trading / quantitative finance, regulatory arbitrage, Renaissance Technologies, risk-adjusted returns, risk/return, rolodex, Sharpe ratio, statistical arbitrage, too big to fail, transaction costs

I’d had a lot of experience running these types of option-sensitive products. It took several months of playing with numbers to fulfill those parameters. Neil, of course, was a major contributor, and I got a lot of data from various major firms. From Citigroup, for example, I got the complete S&P 500 price return history from 1926 to the day I received it. Then I began putting things in, taking things out, testing and retesting and back-testing to see how each package would perform in various market environments. I did this knowing full well that Bernie hadn’t bothered to do any of this. He just sat down and made it up. It’s considerably easier that way—and you always get the results you want! Eventually I developed a product we named the Rampart Options Statistical Arbitrage. It was a product that would do extremely well in a market environment with low to moderately high volatility.
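One simple way to ask how a package would perform "in various market environments," in the spirit of the passage above, is to bucket history by realized volatility and look at the strategy's average return in each bucket. The sketch below is not Markopolos's actual procedure; both the volatility series and the strategy returns are synthetic, generated only to show the mechanics.

    # Bucket months by a volatility proxy and summarize strategy returns per regime.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n_months = 240
    vol = rng.uniform(0.05, 0.45, size=n_months)                        # annualized vol proxy
    ret = 0.004 - 0.02 * (vol - 0.15) + rng.normal(0, 0.01, n_months)   # made-up strategy returns

    df = pd.DataFrame({"vol": vol, "ret": ret})
    df["regime"] = pd.cut(df["vol"], bins=[0.0, 0.15, 0.30, 1.0],
                          labels=["low vol", "moderate vol", "high vol"])

    print(df.groupby("regime", observed=True)["ret"].agg(["mean", "std", "count"]))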

 

pages: 289 words: 113,211

A Demon of Our Own Design: Markets, Hedge Funds, and the Perils of Financial Innovation by Richard Bookstaber

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

affirmative action, Albert Einstein, asset allocation, backtesting, Black Swan, Black-Scholes formula, butterfly effect, commodity trading advisor, computer age, disintermediation, diversification, double entry bookkeeping, Edward Lorenz: Chaos theory, financial innovation, frictionless, frictionless market, George Akerlof, implied volatility, index arbitrage, Jeff Bezos, London Interbank Offered Rate, Long Term Capital Management, loose coupling, margin call, market bubble, market design, merger arbitrage, Mexican peso crisis / tequila crisis, moral hazard, new economy, Nick Leeson, oil shock, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk/return, Robert Shiller, rolodex, Saturday Night Live, shareholder value, short selling, Silicon Valley, statistical arbitrage, The Market for Lemons, too big to fail, transaction costs, tulip mania, yield curve, zero-coupon bond

In mid-2002 the performance of stat arb strategies began to wane, and the standard methods have not recovered. This is not surprising, given the simplicity of the strategies, the ease of entry, and the proliferation of computer power. My son David had the bad luck to get started in this sort of strategy just as the window of opportunity was closing. The strategy had performed admirably in years of back-tests and in the first months of operation, but then sputtered along doing next to nothing. He closed it down from active trading after six months and then ran it on paper for another year, with no better results. The stat arb concept remains, but in place of the stat arb strategies of the late 1980s and the 1990s is an incarnation called high frequency trading. It performs the same liquidity function, but by monitoring aberrations in supply and demand based on real-time information.

 

Commodity Trading Advisors: Risk, Performance Analysis, and Selection by Greg N. Gregoriou, Vassilios Karavas, François-Serge Lhabitant, Fabrice Douglas Rouah

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Asian financial crisis, asset allocation, backtesting, capital asset pricing model, collateralized debt obligation, commodity trading advisor, compound rate of return, constrained optimization, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, distributed generation, diversification, diversified portfolio, dividend-yielding stocks, high net worth, implied volatility, index arbitrage, index fund, interest rate swap, iterative process, linear programming, London Interbank Offered Rate, Long Term Capital Management, market fundamentalism, merger arbitrage, Mexican peso crisis / tequila crisis, p-value, Ponzi scheme, quantitative trading / quantitative finance, random walk, risk-adjusted returns, risk/return, Sharpe ratio, short selling, stochastic process, systematic trading, technology bubble, transaction costs, value at risk

Eagleeye also advises investment companies on hedging strategies, benchmark construction, index replication strategies, and risk management. He has been involved in the commodity markets since 1994. Prior to joining Premia, he developed programmed trading applications for Morgan Stanley’s Equity Division and proprietary computer models for urban economics. From 1994 to 1998 he worked in the Derivative Strategies Group of Putnam Investments where he researched, back-tested, and implemented relative-value derivatives strategies. Mr. Eagleeye holds a degree in Applied Mathematics from Yale University and an M.B.A. from the University of California at Berkeley. Andrew Green graduated in March 2004 with an MBA degree in Finance from Thunderbird, the American Graduate School of International Management. He is a former Research Assistant at the High Energy Particle Physics Lab of Colorado State University.

 

pages: 444 words: 151,136

Endless Money: The Moral Hazards of Socialism by William Baker, Addison Wiggin

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Andy Kessler, asset allocation, backtesting, bank run, banking crisis, Berlin Wall, Bernie Madoff, Black Swan, Branko Milanovic, Bretton Woods, BRICs, capital asset pricing model, corporate governance, correlation does not imply causation, credit crunch, Credit Default Swap, cuban missile crisis, currency manipulation / currency intervention, debt deflation, Elliott wave, en.wikipedia.org, Fall of the Berlin Wall, feminist movement, fiat currency, floating exchange rates, Fractional reserve banking, full employment, German hyperinflation, housing crisis, income inequality, index fund, inflation targeting, Joseph Schumpeter, laissez-faire capitalism, land reform, liquidity trap, Long Term Capital Management, McMansion, moral hazard, mortgage tax deduction, naked short selling, offshore financial centre, Ponzi scheme, price stability, pushing on a string, quantitative easing, RAND corporation, reserve currency, riskless arbitrage, Ronald Reagan, school vouchers, seigniorage, short selling, Silicon Valley, six sigma, statistical arbitrage, statistical model, Steve Jobs, The Great Moderation, the scientific method, too big to fail, upwardly mobile, War on Poverty, young professional

In 1996, near the end of the best long-term equity return period of several lifetimes spanned by the Ibbotson data, appeared one Jeremy Siegel with his “definitive guide to high-return, low-risk equities,” a book for the masses titled Stocks for the Long Run. Siegel has escaped from the world of academia into the lucrative world of Wall Street through establishing WisdomTree, a provider of ETFs and mutual funds. WisdomTree tweaks the major indices to squeeze out a slightly better return with less volatility—all based upon statistical analysis thoughtfully proven through roughly 40 years of backtesting. The strategy is to exploit a structural flaw that requires index funds to buy more of stocks that go up and sell as underperformers go down; instead, it does the opposite, slightly overweighting holdings of high-dividend-yielding or low-P/E stocks. While the approach appears to be successful and probably improves upon the returns of individuals plunging their IRAs into hot tips heard at the country club tap room, demand for this product may be indicative of the public’s unwavering faith in equities and bonds, and of buying on dips right up until the end.
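A tilt of the kind described, overweighting low-P/E (or high-dividend) names relative to their capitalization weights, can be sketched in a few lines. The figures below are invented and the tilt rule is a generic illustration, not WisdomTree's methodology.

    # Start from capitalization weights, boost cheaper (low-P/E) names, renormalize.
    import pandas as pd

    stocks = pd.DataFrame(
        {
            "ticker": ["AAA", "BBB", "CCC", "DDD"],
            "market_cap": [800.0, 400.0, 250.0, 50.0],  # billions, made up
            "pe_ratio": [32.0, 18.0, 11.0, 9.0],
        }
    )

    cap_w = stocks["market_cap"] / stocks["market_cap"].sum()

    # Scale each cap weight by the inverse of its P/E, then renormalize to sum to one.
    tilt = cap_w / stocks["pe_ratio"]
    tilt_w = tilt / tilt.sum()

    print(pd.DataFrame({"ticker": stocks["ticker"],
                        "cap_weight": cap_w.round(3),
                        "tilted_weight": tilt_w.round(3)}))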

 

Investment: A History by Norton Reamer, Jesse Downing

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, algorithmic trading, asset allocation, backtesting, banking crisis, Berlin Wall, Bernie Madoff, Brownian motion, buttonwood tree, California gold rush, capital asset pricing model, Carmen Reinhart, carried interest, colonial rule, credit crunch, Credit Default Swap, Daniel Kahneman / Amos Tversky, debt deflation, discounted cash flows, diversified portfolio, equity premium, estate planning, Eugene Fama: efficient market hypothesis, Fall of the Berlin Wall, Fellow of the Royal Society, financial innovation, Gordon Gekko, Henri Poincaré, high net worth, index fund, interest rate swap, invention of the telegraph, James Hargreaves, James Watt: steam engine, joint-stock company, Kenneth Rogoff, land tenure, London Interbank Offered Rate, Long Term Capital Management, loss aversion, Louis Bachelier, margin call, means of production, Menlo Park, merger arbitrage, moral hazard, mortgage debt, Network effects, new economy, Nick Leeson, Own Your Own Home, pension reform, Ponzi scheme, price mechanism, principal–agent problem, profit maximization, quantitative easing, RAND corporation, random walk, Renaissance Technologies, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Sand Hill Road, Sharpe ratio, short selling, Silicon Valley, South Sea Bubble, spinning jenny, statistical arbitrage, technology bubble, The Wealth of Nations by Adam Smith, too big to fail, transaction costs, underbanked, Vanguard fund, working poor, yield curve

The computer gave financial practitioners access to a wealth of information and data that was previously rather intractable to synthesize and without which it was virtually impossible to test rigorous models. Further, the very notion of a quantitative fund—a “quant” fund—or a quantitative strategy is simply inconceivable without the computer. Without the aid of the computer, one could not construct and back-test robust models or even generate signals where certain criteria were met. The Hedge Fund Universe Today. As of 2014, the hedge fund industry had approximately $2.5 trillion in assets under management. Additionally, approximately $455 billion was in funds of hedge funds, a diversified investment vehicle designed to add value by selecting and overseeing other hedge fund managers that generate alpha.20 Before discussing these funds of funds in more detail, let us consider the various strategies of individual funds.

 

pages: 566 words: 163,322

The Rise and Fall of Nations: Forces of Change in the Post-Crisis World by Ruchir Sharma

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, Asian financial crisis, backtesting, bank run, banking crisis, Berlin Wall, Bernie Sanders, BRICs, business process, call centre, capital controls, Capital in the Twenty-First Century by Thomas Piketty, Carmen Reinhart, central bank independence, colonial rule, Commodity Super-Cycle, corporate governance, currency peg, dark matter, debt deflation, deglobalization, deindustrialization, demographic dividend, demographic transition, Deng Xiaoping, Doha Development Round, Donald Trump, Edward Glaeser, Elon Musk, eurozone crisis, failed state, Fall of the Berlin Wall, Francis Fukuyama: the end of history, Freestyle chess, Gini coefficient, hiring and firing, income inequality, industrial robot, inflation targeting, Internet of things, Jeff Bezos, job automation, Joseph Schumpeter, Kenneth Rogoff, knowledge economy, Malacca Straits, Mark Zuckerberg, market bubble, megacity, Mexican peso crisis / tequila crisis, mittelstand, moral hazard, North Sea oil, oil rush, oil shale / tar sands, oil shock, pattern recognition, Peter Thiel, pets.com, plutocrats, Plutocrats, Ponzi scheme, price stability, Productivity paradox, purchasing power parity, quantitative easing, Ralph Waldo Emerson, random walk, reserve currency, Ronald Reagan, savings glut, secular stagnation, Silicon Valley, Silicon Valley startup, smart cities, Snapchat, South China Sea, special economic zone, spectrum auction, Steve Jobs, The Wisdom of Crowds, Thomas Malthus, total factor productivity, trade liberalization, trade route, tulip mania, Tyler Cowen: Great Stagnation, unorthodox policies, Washington Consensus, WikiLeaks, women in the workforce, working-age population

The Practical Art These rules emerged from my twenty-five years on the road, trying to understand the forces of change both in theory and in the real world. The reason I developed rules at all was to focus my eyes and those of my team on what matters. When we visit a country, we gather impressions, storylines, facts, and data. While insight is embedded in all observations, we have to know which ones have a reliable history of telling us something about a nation’s future. The rules systematize our thoughts and have been back-tested to determine what has worked and what has not. Eliminating the inessential helps steer the conversation to what is relevant in evaluating whether a country is on the rise or in decline. I have narrowed the voluminous lists of growth factors to a number that is large enough to keep the most significant forces of change on our radar but small enough to be manageable. In theory, growth in an economy can be broken down in a number of ways, but some methods are more useful than others.

 

pages: 537 words: 144,318

The Invisible Hands: Top Hedge Fund Traders on Bubbles, Crashes, and Real Money by Steven Drobny

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, Asian financial crisis, asset allocation, backtesting, banking crisis, Bernie Madoff, Black Swan, Bretton Woods, BRICs, British Empire, business process, capital asset pricing model, capital controls, central bank independence, collateralized debt obligation, Commodity Super-Cycle, commodity trading advisor, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, currency peg, debt deflation, diversification, diversified portfolio, equity premium, fiat currency, follow your passion, full employment, Hyman Minsky, implied volatility, index fund, inflation targeting, interest rate swap, inventory management, invisible hand, London Interbank Offered Rate, Long Term Capital Management, market bubble, market fundamentalism, market microstructure, moral hazard, North Sea oil, open economy, peak oil, pension reform, Ponzi scheme, prediction markets, price discovery process, price stability, private sector deleveraging, profit motive, purchasing power parity, quantitative easing, random walk, reserve currency, risk tolerance, risk-adjusted returns, risk/return, savings glut, Sharpe ratio, short selling, special drawing rights, statistical arbitrage, stochastic volatility, The Great Moderation, too big to fail, transaction costs, unbiased observer, value at risk, Vanguard fund, yield curve

We ran some simulations and discovered that even a tiny 5 percent leveraged allocation to long U.S. government fixed income would, over time, generate more absolute return, better ratios of return-to-worst-drawdown, and less significant absolute worst drawdown levels. We then conducted a simple study that adds leveraged bond positions to a portfolio of 100 percent long domestic U.S. equities. The back-test results, from 1992 to 2009, show that adding 100 percent leverage to buy U.S. Treasuries increased annual yield by almost 5 percent while reducing the worst drawdown by 10 percent. Back-testing other periods, such as 1940 to 1980, yields less conclusive results, but it is clear that more analytical work needs to be done in this area. It is also much too facile to say that leverage is bad on every occasion. Logically, since bonds can be repo’d at the cash rate and have a risk premium over cash, over time the cost of such insurance should actually be a positive to the fund (see box). (See Table 3.1.)
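A stripped-down version of this kind of study can be sketched as follows: compare 100 percent equities against 100 percent equities plus a fully leveraged bond position financed at the cash rate, and report compound growth and worst drawdown for each. The monthly returns below are synthetic placeholders; they do not reproduce the 1992 to 2009 figures quoted in the passage.

    # Compare an all-equity portfolio with the same portfolio plus a 100%
    # leveraged bond overlay financed at the cash rate. All returns are made up.
    import numpy as np

    rng = np.random.default_rng(7)
    months = 216                                # 18 years of monthly data
    eq = rng.normal(0.007, 0.045, months)       # equity returns (synthetic)
    bonds = rng.normal(0.005, 0.015, months)    # Treasury returns (synthetic)
    cash = 0.002                                # monthly financing cost

    equities_only = eq
    with_overlay = eq + (bonds - cash)          # borrow at cash, buy bonds

    def cagr(r):
        return float(np.prod(1 + r) ** (12 / len(r)) - 1)

    def max_drawdown(r):
        wealth = np.cumprod(1 + r)
        peak = np.maximum.accumulate(wealth)
        return float((wealth / peak - 1).min())

    for name, r in [("equities only", equities_only), ("with bond overlay", with_overlay)]:
        print(f"{name:>18}: CAGR {cagr(r):.2%}, worst drawdown {max_drawdown(r):.2%}")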

 

pages: 425 words: 122,223

Capital Ideas: The Improbable Origins of Modern Wall Street by Peter L. Bernstein

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, asset allocation, backtesting, Benoit Mandelbrot, Black-Scholes formula, Brownian motion, capital asset pricing model, debt deflation, diversified portfolio, Eugene Fama: efficient market hypothesis, financial innovation, financial intermediation, full employment, implied volatility, index arbitrage, index fund, interest rate swap, invisible hand, John von Neumann, Joseph Schumpeter, law of one price, linear programming, Louis Bachelier, mandelbrot fractal, martingale, means of production, new economy, New Journalism, profit maximization, Ralph Nader, RAND corporation, random walk, Richard Thaler, risk/return, Robert Shiller, Ronald Reagan, stochastic process, the market place, the scientific method, The Wealth of Nations by Adam Smith, Thorstein Veblen, transaction costs, transfer pricing, zero-coupon bond

Unlike Alexander, he had no computers to help him in this tiresome analysis; the only tools he used to prepare his graphs were a hand-cranked Monroe calculator and some sharp pencils. Although Fama’s efforts to develop profitable trading rules were by no means unsuccessful, the ones he found worked only on the old data, not on the new. He did not realize it at the time, but his frustrating experience was shared by many highly motivated investors seeking ways to beat the market. All too often, backtests give every promise of success but prove disappointing when investors try to apply them in real time. The environment shifts, market responses slow down or speed up, or too many people follow the same strategy and end up competing away one another’s potential profits. Like Alfred Cowles before him, Fama grew curious about why ideas that seem good on paper produce such disappointing results when real money is riding on them.

 

pages: 1,042 words: 266,547

Security Analysis by Benjamin Graham, David Dodd

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

backtesting, barriers to entry, capital asset pricing model, carried interest, collateralized debt obligation, collective bargaining, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, diversified portfolio, fear of failure, financial innovation, full employment, index fund, invisible hand, Joseph Schumpeter, locking in a profit, Long Term Capital Management, low cost carrier, moral hazard, mortgage debt, p-value, risk-adjusted returns, risk/return, secular stagnation, shareholder value, The Chicago School, the market place, the scientific method, The Wealth of Nations by Adam Smith, transaction costs, zero-coupon bond

(pp. 240–241) Today, of course, the activities of creditors’ committees, which play a major role in reorganizations, are closely supervised. 15 Neporent’s testimony is available at judiciary.senate.gov. 16 Bill Miller, “Good Times Are Coming!” Time, March 8, 2005. 17 Kenneth L. Fisher, 100 Minds That Made the Market (New York: Wiley, 2007), p. 61. Fisher goes on to observe of this late-in-life conversion: “Ironically, Graham’s adoption of ‘the efficient market’ was just before computer backtests would poke all kind of holes in that theory.” 18 Kenneth Lee, Trouncing the Dow: A Value-Based Method for Making Huge Profits (New York: McGraw-Hill, 1998), pp. 1–2. 19 In Berkshire Hathaway’s 2000 annual report, Buffett said of his experience in Graham’s class that “a few hours at the feet of the master proved far more valuable to me than had ten years of supposedly original thinking.” 20 Hamlet, Act III, Scene 2. 1 In the 1934 edition we had here a section on investment-quality senior issues obtainable at bargain levels.