backtesting

69 results


Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernie Chan

algorithmic trading, asset allocation, automated trading system, backtesting, Black Swan, Brownian motion, business continuity plan, buy and hold, compound rate of return, Edward Thorp, Elliott wave, endowment effect, fixed income, general-purpose programming language, index fund, John Markoff, Long Term Capital Management, loss aversion, p-value, paper trading, price discovery process, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Ray Kurzweil, Renaissance Technologies, risk-adjusted returns, Sharpe ratio, short selling, statistical arbitrage, statistical model, survivorship bias, systematic trading, transaction costs

But more than just performing due diligence, doing the backtest yourself allows you to experiment with variations of the original strategy, thereby refining and improving the strategy. In this chapter, I will describe the common platforms that can be used for backtesting, various sources of historical data useful for backtesting, a minimal set of standard performance measures that a backtest should provide, common pitfalls to avoid, and simple refinements and improvements to strategies. A few fully developed backtesting examples will also be presented to illustrate the principles and techniques described.

COMMON BACKTESTING PLATFORMS

There are numerous commercial platforms that are designed for backtesting, some of them costing tens of thousands of dollars.

You can see where the maximum drawdown and maximum drawdown duration occurred in this plot of the cumulative returns in Figure 3.1.

[FIGURE 3.1 Maximum drawdown (10.53%) and maximum drawdown duration (497 days) for Example 3.4: cumulative returns plotted over roughly 1,600 trading days.]

COMMON BACKTESTING PITFALLS TO AVOID

Backtesting is the process of creating the historical trades given the historical information available at that time, and then finding out what the subsequent performance of those trades is. This process seems easy given that the trades were made using a computer algorithm in our case, but there are numerous ways in which it can go wrong. Usually, an erroneous backtest would produce a historical performance that is better than what we would have obtained in actual trading. We have already seen how survivorship bias in the data used for backtesting can result in inflated performance. There are, however, other common pitfalls related to how the backtest program is written, or more fundamentally, to how you construct your trading strategy.
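The maximum drawdown and maximum drawdown duration shown in Figure 3.1 above are straightforward to compute from a cumulative-return series. Below is a minimal sketch using pandas; it is not the book's own MATLAB code, and the synthetic series is an assumption for illustration.

```python
import numpy as np
import pandas as pd

def drawdown_stats(cumret: pd.Series):
    """Return (max drawdown, max drawdown duration in periods) for a
    cumulative-return series, where cumret[t] is the total return since inception."""
    high_watermark = cumret.cummax()                    # best cumulative return reached so far
    drawdown = (1 + high_watermark) / (1 + cumret) - 1  # drop from the running high watermark
    max_dd = drawdown.max()

    # Duration: longest run of consecutive periods spent below the high watermark.
    underwater = drawdown > 0
    run, longest = 0, 0
    for flag in underwater:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return max_dd, longest

# Illustrative use on a synthetic daily series (assumed data, not from the book):
rng = np.random.default_rng(0)
daily = pd.Series(rng.normal(0.0005, 0.01, 1500))
cumret = (1 + daily).cumprod() - 1
print(drawdown_stats(cumret))
```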

CONTENTS

Does the Strategy Suffer from Data-Snooping Bias?; Does the Strategy "Fly under the Radar" of Institutional Money Managers?; Summary

CHAPTER 3 Backtesting: Common Backtesting Platforms; Excel; MATLAB; TradeStation; High-End Backtesting Platforms; Finding and Using Historical Databases; Are the Data Split and Dividend Adjusted?; Are the Data Survivorship Bias Free?; Does Your Strategy Use High and Low Data?; Performance Measurement; Common Backtesting Pitfalls to Avoid; Look-Ahead Bias; Data-Snooping Bias; Transaction Costs; Strategy Refinement; Summary

CHAPTER 4 Setting Up Your Business: Business Structure: Retail or Proprietary?; Choosing a Brokerage or Proprietary Trading Firm; Physical Infrastructure; Summary

CHAPTER 5 Execution Systems: What an Automated Trading System Can Do for You; Building a Semiautomated Trading System; Building a Fully Automated Trading System; Minimizing Transaction Costs; Testing Your System by Paper Trading; Why Does Actual Performance Diverge from Expectations?


pages: 571 words: 105,054

Advances in Financial Machine Learning by Marcos Lopez de Prado

algorithmic trading, Amazon Web Services, asset allocation, backtesting, bioinformatics, Brownian motion, business process, Claude Shannon: information theory, cloud computing, complexity theory, correlation coefficient, correlation does not imply causation, diversification, diversified portfolio, en.wikipedia.org, fixed income, Flash crash, G4S, implied volatility, information asymmetry, latency arbitrage, margin call, market fragmentation, market microstructure, martingale, NP-complete, P = NP, p-value, paper trading, pattern recognition, performance metric, profit maximization, quantitative trading / quantitative finance, RAND corporation, random walk, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, Silicon Valley, smart cities, smart meter, statistical arbitrage, statistical model, stochastic process, survivorship bias, transaction costs, traveling salesman

Those authors never tell us about all the tickets that were sold, that is, the millions of simulations it took to find these “lucky” alphas. The purpose of a backtest is to discard bad models, not to improve them. Adjusting your model based on the backtest results is a waste of time . . . and it's dangerous. Invest your time and effort in getting all the components right, as we've discussed elsewhere in the book: structured data, labeling, weighting, ensembles, cross-validation, feature importance, bet sizing, etc. By the time you are backtesting, it is too late. Never backtest until your model has been fully specified. If the backtest fails, start all over. If you do that, the chances of finding a false discovery will drop substantially, but still they will not be zero. 11.5 A Few General Recommendations Backtest overfitting can be defined as selection bias on multiple backtests. Backtest overfitting takes place when a strategy is developed to perform well on a backtest, by monetizing random historical patterns.

See Wu et al. [2004], and visit http://scikit-learn.org/stable/modules/svm.html#scores-and-probabilities. 2 Uncertainty is absolute when all outcomes are equally likely. CHAPTER 11 The Dangers of Backtesting 11.1 Motivation Backtesting is one of the most essential, and yet least understood, techniques in the quant arsenal. A common misunderstanding is to think of backtesting as a research tool. Researching and backtesting is like drinking and driving. Do not research under the influence of a backtest. Most backtests published in journals are flawed, as the result of selection bias on multiple tests (Bailey, Borwein, López de Prado, and Zhu [2014]; Harvey et al. [2016]). A full book could be written listing all the different errors people make while backtesting. I may be the academic author with the largest number of journal articles on backtesting1 and investment performance metrics, and still I do not feel I would have the stamina to compile all the different errors I have seen over the past 20 years.

There are many more, but really, there is no point in listing them, because of the title of the next section. 11.3 Even If Your Backtest Is Flawless, It Is Probably Wrong Congratulations! Your backtest is flawless in the sense that everyone can reproduce your results, and your assumptions are so conservative that not even your boss could object to them. You have paid for every trade more than double what anyone could possibly ask. You have executed hours after the information was known by half the globe, at a ridiculously low volume participation rate. Despite all these egregious costs, your backtest still makes a lot of money. Yet, this flawless backtest is probably wrong. Why? Because only an expert can produce a flawless backtest. Becoming an expert means that you have run tens of thousands of backtests over the years. In conclusion, this is not the first backtest you produce, so we need to account for the possibility that this is a false discovery, a statistical fluke that inevitably comes up after you run multiple tests on the same dataset.
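A minimal simulation makes the multiple-testing point concrete: if you run enough backtests of zero-skill strategies, the best one will look impressive purely by chance, and will then revert to nothing out of sample. The sketch below is illustrative only; the strategy count, horizon, and volatility are assumptions, not figures from the book.

```python
import numpy as np

rng = np.random.default_rng(42)
n_strategies, n_days = 10_000, 1_250          # 10,000 "backtests" over roughly 5 years of daily data

# Zero-skill strategies: daily returns are pure noise, both in-sample and out-of-sample.
in_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))
out_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

def annualized_sharpe(r):
    return r.mean(axis=-1) / r.std(axis=-1) * np.sqrt(252)

best = np.argmax(annualized_sharpe(in_sample))   # select the "winning" backtest
print("best in-sample Sharpe:", round(float(annualized_sharpe(in_sample[best])), 2))
print("same strategy out-of-sample:", round(float(annualized_sharpe(out_sample[best])), 2))
```

The selected strategy typically shows an in-sample Sharpe well above 1.5 despite having no skill at all, which is exactly the false discovery the chapter warns about.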


Learn Algorithmic Trading by Sebastien Donadio

active measures, algorithmic trading, automated trading system, backtesting, Bayesian statistics, buy and hold, buy low sell high, cryptocurrency, DevOps, en.wikipedia.org, fixed income, Flash crash, Guido van Rossum, latency arbitrage, locking in a profit, market fundamentalism, market microstructure, martingale, natural language processing, p-value, paper trading, performance metric, prediction markets, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, Sharpe ratio, short selling, sorting algorithm, statistical arbitrage, statistical model, stochastic process, survivorship bias, transaction costs, type inference, WebSocket, zero-sum game

Choice of IDE – Pycharm or Notebook; Our first algorithmic trading (buy when the price is low, and sell when the price is high); Setting up your workspace; PyCharm 101; Getting the data; Preparing the data – signal; Signal visualization; Backtesting; Summary

Section 2: Trading Signal Generation and Strategies

Deciphering the Markets with Technical Analysis: Designing a trading strategy based on trend- and momentum-based indicators; Support and resistance indicators; Creating trading signals based on fundamental technical analysis; Simple moving average; Implementation of the simple moving average; Exponential moving average; Implementation of the exponential moving average; Absolute price oscillator; Implementation of the absolute price oscillator; Moving average convergence divergence; Implementation of the moving average convergence divergence; Bollinger bands; Implementation of Bollinger bands; Relative strength indicator; Implementation of the relative strength indicator; Standard deviation; Implementing standard derivatives; Momentum; Implementation of momentum; Implementing advanced concepts, such as seasonality, in trading instruments; Summary

Predicting the Markets with Basic Machine Learning: Understanding the terminology and notations; Exploring our financial dataset; Creating predictive models using linear regression methods; Ordinary Least Squares; Regularization and shrinkage – LASSO and Ridge regression; Decision tree regression; Creating predictive models using linear classification methods; K-nearest neighbors; Support vector machine; Logistic regression; Summary

Section 3: Algorithmic Trading Strategies

Classical Trading Strategies Driven by Human Intuition: Creating a trading strategy based on momentum and trend following; Examples of momentum strategies; Python implementation; Dual moving average; Naive trading strategy; Turtle strategy; Creating a trading strategy that works for markets with reversion behavior; Examples of reversion strategies; Creating trading strategies that operate on linearly correlated groups of trading instruments; Summary

Sophisticated Algorithmic Strategies: Creating a trading strategy that adjusts for trading instrument volatility; Adjusting for trading instrument volatility in technical indicators; Adjusting for trading instrument volatility in trading strategies; Volatility adjusted mean reversion trading strategies; Mean reversion strategy using the absolute price oscillator trading signal; Mean reversion strategy that dynamically adjusts for changing volatility; Trend-following strategy using absolute price oscillator trading signal; Trend-following strategy that dynamically adjusts for changing volatility; Creating a trading strategy for economic events; Economic releases; Economic release format; Electronic economic release services; Economic releases in trading; Understanding and implementing basic statistical arbitrage trading strategies; Basics of StatArb; Lead-lag in StatArb; Adjusting portfolio composition and relationships; Infrastructure expenses in StatArb; StatArb trading strategy in Python; StatArb data set; Defining StatArb signal parameters; Defining StatArb trading parameters; Quantifying and computing StatArb trading signals; StatArb execution logic; StatArb signal and strategy performance analysis; Summary

Managing the Risk of Algorithmic Strategies: Differentiating between the types of risk and risk factors; Risk of trading losses; Regulation violation risks; Spoofing; Quote stuffing; Banging the close; Sources of risk; Software implementation risk; DevOps risk; Market risk; Quantifying the risk; The severity of risk violations; Differentiating the measures of risk; Stop-loss; Max drawdown; Position limits; Position holding time; Variance of PnLs; Sharpe ratio; Maximum executions per period; Maximum trade size; Volume limits; Making a risk management algorithm; Realistically adjusting risk; Summary

Section 4: Building a Trading System

Building a Trading System in Python: Understanding the trading system; Gateways; Order book management; Strategy; Order management system; Critical components; Non-critical components; Command and control; Services; Building a trading system in Python; LiquidityProvider class; Strategy class; OrderManager class; MarketSimulator class; TestTradingSimulation class; Designing a limit order book; Summary

Connecting to Trading Exchanges: Making a trading system trade with exchanges; Reviewing the Communication API; Network basics; Trading protocols; FIX communication protocols; Price updates; Orders; Receiving price updates; Initiator code example; Price updates; Sending orders and receiving a market response; Acceptor code example; Market Data request handling; Order; Other trading APIs; Summary

Creating a Backtester in Python: Learning how to build a backtester; In-sample versus out-of-sample data; Paper trading (forward testing); Naive data storage; HDF5 file; Databases; Relational databases; Non-relational databases; Learning how to choose the correct assumptions; For-loop backtest systems; Advantages; Disadvantages; Event-driven backtest systems; Advantages; Disadvantages; Evaluating what the value of time is; Backtesting the dual-moving average trading strategy; For-loop backtester; Event-based backtester; Summary

Section 5: Challenges in Algorithmic Trading

Adapting to Market Participants and Conditions: Strategy performance in backtester versus live markets; Impact of backtester dislocations; Signal validation; Strategy validation; Risk estimates; Risk management system; Choice of strategies for deployment; Expected performance; Causes of simulation dislocations; Slippage; Fees; Operational issues; Market data issues; Latency variance; Place-in-line estimates; Market impact; Tweaking backtesting and strategies in response to live trading; Historical market data accuracy; Measuring and modeling latencies; Improving backtesting sophistication; Adjusting expected performance for backtester bias; Analytics on live trading strategies; Continued profitability in algorithmic trading; Profit decay in algorithmic trading strategies; Signal decay due to lack of optimization; Signal decay due to absence of leading participants; Signal discovery by other participants; Profit decay due to exit of losing participants; Profit decay due to discovery by other participants; Profit decay due to changes in underlying assumptions/relationships; Seasonal profit decay; Adapting to market conditions and changing participants; Building a trading signals dictionary/database; Optimizing trading signals; Optimizing prediction models; Optimizing trading strategy parameters; Researching new trading signals; Expanding to new trading strategies; Portfolio optimization; Uniform risk allocation; PnL-based risk allocation; PnL-sharpe-based risk allocation; Markowitz allocation; Regime Predictive allocation; Incorporating technological advances; Summary

Final words; Other Books You May Enjoy; Leave a review - let other readers know what you think

Preface

In modern times, it is increasingly difficult to gain a significant competitive edge just by being faster than others, which means relying on sophisticated trading signals, predictive models, and strategies.

In this chapter, we will learn how backtesting works, and then we will talk about the assumptions you will need to consider when creating a backtester. Finally, we will provide a backtester example by using a momentum trading strategy. In this chapter, we will cover the following topics:
• Learning how to build a backtester
• Learning how to choose the correct assumptions
• Evaluating what the value of time is
• Backtesting the dual-moving average trading strategy

Learning how to build a backtester

Backtesting is key in the creation of trading strategies. It assesses how profitable a trading strategy is by using historical data. It helps to optimize a strategy by running simulations that generate results showing risk and profitability before risking any capital. If the backtesting returns good results (high profits with reasonable risk), it encourages taking the strategy live.
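As a concrete illustration of the kind of backtest described above, the sketch below runs a dual-moving-average strategy over a price series in a simple for-loop. It is a minimal example using synthetic prices, not the book's own implementation, and the moving-average windows and starting capital are assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic daily close prices (assumption for illustration; the book uses real data).
rng = np.random.default_rng(1)
close = pd.Series(100 + np.cumsum(rng.normal(0.05, 1.0, 500)), name="close")

fast = close.rolling(20).mean()   # fast moving average
slow = close.rolling(50).mean()   # slow moving average

cash, position = 10_000.0, 0      # start flat with $10,000
equity = []
for t in range(len(close)):
    price = close.iloc[t]
    if t >= 50:                                    # wait until both averages exist
        if fast.iloc[t] > slow.iloc[t] and position == 0:
            position = int(cash // price)          # buy when the fast average is above the slow
            cash -= position * price
        elif fast.iloc[t] < slow.iloc[t] and position > 0:
            cash += position * price               # sell when the fast average drops below the slow
            position = 0
    equity.append(cash + position * price)

print("final equity:", round(equity[-1], 2))
```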

If a strategy does not perform well with for-loop backtesters, this means that it will perform even worse on more realistic backtesters. Since it is important to have a backtester that's as realistic as possible, we will learn how an event-driven backtester works in the following section.

Event-driven backtest systems

An event-driven backtester uses almost all the components of the trading system. Most of the time, this type of backtester encompasses all the trading system components (such as the order manager system, the position manager, and the risk manager). Since more components are involved, the backtester is more realistic. The event-driven backtester is close to the trading system we implemented in Chapter 7, Building a Trading System in Python. We left the code of the TradingSimulation.py file empty. In this section, we will see how to fill in that missing code.
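A bare-bones sketch of the event-driven idea follows: every tick becomes an event that flows through a queue and is handled by the same kinds of components a live system would use. The class and method names here are illustrative assumptions, not the book's TradingSimulation code.

```python
from collections import deque

class NaiveStrategy:
    """Buys one unit below a threshold and sells above another (illustrative logic only)."""
    def __init__(self, buy_below=99.0, sell_above=101.0):
        self.buy_below, self.sell_above = buy_below, sell_above
        self.position = 0

    def on_tick(self, price):
        if price < self.buy_below and self.position == 0:
            self.position = 1
            return ("BUY", price)
        if price > self.sell_above and self.position == 1:
            self.position = 0
            return ("SELL", price)
        return None

class OrderManager:
    """Records fills and tracks realized PnL; a real one would also run risk checks."""
    def __init__(self):
        self.pnl = 0.0
    def execute(self, order):
        side, price = order
        self.pnl += -price if side == "BUY" else price

class EventDrivenBacktester:
    """Each tick becomes a MARKET_DATA event; orders generated by the strategy become
    ORDER events handled by the order manager, mimicking a live event loop."""
    def __init__(self, strategy, order_manager):
        self.events = deque()
        self.strategy, self.order_manager = strategy, order_manager

    def run(self, ticks):
        for price in ticks:
            self.events.append(("MARKET_DATA", price))
            while self.events:
                kind, payload = self.events.popleft()
                if kind == "MARKET_DATA":
                    order = self.strategy.on_tick(payload)
                    if order:
                        self.events.append(("ORDER", order))
                elif kind == "ORDER":
                    self.order_manager.execute(payload)
        return self.order_manager.pnl

ticks = [100.0, 98.5, 99.2, 100.4, 101.3, 98.7, 101.8]   # assumed sample prices
print(EventDrivenBacktester(NaiveStrategy(), OrderManager()).run(ticks))
```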


Systematic Trading: A Unique New Method for Designing Trading and Investing Systems by Robert Carver

asset allocation, automated trading system, backtesting, barriers to entry, Black Swan, buy and hold, cognitive bias, commodity trading advisor, Credit Default Swap, diversification, diversified portfolio, easy for humans, difficult for computers, Edward Thorp, Elliott wave, fixed income, implied volatility, index fund, interest rate swap, Long Term Capital Management, margin call, merger arbitrage, Nick Leeson, paper trading, performance metric, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, survivorship bias, systematic trading, technology bubble, transaction costs, Y Combinator, yield curve

Now that you’re using Sharpe ratios (SR) to produce your handcrafted weights, it’s worth reiterating that this is a mild form of in-sample back-test cheating, since you only use the final SR averaged over all data history, which you wouldn’t have at the beginning of the back-test.68 Again this is a fair criticism, but the problem is not that serious. The weights are still not extreme, so the effect on the back-tested SR you get is modest compared to in-sample single period optimisation. However, you should still be cautious of assuming that you’d be able to achieve the back-test SR in live trading. Table 14 shows you roughly how much you should degrade back-tested returns to get realistic achievable Sharpe ratios given a particular fitting technique for a system like the one I describe in chapter fifteen.

68. As with the standard handcrafted weights it’s possible to back-test this by doing the SR adjustment on an expanding window.

Recommended percentage volatility targets

I run a highly diversified futures trading system with around 45 instruments, eight trading rules drawn from four different styles, and 30 trading rule variations. In a 35 year back-test, conservatively fitted with out of sample bootstrapping, it has a Sharpe ratio (SR) of around 1.0 after costs, but the highest volatility target I’d advocate using for it is 37%, rather than the 100% suggested by the Kelly criterion and the back-tested SR.106 Why such a conservative number – am I a wimp? There are several reasons for my caution. Firstly, it’s unlikely a back-tested SR will be achieved in the future. On average realised performance is never as good as in back-tests. This is because it’s very easy to over-fit if you ignore the advice in chapters three and four. Additionally, it’s difficult with “ideas first” testing to avoid using only trading rules that you already know would have worked.

In the rest of this chapter I will focus on how forecasts and changes in price volatility influence trading speed, since these can readily be adjusted and have the biggest effect.

Estimating the number of round trips

You can now calculate the expected costs of trading each instrument, given the units of turnover of the trading system and the standardised costs. But how do you find the turnover? There are three ways to estimate it.

Sophisticated backtest: If you have access to decent back-testing software, or can write your own, then you can include a function that estimates turnover directly.

Simple back-test: More rudimentary back-testing tools can give you an estimate of the average number of instrument blocks traded each year and the average absolute number of blocks you held. You can then calculate turnover as: Average number of blocks traded per year ÷ (2 × average absolute number of blocks held)

Rule of thumb: The third alternative is to use rules of thumb.
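The simple back-test estimate above maps directly onto a few lines of code once you have a position history. The sketch below applies the turnover formula to a daily position series; the synthetic positions and the three-year window are assumptions for illustration.

```python
import numpy as np
import pandas as pd

# Assumed daily position history in instrument blocks (e.g., futures contracts) over three years.
rng = np.random.default_rng(7)
positions = pd.Series(rng.integers(-4, 5, size=252 * 3))

blocks_traded_per_year = positions.diff().abs().sum() / 3    # average blocks traded each year
avg_abs_blocks_held = positions.abs().mean()                 # average absolute holding

# Carver's rule: turnover = blocks traded per year / (2 x average absolute blocks held)
turnover = blocks_traded_per_year / (2 * avg_abs_blocks_held)
print(f"estimated turnover: {turnover:.1f} round trips per year")
```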


pages: 321

Finding Alphas: A Quantitative Approach to Building Trading Strategies by Igor Tulchinsky

algorithmic trading, asset allocation, automated trading system, backtesting, barriers to entry, business cycle, buy and hold, capital asset pricing model, constrained optimization, corporate governance, correlation coefficient, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, financial intermediation, Flash crash, implied volatility, index arbitrage, index fund, intangible asset, iterative process, Long Term Capital Management, loss aversion, market design, market microstructure, merger arbitrage, natural language processing, passive investing, pattern recognition, performance metric, popular capitalism, prediction markets, price discovery process, profit motive, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, selection bias, sentiment analysis, shareholder value, Sharpe ratio, short selling, Silicon Valley, speech recognition, statistical arbitrage, statistical model, stochastic process, survivorship bias, systematic trading, text mining, transaction costs, Vanguard fund, yield curve

An automated system lacks this advantage, so one may want to consider quantitative methods for detecting when the backtesting period is too long, such as splitting the backtest and checking consistency across periods. At the cost of a modest increase in computational complexity, it may be possible to update certain parameters dynamically within the alphas rather than fitting them as part of the search, so that a longer backtesting period remains relevant. Another concern with a longer backtesting period, especially in a large-scale search, is the higher computational load. In an iterative search, an incremental backtest period is a useful trick to take advantage of a longer backtest period without using excessive resources. For example, one starts the first round of the search with a backtest period from date M to date N. In the next round, traversing the finer grid of a smaller search space, the period of M − 0.5 year to N + 0.5 year is used.

• Explanatory models, which analyze what happened historically.

In our working environment, simulation means backtesting: the process of applying a specific model to unbiased historical data under certain market assumptions and risk constraints to test its simulated historical performance. The implicit assumption of backtesting is that if the idea worked in history, then it is more likely to work in the future. A model will generally not be considered unless it has been validated in simulation. Backtesting results are used for preselecting models, comparing different models, and judging the potential value of such alphas. These results can be assessed using various measures, such as returns, Sharpe ratio (return over risk), turnover (trading frequency), and correlation with other alphas. Good backtesting results are not sufficient for a profitable strategy, however; many other factors will affect investment performance.
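The evaluation measures just mentioned are straightforward to compute from a backtest's daily PnL and positions. The sketch below shows one hedged way to do it with pandas; the data, the 50-instrument book, and the comparison alpha are assumed placeholders, not anything from the book.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = 252 * 2

# Assumed backtest outputs: daily PnL (as returns on booksize) and daily position weights.
pnl = pd.Series(rng.normal(0.0004, 0.01, days))
weights = pd.DataFrame(rng.normal(0, 1, (days, 50)))          # 50 instruments
weights = weights.div(weights.abs().sum(axis=1), axis=0)      # normalize to unit gross exposure
other_alpha_pnl = pd.Series(rng.normal(0.0003, 0.01, days))   # an existing alpha to compare against

annual_return = pnl.mean() * 252
sharpe = pnl.mean() / pnl.std() * np.sqrt(252)                # return over risk
turnover = weights.diff().abs().sum(axis=1).mean()            # average daily fraction of book traded
corr = pnl.corr(other_alpha_pnl)                              # correlation with other alphas

print(f"return {annual_return:.1%}  Sharpe {sharpe:.2f}  turnover {turnover:.2f}  corr {corr:.2f}")
```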

The longer the out-of-sample period, the higher the confidence in the model but the less in-sample data available to calibrate the model. The optimal ratio of in-sample to out-of-sample data in model building depends on the model’s complexity.

LOOKING BACK

Backtesting involves looking back in time to evaluate how a forecast or trading strategy would have performed historically. Although backtesting is invaluable (providing a window into both the markets and how the alpha would have performed), there are two important points to remember:

• History does not repeat itself exactly. So while an alpha idea may look great in a backtest, there’s no guarantee (only a level of confidence) it will continue to work in the future. This is because of the perverse power of computation and the ability of creative modelers to miss the forest for the trees.


Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals by David Aronson

Albert Einstein, Andrew Wiles, asset allocation, availability heuristic, backtesting, Black Swan, butter production in bangladesh, buy and hold, capital asset pricing model, cognitive dissonance, compound rate of return, computerized trading, Daniel Kahneman / Amos Tversky, distributed generation, Elliott wave, en.wikipedia.org, feminist movement, hindsight bias, index fund, invention of the telescope, invisible hand, Long Term Capital Management, mental accounting, meta analysis, meta-analysis, p-value, pattern recognition, Paul Samuelson, Ponzi scheme, price anchoring, price stability, quantitative trading / quantitative finance, Ralph Nelson Elliott, random walk, retrograde motion, revision control, risk tolerance, risk-adjusted returns, riskless arbitrage, Robert Shiller, Robert Shiller, Sharpe ratio, short selling, source of truth, statistical model, stocks for the long run, systematic trading, the scientific method, transfer pricing, unbiased observer, yield curve, Yogi Berra

This is so because the performance of a rule can be profoundly affected by factors that have nothing to do with its predictive power.

The Conjoint Effect of Position Bias and Market Trend on Back-Test Performance

In reality, a rule’s back-tested performance is comprised of two independent components. One component is attributable to the rule’s predictive power, if it has any. This is the component of interest. The second, and unwanted, component of performance is the result of two factors that have nothing to do with the rule’s predictive power: (1) the rule’s long/short position bias, and (2) the market’s net trend during the back-test period. This undesirable component of performance can dramatically influence back-test results and make rule evaluation difficult. It can cause a rule with no predictive power to generate a positive average return or it can cause a rule with genuine predictive power to produce a negative average return.

This is illustrated in Figure 6.4. In data mining, the back-test performance statistic plays a very different role than it does in single-rule back testing.

[FIGURE 6.4 Expected performance for single rule back test: cumulative gains over in-sample and future time, with observed performance equal to expected performance plus or minus random variation.]

In data mining, back-tested performance serves as a selection criterion. That is to say, it is used to identify the best rule. The mean returns of all back-tested rules are compared and the one with the highest return is selected. This, too, is a perfectly legitimate use of the back test (observed) performance statistic. It is legitimate in the sense that the rule with the highest back-tested mean return is in fact the rule that is most likely to perform the best in the future.

Simply by knowing their historical position bias, 90 percent long for rule 1 and 60 percent for rule 2, and knowing the market’s average daily return over the back-test period, we would be able to compute the expected returns for rules with no predictive power having these position biases, using the equation for the expected return already shown. The expected return for each rule would then be subtracted from that rule’s observed performance. Therefore, from rule 1’s backtested return, which was 7.31 percent, we would subtract 7.31 percent, giving a result of zero. The result properly reflects rule 1’s lack of predictive power. From rule 2’s return of 1.78 percent, we would subtract a value of 1.78 percent, also giving a value of zero and revealing its lack of predictive power. The bottom line is this: by adjusting the back-tested (observed) performance by the expected return of a rule with no predictive power having an equivalent position bias, the deceptive component of performance can be removed.
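The adjustment described above amounts to a couple of lines of arithmetic: the benchmark return of a no-skill rule is its long/short position bias applied to the market's net return over the back-test period. The formula below is a paraphrase consistent with the worked example, not the book's exact equation, and the assumed market return is backed out from the book's figures, so treat it as an illustrative sketch.

```python
def no_skill_benchmark(frac_long: float, market_return: float) -> float:
    """Expected return of a rule with zero predictive power, given the fraction of time
    it is long (1 - frac_long is the fraction short) and the market's net return over
    the back-test period. Paraphrase of Aronson's position-bias adjustment."""
    return (frac_long - (1.0 - frac_long)) * market_return

# Assumed market return over the back-test period (~9%), roughly implied by the
# 7.31% and 1.78% figures in the text; small non-zero remainders reflect rounding.
market = 0.0914
for name, bias, observed in [("rule 1", 0.90, 0.0731), ("rule 2", 0.60, 0.0178)]:
    benchmark = no_skill_benchmark(bias, market)
    print(f"{name}: observed {observed:.2%}, bias benchmark {benchmark:.2%}, "
          f"adjusted {observed - benchmark:+.2%}")
```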


pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu

algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, buy and hold, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

Published 2012 by John Wiley & Sons, Inc.

Augmented log likelihood, 172
Autocorrelation, of GARCH filtering, 202
Autocorrelation function (ACF), 177, 221; for minute data, 202–203
Automated trading platforms, 235
Automated trading systems, 63–64, 68
Autoregressive conditional duration (ACD) model, 27–28
Autoregressive conditionally heteroskedastic (ARCH) models, 272
Average daily volume (ADV), 34; classification of equity based on, 45
Average estimator, 279
BAC data series, DFA and Hurst methods applied to, 155
Backtest, evaluating results of, 192
Backtest algorithm, 189
Backtest failure ratio, 192
Backtesting, 188–203
Backtest null hypothesis, 202
Backtest results, using GARCH, 204–205
Backtest result tables, 192–195, 199–200
Backtest variant, 195–196
Balanced capital structure, 59
Balanced scorecards (BSCs), 48, 52–53, 69. See also Board balanced scorecards (BSCs); BSC entries; Enterprise BSC; Executive BSC
Ball solution, 391–399
Banach spaces, 349, 350, 351, 386, 387–388, 389
Bandwidth choices, 269
Barany, Ernest, xiii, 119, 327
Bartlett-type kernels, 261, 263
Base learner, 48
Bear Stearns crash, high-frequency data corresponding to, 121, 131–132
Bear Stearns crash week, high-frequency data from, 148–160
Beccar Varela, Maria Pia, xiii, 119, 327
Bernoulli LRT, 191

As a result, the number of tests that can be done on a fixed amount of daily data will shrink substantially when the time horizon increases. To extract more information on the violations, we can implement the backtest algorithm n times, each with a different starting point in the time index (i.e., t = C, C+1, ..., C+n−1). Each of the n backtests will contain the same total number of tests Y, but a different number of violations y1, ..., yn. 7.5.4 n-DAY HORIZON We list the actual violation ratios and the corresponding p-values of the likelihood ratio test (LRT) in Tables 7.4–7.13. All VaR backtesting is based on S&P500 daily close prices from January 1, 1991, to December 31, 2009. A thousand samples are used to calibrate each skewed t distribution. Depending on the length of the time horizon, the total number of backtests ranges between 500 and 1900. For an n-day horizon, we have n groups of results representing different starting points in the time index.
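The likelihood ratio test referenced here is the standard unconditional-coverage (Kupiec) test on the violation count, under the assumption that violations are i.i.d. Bernoulli. A minimal sketch follows; it is a generic implementation, not the chapter's exact code, and the example numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def coverage_lrt_pvalue(num_violations: int, num_tests: int, q: float) -> float:
    """p-value of the unconditional-coverage likelihood ratio test:
    H0: the true violation probability equals the VaR level q."""
    x, n = num_violations, num_tests
    phat = x / n
    # Log-likelihoods under H0 (prob = q) and under the MLE (prob = phat).
    ll0 = x * np.log(q) + (n - x) * np.log(1 - q)
    ll1 = x * np.log(phat) + (n - x) * np.log(1 - phat)
    lr = -2 * (ll0 - ll1)                    # asymptotically chi-squared with 1 df
    return float(1 - chi2.cdf(lr, df=1))

# Example: 59 violations in 1,000 backtests of a 5% VaR (illustrative numbers only).
print(round(coverage_lrt_pvalue(59, 1000, 0.05), 3))
```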

Setting the confidence level of the LRT to be 95% implies that even if the model is perfect, we will still have a 5% chance of observing LRT failures (i.e., type I errors). Since the failure ratio of the backtests, 5/216 ≈ 2.3%, is much lower than 5%, we consider our model performance satisfactory.

TABLE 7.4 Backtest Results: Two Days

q        Violation ratio (Group 1 / Group 2)    p-value (Group 1 / Group 2)
0.05     0.059 / 0.050                          0.076 / 0.975
0.025    0.026 / 0.027                          0.809 / 0.596
0.01     0.010 / 0.010                          0.827 / 0.827
0.005    0.005 / 0.007                          0.864 / 0.277

[TABLES 7.5–7.13 report the analogous violation ratios and LRT p-values for the three- through ten-day and 15-day horizons, with one group of results per starting point in the time index.]

Footnote 12: Since the sum of i.i.d. Bernoulli r.v. is a binomial r.v., another alternative is a standard two-sided binomial test, as described by Casella and Berger (2002).
Footnote 13: We reject the null hypothesis when the p-value is <0.05.

Still, the high–low frequency method does have its limits.


pages: 504 words: 139,137

Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined by Lasse Heje Pedersen

activist fund / activist shareholder / activist investor, algorithmic trading, Andrei Shleifer, asset allocation, backtesting, bank run, banking crisis, barriers to entry, Black-Scholes formula, Brownian motion, business cycle, buy and hold, buy low sell high, capital asset pricing model, commodity trading advisor, conceptual framework, corporate governance, credit crunch, Credit Default Swap, currency peg, David Ricardo: comparative advantage, declining real wages, discounted cash flows, diversification, diversified portfolio, Emanuel Derman, equity premium, Eugene Fama: efficient market hypothesis, fixed income, Flash crash, floating exchange rates, frictionless, frictionless market, Gordon Gekko, implied volatility, index arbitrage, index fund, interest rate swap, late capitalism, law of one price, Long Term Capital Management, margin call, market clearing, market design, market friction, merger arbitrage, money market fund, mortgage debt, Myron Scholes, New Journalism, paper trading, passive investing, price discovery process, price stability, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, Renaissance Technologies, Richard Thaler, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, selection bias, shareholder value, Sharpe ratio, short selling, sovereign wealth fund, statistical arbitrage, statistical model, stocks for the long run, stocks for the long term, survivorship bias, systematic trading, technology bubble, time value of money, total factor productivity, transaction costs, value at risk, Vanguard fund, yield curve, zero-coupon bond

You should always keep in mind that the goal is to find a strategy that works in the future and not to have the best possible backtest. You should strive for a robust process that works even if you adjust it a little.

Adjusting Backtests for Trading Costs

Transaction costs reduce the returns of a trading strategy. A backtest is therefore much more realistic if it accounts for transaction costs. To adjust a backtest, we first need to have an estimate of the expected transaction costs for all securities and trading sizes. You can often obtain such estimates from brokers, or you can estimate the expected transaction costs, as discussed in section 5.3. Given these expected transaction costs, we can adjust the backtest in the following simple way. Each time a trade takes place in our backtest, we compute the expected transaction cost and subtract this cost from the backtest returns. For instance, if we have a monthly portfolio rebalance rule, then each month of the backtest, we do the following:
• Compute the return on the portfolio,
• Compute the new security positions and the implied trades,
• Compute the expected trading costs for every security and add them up, and
• Subtract the total expected trading cost from the portfolio return.
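The monthly procedure in the list above maps directly onto a short loop. Below is a minimal sketch assuming a fixed proportional cost per unit of turnover; the cost level, the toy rebalance rule, and the data are placeholders, not estimates from the book.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
months, n_assets = 120, 20
returns = pd.DataFrame(rng.normal(0.005, 0.04, (months, n_assets)))   # assumed monthly returns
cost_per_turnover = 0.002                                             # 20 bps per unit traded (assumption)

prev_weights = pd.Series(0.0, index=range(n_assets))
gross, net = [], []
for t in range(months):
    port_return = (prev_weights * returns.iloc[t]).sum()              # 1. return on the portfolio
    new_weights = returns.iloc[max(0, t - 11):t + 1].mean()           # toy momentum rebalance rule
    new_weights = new_weights / new_weights.abs().sum()
    trades = (new_weights - prev_weights).abs().sum()                 # 2. implied trades
    cost = trades * cost_per_turnover                                  # 3. expected trading costs
    gross.append(port_return)
    net.append(port_return - cost)                                     # 4. subtract costs from return
    prev_weights = new_weights

print("gross Sharpe:", round(np.mean(gross) / np.std(gross) * np.sqrt(12), 2))
print("net Sharpe:  ", round(np.mean(net) / np.std(net) * np.sqrt(12), 2))
```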

In summary, you could find trading strategies by getting an edge in trading and financing illiquid securities or by trading against demand pressures. 3.3. HOW TO BACKTEST A TRADING STRATEGY Once you have a trading idea, backtesting it can be a powerful tool. To backtest a trading strategy means to simulate how it would have done historically. Of course, historical performance does not necessarily predict future performance, but a backtest is very useful nevertheless. For instance, many trading ideas are simply born bad, and this can be discovered through a backtest. For instance, suppose you have a trading idea, simulate how it would have performed over the past 20 years, and find that the strategy would never have worked in the past. Would you want to know this before you start trading? Surely, yes. Knowing this, you would be unlikely to put the trade on, and not doing so could save you a lot of money. A backtest can teach you about the risk of a strategy, and it can give you ideas about how to improve it.

Furthermore, some version will have worked the best in the past, perhaps just by chance, but, if this is by chance, it probably will not work well in the future, when you are actually trading on it. Or you tried the backtest because you heard someone made money on this trade, but, in this case, the backtest is biased to look good (your friend already told you!), even if this is by pure chance. These unavoidable biases mean that we should discount backtest returns and place more weight on realized returns. Furthermore, we should discount backtests more if they have more inputs and have been tweaked or optimized more. While unavoidable biases should simply affect how we should regard backtests, there are many avoidable biases that experienced traders and researchers fight hard to eliminate. For one, it is important to have an unbiased universe of securities.


Risk Management in Trading by Davis Edwards

asset allocation, asset-backed security, backtesting, Black-Scholes formula, Brownian motion, business cycle, computerized trading, correlation coefficient, Credit Default Swap, discrete time, diversified portfolio, fixed income, implied volatility, intangible asset, interest rate swap, iterative process, John Meriwether, London Whale, Long Term Capital Management, margin call, Myron Scholes, Nick Leeson, p-value, paper trading, pattern recognition, random walk, risk tolerance, risk/return, selection bias, shareholder value, Sharpe ratio, short selling, statistical arbitrage, statistical model, stochastic process, systematic trading, time value of money, transaction costs, value at risk, Wiener process, zero-coupon bond

Given enough attempts, a computer optimizer could identify which values resulted in the most profitable trades. However, this does not indicate that the approach will work in the future.

KEY CONCEPT: RISK AND RETURN

Risky investments are likely to have periods where they are both more profitable and have greater losses than safer investments. As a result, if trading tests only look at profitability, the top-performing investment in any given historical period might be an extremely risky investment that just happened to get lucky.

Historical Backtesting

Historical backtesting is the process of using financial data for prior periods to develop and examine potential trading strategies. Inherent in the backtesting process is a bias to selecting high-risk investments. This is due to the fact that traders will keep testing ideas on the same set of data until they find an idea that looks profitable.

In this kind of analysis, VAR thresholds (usually indicated by solid black lines) are compared to actual results (gray marks). (See Figure 6.9, VAR Backtest.) This data can be analyzed to see if the number of samples outside the VAR estimate matches the confidence level of the VAR calculation. For example, in the NYMEX WTI graphic, 5.58 percent of the days had losses greater than the VAR threshold for the day. Given the 2,200 daily observations in the backtest, this indicates that losses are more common than the 5 percent of samples estimated by VAR.

[FIGURE 6.9 VAR Backtest: NYMEX WTI Crude Oil daily returns from mid-2004 through mid-2013, ranging from roughly −20% to +20%, plotted against the VAR thresholds.]

THE MISUSE OF VAR

A major problem with VAR is that VAR gets used for multiple purposes.
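Whether the 5.58 percent violation rate mentioned above is meaningfully worse than the 5 percent implied by the VAR confidence level can be checked with a simple binomial calculation. The sketch below uses a normal approximation to the binomial; it is an illustrative check, not the book's own analysis.

```python
import math

n = 2200                 # daily observations in the backtest
observed = 0.0558        # fraction of days with losses beyond the VAR threshold
expected = 0.05          # VAR confidence level implies 5% violations

violations = observed * n
se = math.sqrt(expected * (1 - expected) / n)     # std. error of the violation ratio under H0
z = (observed - expected) / se
p = 0.5 * math.erfc(z / math.sqrt(2))             # one-sided p-value for "too many violations"
print(f"{violations:.0f} violations, z = {z:.2f}, one-sided p = {p:.3f}")
```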

OVERVIEW OF BOOK

This book describes how risk management techniques are used by professional traders to reduce risk and maximize profits. The focus of the book is how traders working at hedge funds or on investment bank proprietary trading desks use risk management techniques to improve their profitability and keep themselves in business. However, these techniques can be applied to almost any trading or investment group. This book focuses on six major activities that are part of managing trading businesses.

1. Backtesting and Trade Forensics. Backtesting is a disciplined approach to testing trading ideas before making bets with actual money. Trade forensics is a post‐mortem analysis that identifies how well a trade is tracking pre‐trade predictions and if markets have changed since the trade was initiated.
2. Calculating Profits and Losses. Once a trade has been made, traders have to calculate the daily profits and losses.


pages: 327 words: 91,351

Traders at Work: How the World's Most Successful Traders Make Their Living in the Markets by Tim Bourquin, Nicholas Mango

algorithmic trading, automated trading system, backtesting, buy and hold, commodity trading advisor, Credit Default Swap, Elliott wave, fixed income, Long Term Capital Management, paper trading, pattern recognition, prediction markets, risk tolerance, Small Order Execution System, statistical arbitrage, The Wisdom of Crowds, transaction costs, zero-sum game

With regards to breakout strategies, when a market breaks out of a range—and it could be a five-day range or a five-hundred-day range—you enter your position in the direction of that breakout, and you fine-tune your stop based on your testing. Bourquin: Are you constantly backtesting markets to find where certain trends work best? Or have you backtested trades in the past so that you now know what approach works best in a given market and can apply it going forward? German: Initially, trend following involves a lot of backtesting and thousands and thousands of tests, including millions of iterations on all kinds of different markets, with all kinds of different trend-following strategies and approaches to stops and profit targets. There is an initial period where you are backtesting for ten hours a day, but then you get into a rhythm where you determine what works, based on your backtesting. I did a great deal of backtesting over a couple of years, which solidified the markets that I wanted to trade, the program that I wanted to follow, and what does and does not work for me.

Now, every time we experience a drawdown or every time I want to question or test myself, I will do some backtesting. But at the end of the day, I always go back to my original set of tests, and that is what I have been trading off of for years. Bourquin: Have you had to change your strategy at all? It sounds like you came across trend-following strategies that work for you, and they have ­continued to do well for years. Do you think that’s the case with most ­backtested strategies? German: Whether or not a backtested strategy does well over the long term depends on the way in which that particular strategy was backtested. A backtested strategy might look great on paper but not make any money in real life. That said, I will not trade anything that hasn’t been tested. Maybe I’m just wired this way now, because I have been backtesting strategies for so long, and it’s kind of engrained in me, but when somebody says they are doing this or that in the market, I always ask them, “How do you know your strategy works?

As a trade moves higher, the stop trails behind it, and there are dozens of different trailing stops you can use. That’s what I use to get out of a profitable trade. Bourquin: Can I ask what software you use in your trend following and your backtesting? German: I use a bunch of different software. I don’t want to go through all the different software that I use, but I can say that there are several inexpensive options for doing basic backtesting. It gets tricky, however, when you start to think about the cleanliness of your data and how to fuse different contract months and rollover periods for longer-term backtesting. That said, when I started, TradeStation was the easiest to learn. That’s really all you need to get started. Bourquin: Once you’re into a green trade and the trend continues to rise, do you allow for scaling in more or building up size in an existing trade?


pages: 354 words: 26,550

High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems by Irene Aldridge

algorithmic trading, asset allocation, asset-backed security, automated trading system, backtesting, Black Swan, Brownian motion, business cycle, business process, buy and hold, capital asset pricing model, centralized clearinghouse, collapse of Lehman Brothers, collateralized debt obligation, collective bargaining, computerized trading, diversification, equity premium, fault tolerance, financial intermediation, fixed income, high net worth, implied volatility, index arbitrage, information asymmetry, interest rate swap, inventory management, law of one price, Long Term Capital Management, Louis Bachelier, margin call, market friction, market microstructure, martingale, Myron Scholes, New Journalism, p-value, paper trading, performance metric, profit motive, purchasing power parity, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, Small Order Execution System, statistical arbitrage, statistical model, stochastic process, stochastic volatility, systematic trading, trade route, transaction costs, value at risk, yield curve, zero-sum game

The same code should be used in both, and the back-testing engine should run on tick-by-tick data to reenact past market conditions. The main functionality code from the back-testing modules should then be reused in the live system. To ensure statistically significant inferences, the model “training” period T should be sufficiently large; according to the central limit theorem (CLT), 30 observations is the bare minimum for any statistical significance, and 200 observations is considered a reasonable number. Given strong seasonality in intra-day data (recurrent price and volatility changes at specific times throughout the day), benchmark high-frequency models are backtested on several years of tick-by-tick data. The main difference between the live trading model and the back-test model should be the origin of the quote data; the back-test system includes a historical quote-streaming module that reads historical tick data from archives and feeds it sequentially to the module that has the main functionality.
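One hedged way to realize the design described here is to hide the quote source behind a common interface, so the identical strategy code runs in both the back-test and the live system. The class names, file name, and column layout below are illustrative assumptions, not Aldridge's implementation.

```python
import csv
from typing import Dict, Iterator

class HistoricalQuoteStream:
    """Back-test quote source: reads archived tick data and feeds it sequentially,
    in the same form a live feed handler would deliver quotes."""
    def __init__(self, path: str):
        self.path = path

    def quotes(self) -> Iterator[Dict]:
        with open(self.path, newline="") as f:
            for row in csv.DictReader(f):          # assumed columns: timestamp, bid, ask
                yield {"ts": row["timestamp"],
                       "bid": float(row["bid"]),
                       "ask": float(row["ask"])}

class Strategy:
    """The same strategy object would be reused by the live engine; only the
    quote source differs between back-test and production."""
    def on_quote(self, quote: Dict) -> None:
        mid = 0.5 * (quote["bid"] + quote["ask"])
        # ... signal generation and order logic would go here ...

def run_backtest(tick_file: str) -> None:
    strategy = Strategy()
    for quote in HistoricalQuoteStream(tick_file).quotes():
        strategy.on_quote(quote)

# run_backtest("ticks_2023.csv")   # hypothetical archive file
```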

Aldridge (2009a) develops a quantitative methodology of applying hit and miss ratio analyses to enhance the accuracy of predictions of trading models.

CONCLUSION

Various back-test procedures illuminate different aspects of strategy performance on historical data and are performed before the trading strategy is applied to live capital. Observing parameters of strategy performance in back tests allows high-frequency managers to identify the best strategies to include in their portfolio. The same parameters allow modelers to tweak their strategies to obtain even more robust models. Care should be taken to avoid “overfitting”—using the same data sample in repeated testing of the model.

CHAPTER 16 Implementing High-Frequency Trading Systems

Once high-frequency trading models have been identified, the models are back-tested to ensure their viability. The back-testing software should be a “paper”-based prototype of the eventual live system.

The main difference between the live trading model and the back-test model should be the origin of the quote data; the back-test system includes a historical quote-streaming module that reads historical tick data from archives and feeds it sequentially to the module that has the main functionality. In the live trading system, a different quote module receives real-time tick data originating at the broker-dealers. Except for differences in receiving quotes, both live and back-test systems should be identical; they can be built simultaneously and, ideally, can use the same code samples for core functionality. This chapter reviews the systems implementation process under the assumption that both backtesting and live engines are built and tested in parallel.

MODEL DEVELOPMENT LIFE CYCLE

High-frequency trading systems, by their nature, require rapid, hesitation-free decision making and execution.


pages: 263 words: 75,455

Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors by Wesley R. Gray, Tobias E. Carlisle

activist fund / activist shareholder / activist investor, Albert Einstein, Andrei Shleifer, asset allocation, Atul Gawande, backtesting, beat the dealer, Black Swan, business cycle, butter production in bangladesh, buy and hold, capital asset pricing model, Checklist Manifesto, cognitive bias, compound rate of return, corporate governance, correlation coefficient, credit crunch, Daniel Kahneman / Amos Tversky, discounted cash flows, Edward Thorp, Eugene Fama: efficient market hypothesis, forensic accounting, hindsight bias, intangible asset, Louis Bachelier, p-value, passive investing, performance metric, quantitative hedge fund, random walk, Richard Thaler, risk-adjusted returns, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical model, survivorship bias, systematic trading, The Myth of the Rational Market, time value of money, transaction costs

If we test a strategy that rebalances annually on January 1, we introduce look-ahead bias if we use the preceding year's annual results because they would not have been available on January 1 of this year. Companies often restate financial statements after the fact, and this can introduce another form of look-ahead bias that can have a huge impact on back-tested results. Marcus Bogue and Morris Bailey in their white paper, “The Advantages of Using as First Reported Data with Current Compustat Data for Historical Research,”16 highlight how restated financial statements impact back-test results for a simple price-to-earnings ratio strategy. If the back-test fails to account for the difference in financial statement data as the data are first reported and then as they are subsequently restated, the back-test results vary dramatically. For example, from June 1987 through June 2001, failing to account for look-ahead bias caused by restatement of financial results led to an overstatement of returns achievable with the price-to-earnings ratio strategy by an incredible 28 percent.
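A common defensive coding pattern against this form of look-ahead bias is to time-stamp each fundamental record with the date it actually became available (or, lacking as-first-reported data, to lag fiscal-period figures by a conservative reporting delay) before joining it to prices. The sketch below shows the lag-based variant; the three-month lag, the tickers-free single-company layout, and the column names are assumptions for illustration.

```python
import pandas as pd

# Assumed annual fundamentals keyed by fiscal year-end (not by availability date).
fundamentals = pd.DataFrame({
    "fiscal_year_end": pd.to_datetime(["2019-12-31", "2020-12-31", "2021-12-31"]),
    "earnings_per_share": [4.10, 3.20, 5.05],
})

# Conservative assumption: results become usable three months after fiscal year-end.
fundamentals["available_from"] = fundamentals["fiscal_year_end"] + pd.DateOffset(months=3)

prices = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-04", "2021-06-01", "2022-06-01"]),
    "close": [120.0, 131.0, 140.0],
}).sort_values("date")

# merge_asof picks the latest record whose availability date is on or before each price date,
# so a January 2021 rebalance still sees fiscal-2019 earnings, not the unpublished 2020 figures.
merged = pd.merge_asof(prices, fundamentals.sort_values("available_from"),
                       left_on="date", right_on="available_from")
merged["pe_ratio"] = merged["close"] / merged["earnings_per_share"]
print(merged[["date", "earnings_per_share", "pe_ratio"]])
```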

In this chapter, we discuss our philosophy for conducting investment simulations, and survey the potential pitfalls in interpreting back-test results. We cast a suspicious eye on back-tested, and real, historical results, closely scrutinizing the steps we can take to ensure that results are genuine, and replicable. In Chapter 11, we study the best way to combine the research we've already considered into a cohesive strategy. We examine the Magic Formula and the F_SCORE to see if we can find a better structure for our valuation model. Our process leads us to identify some potential structural issues with the Magic Formula. In Chapter 12, the final chapter, we back-test the quantitative value model we created in Chapter 11. We take a comprehensive look at its raw results and its risk- and opportunity-cost-adjusted performance.

Part Five sets out a variety of signals sent by other market participants. There we look at the impact of buybacks, insider purchases, short selling, and buying and selling from institutional investment managers like activists and other fund managers. Finally, in Part Six we build and test our quantitative value model. We study the best way to combine the research we've considered into a cohesive strategy, and then back-test the resulting quantitative value model. CHAPTER 1 The Paradox of Dumb Money “As they say in poker, ‘If you've been in the game 30 minutes and you don't know who the patsy is, you're the patsy.'” —Warren Buffett (1987) In the summer of 1968, Ed Thorp, a young math professor at the University of California, Irvine (UCI), and author of Beat the Market: A Scientific Stock Market System (1967), accepted an invitation to spend the afternoon playing bridge with Warren Buffett, the not-yet-famous “value” investor.


The Rise of Carry: The Dangerous Consequences of Volatility Suppression and the New Financial Order of Decaying Growth and Recurring Crisis by Tim Lee, Jamie Lee, Kevin Coldiron

active measures, Asian financial crisis, asset-backed security, backtesting, bank run, Bernie Madoff, Bretton Woods, business cycle, capital asset pricing model, Capital in the Twenty-First Century by Thomas Piketty, collapse of Lehman Brothers, collateralized debt obligation, Credit Default Swap, credit default swaps / collateralized debt obligations, cryptocurrency, debt deflation, distributed ledger, diversification, financial intermediation, Flash crash, global reserve currency, implied volatility, income inequality, inflation targeting, labor-force participation, Long Term Capital Management, Lyft, margin call, market bubble, money market fund, money: store of value / unit of account / medium of exchange, moral hazard, negative equity, Network effects, Ponzi scheme, purchasing power parity, quantitative easing, random walk, rent-seeking, reserve currency, rising living standards, risk/return, sharing economy, short selling, sovereign wealth fund, Uber and Lyft, uber lyft, yield curve

Academic researchers started off using statistical analyses of exchange rates and, over time, migrated to examining returns to hypothetical portfolios that might be closer to actual practitioner experience. Since we are focused on the practical consequences of the growth of carry, our study of historical returns aims to be as realistic as possible while recognizing that no backtest can accurately replicate history. We therefore base our analysis on a backtest of a hypothetical currency carry portfolio built using simple but realistic rules that could easily be replicated in practice. Details of how we constructed our backtest are provided in the box (“Currency Carry Backtest”) at the end of this chapter. Our backtest allows us to highlight the main features of the currency carry trade, features that other types of carry can be expected to share. Our currency carry portfolio is constructed to be long-short. We can think of it as a strategy that borrows money in the short currencies and deposits money in the long currencies.
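The authors' exact construction rules are in the box they reference, which is not reproduced here; the following is only a simplified sketch of a generic long-short carry rule, with hypothetical pandas inputs (monthly interest rates and spot returns per currency).
import pandas as pd

def carry_backtest(rates, fx_returns, n=3):
    # Each month: long the n highest-yielding currencies, short the n lowest,
    # earning the rate differential plus next month's spot moves.
    dates = rates.index
    pnl = {}
    for i in range(len(dates) - 1):
        today, nxt = dates[i], dates[i + 1]
        ranked = rates.loc[today].dropna().sort_values()
        longs, shorts = ranked.index[-n:], ranked.index[:n]
        carry = (rates.loc[today, longs].mean() - rates.loc[today, shorts].mean()) / 12
        spot = fx_returns.loc[nxt, longs].mean() - fx_returns.loc[nxt, shorts].mean()
        pnl[nxt] = carry + spot
    return pd.Series(pnl)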

However, those recent drawdowns have been associated with much bigger spikes in stock market volatility and much worse returns.6 One possible explanation for this might be the inclusion of emerging markets in our backtest. We start off with 10 developed market currencies and add emerging market currencies as their data become available, finishing with 29 in our final portfolio. First, by increasing the number of positions in the portfolio, the strategy becomes more diverse, which could lead to fewer drawdowns. Second, since emerging market currencies often have higher yields, the carry strategy will tend to be long a basket of emerging currencies and short a basket of developed market currencies. Emerging currencies have a higher sensitivity to global stock markets, which could lead to more recent drawdowns being associated with big spikes in volatility and drops in the S&P 500. However, the backtest that is restricted only to developed market currencies shows the exact same pattern.

Australia and New Zealand have continued to have consistently higher interest rates than most other developed countries, at least up until 2018.11 In fact, if these two countries are removed from the sample, we see that the spread between highest and lowest rates has been mostly below 3 percent since the crisis and at some points was even below 2 percent. The opportunity for profit in developed country currency carry has shrunk considerably over the last 40 years and is at an all-time low in recent years. We can see this shrinking opportunity set directly by looking at the actual carry a backtest portfolio earns. The carry is simply the interest rate the portfolio earns on its long positions less the implied cost of borrowing on the short positions. In Figure 4.6 we present this for a portfolio that goes long $1 and short $1 for each dollar of capital. In contrast, our standard backtest employs five times this exposure, so the carry would be multiplied by five. But we choose this presentation to keep the units on a magnitude that is familiar and also more directly comparable to the preceding graph on the raw difference between maximum and minimum interest rates.
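As a rough illustration of the carry measure just described (not the authors' code): the carry per dollar of capital is the average rate earned on the longs minus the average implied borrowing rate on the shorts, scaled by the chosen exposure. The function and inputs below are assumptions.
def portfolio_carry(long_rates, short_rates, leverage=1.0):
    # long_rates / short_rates: pandas Series of annualized rates for the
    # currencies held long and short on a given date (assumed inputs).
    return leverage * (long_rates.mean() - short_rates.mean())

# The text's standard backtest uses roughly five times the $1-long/$1-short exposure:
# portfolio_carry(long_rates, short_rates, leverage=5.0)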


pages: 483 words: 141,836

Red-Blooded Risk: The Secret History of Wall Street by Aaron Brown, Eric Kim

activist fund / activist shareholder / activist investor, Albert Einstein, algorithmic trading, Asian financial crisis, Atul Gawande, backtesting, Basel III, Bayesian statistics, beat the dealer, Benoit Mandelbrot, Bernie Madoff, Black Swan, business cycle, capital asset pricing model, central bank independence, Checklist Manifesto, corporate governance, creative destruction, credit crunch, Credit Default Swap, disintermediation, distributed generation, diversification, diversified portfolio, Edward Thorp, Emanuel Derman, Eugene Fama: efficient market hypothesis, experimental subject, financial innovation, illegal immigration, implied volatility, index fund, Long Term Capital Management, loss aversion, margin call, market clearing, market fundamentalism, market microstructure, money market fund, money: store of value / unit of account / medium of exchange, moral hazard, Myron Scholes, natural language processing, open economy, Pierre-Simon Laplace, pre–internet, quantitative trading / quantitative finance, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, road to serfdom, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, special drawing rights, statistical arbitrage, stochastic volatility, stocks for the long run, The Myth of the Rational Market, Thomas Bayes, too big to fail, transaction costs, value at risk, yield curve

Since a constant VaR estimate gives no information at all, and your estimate is worse than that, it’s clear why historical simulation VaR is not a VaR. Nevertheless, historical simulation VaR has become the most popular VaR number for risk reporting and regulatory purposes. Why? Because it’s easy to compute and it’s objective. It never surprises you; in fact, it’s always pretty close to yesterday’s value. The fact that it can’t pass a back-test doesn’t matter to people who never look at back-tests. The fact that it is actively misleading, telling you it’s safe when it’s dangerous and telling you it’s dangerous when it’s safe, doesn’t matter to people who only report and regulate. That only matters to people who manage risk. One fix that might occur to you is to set the VaR halfway between the fifth and sixth worst losses instead of between the 10th and 11th worst.
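For concreteness, a minimal sketch of the historical simulation VaR being criticized here, assuming a window of recent daily P&L; the quantile-based calculation is a generic illustration, not any bank's production method.
import numpy as np

def historical_var(pnl, confidence=0.99):
    # VaR is read off the empirical distribution of recent P&L:
    # the loss level exceeded on roughly (1 - confidence) of days.
    losses = -np.asarray(pnl)                 # positive numbers represent losses
    return np.quantile(losses, confidence)

# daily_pnl = np.random.normal(0, 1e6, 1000)  # stand-in for a real P&L history
# var_99 = historical_var(daily_pnl, 0.99)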

There are many problems with this approach; the biggest theoretical one is that the average prediction, made assuming an average value is exactly right, is exactly wrong. The biggest practical one is that it never back-tests well. Moreover, variance-covariance VaR tells you even less about the tails than other VaRs. However, it was the form in which JPMorgan introduced VaR to the world. Some people still think of it as the definition of VaR. JPMorgan needed it to produce a report within 15 minutes of market close, using 1990-era technology and data systems. Variance-covariance was the only practical option. For Basel II, however, many flavors of VaR were easily available and we should have insisted on one that can pass a back-test. But the most momentous decision, which seemed innocuous at the time, was to promise banks capital relief for spending all the money to create Basel II systems.
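A short sketch of the variance-covariance approach described here, assuming jointly normal returns so that VaR is a z-multiple of portfolio standard deviation; the positions and covariance matrix in the usage comment are made up.
import numpy as np
from scipy.stats import norm

def var_cov_var(weights, cov, confidence=0.99):
    # weights: dollar positions; cov: covariance matrix of asset returns.
    w = np.asarray(weights)
    sigma = np.sqrt(w @ np.asarray(cov) @ w)   # portfolio P&L standard deviation
    return norm.ppf(confidence) * sigma        # parametric VaR at the given confidence

# var = var_cov_var([1e6, -5e5], [[4e-4, 1e-4], [1e-4, 9e-4]], 0.99)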

The statistical properties of market price changes, even in normal times, are erratic, evolving rapidly and unpredictably. We ended up stealing methods from the people who set sports betting point spreads, and adding stuff we made up as we went along. We had to delve deeply into the back office and study methods developed by controllers and auditors. Only after years of intensive, cooperative work did we develop VaRs that could pass rigorous statistical back-tests, and on which we were willing to bet with traders. The only way you got VaR accepted on the trading floor in the early 1990s was to bet; you can imagine what traders think of a risk manager who tells them how to run their billion-dollar portfolios but won’t risk $10,000 of his own money on his analysis. One major result is we learned how little we had understood about the risk in the well-behaved center of the probability distribution on normal trading days.


High-Frequency Trading by David Easley, Marcos López de Prado, Maureen O'Hara

algorithmic trading, asset allocation, backtesting, Brownian motion, capital asset pricing model, computer vision, continuous double auction, dark matter, discrete time, finite state, fixed income, Flash crash, High speed trading, index arbitrage, information asymmetry, interest rate swap, latency arbitrage, margin call, market design, market fragmentation, market fundamentalism, market microstructure, martingale, natural language processing, offshore financial centre, pattern recognition, price discovery process, price discrimination, price stability, quantitative trading / quantitative finance, random walk, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, Tobin tax, transaction costs, two-sided market, yield curve

In the next section, we define information leakage and distinguish between good and bad information leakage. We next show how bad information leakage increases execution shortfall and introduce the BadMax approach for testing whether algos leak information to predators. In the BadMax approach, we pretend to be BadMax, a fictitious predator. As BadMax, we use historical data to back-test whether BadMax can construct profitable predatory strategies. We next describe the historical data we use in our back-tests and estimate the BadMax gross and net Alpha for several BadMax predatory strategies. In our back-tests we assume that BadMax can somehow identify GSET buy and sell algo executions. [Figure 8.1: Good and bad information leakage (hypothetical example); the panel plots price over the AlphaMax execution horizon in basis points, with GoodMax selling XYZ, BadMax buying XYZ, and AlphaMax buying 10K-share tranches from open to close.]

We repeated the calculation for extreme clusters with more than 100 marketable GSET algo executions; in this case, GSET algo prints account for only 14% of all Tape prints. Even large clusters of marketable GSET algo executions, therefore, leave almost no footprint on the Tape. BadMax cannot extract GSET algo Alpha by analysing the Tape. CONCLUSION Based on the predatory strategies we have back-tested so far, GSET algo executions do not leak information that predators can profitably exploit by trading along. Our back-tests show the following.
• Non-marketable and peg mid algo executions are associated with negative Alpha and therefore do not leak information that predators can exploit.
• Marketable algo executions are associated with low positive Alpha but this Alpha does not cover the predators’ roundtrip cost; marketable executions, therefore, also do not leak information that predators can exploit.
• Large clusters of marketable algo executions are associated with high positive Alpha, but, because these clusters are expensive to identify, predators can capture little if any Alpha.

Or a careless algo may increase liquidity impact without bad information leakage, by “walking up the book” (aggressively executing at progressively worse prices). In practice, therefore, it is futile to try to test whether an algo leaks information by comparing algo performance. We developed, instead, the BadMax approach. THE BADMAX APPROACH AND DATA SAMPLE In order to test whether algo executions leak information, we pretend to be a fictitious predator, BadMax. As BadMax, we use historical data on GSET algo executions to back-test the profitability of different information leakage scenarios. In one test, for example, we assume that BadMax can identify marketable GSET algo buy executions and test whether BadMax can use this information to generate profitable buy signals. Our tests assume BadMax has more information on GSET algo executions than any real-world predator is ever likely to have. A real-world predator, for example, will find it extremely difficult to identify GSET algo executions from publicly available data.
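A rough sketch of this kind of trading-along test (not GSET's actual code): observe each marketable buy execution, enter at the price available at the signal time, exit after a fixed horizon, and compare the average gross alpha with an assumed roundtrip cost. The data structures, horizon, and cost figure are hypothetical.
import pandas as pd

def badmax_alpha_bp(executions, trades, horizon="5min", roundtrip_cost_bp=1.0):
    # executions: DataFrame with columns ['time', 'symbol', 'price'] (assumed);
    # trades: per-symbol price Series indexed by time (assumed).
    alphas = []
    for _, ex in executions.iterrows():
        px = trades[ex["symbol"]]
        entry = px.asof(ex["time"])                          # price known at signal time
        exit_ = px.asof(ex["time"] + pd.Timedelta(horizon))  # price after the holding horizon
        alphas.append((exit_ / entry - 1) * 1e4)             # gross alpha in basis points
    gross = pd.Series(alphas).mean()
    return gross, gross - roundtrip_cost_bp                  # gross and net of assumed costs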


pages: 447 words: 104,258

Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues by Alain Ruttiens

algorithmic trading, asset allocation, asset-backed security, backtesting, banking crisis, Black Swan, Black-Scholes formula, Brownian motion, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discounted cash flows, discrete time, diversification, fixed income, implied volatility, interest rate derivative, interest rate swap, margin call, market microstructure, martingale, p-value, passive investing, quantitative trading / quantitative finance, random walk, risk/return, Satyajit Das, Sharpe ratio, short selling, statistical model, stochastic process, stochastic volatility, time value of money, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-coupon bond

If the fund strategy is stable enough over time, the VaR calculation can be more accurate than through the traded instruments, first by avoiding the correlations problem, and second, especially if the composition of the fund portfolio is often modified. Backtesting of the VaR Since VaR methods present several weaknesses – starting with the adequate selection among several VaR methodologies – a VaR estimate needs to be tested a posteriori (“backtested”), to check to what extent it fits with actually observed losses larger than the VaR amount. As said by A. Brown11, “VaR is only as good as its backtest. When someone shows me a VaR number, I don't ask how it is computed. I ask to see the backtest”. The simple way to check it consists in counting the number N of times a portfolio presents losses that exceed the VaR number on a series of n successive VaR calculations. Depending on whether N/n – called the “failure rate” – is higher or lower than 1 − c, where c is the confidence level associated with the VaR measures, the VaR model is under- or overestimating the risk (the equality between N/n and 1 − c being obviously very unlikely).

Depending on whether N/n – called the “failure rate” – is higher or lower than 1 − c, where c is the confidence level associated with the VaR measures, the VaR model is under- or overestimating the risk (the equality between N/n and 1 − c being obviously very unlikely). The most popular backtest is Kupiec’s, also called the “POF” (Proportion of Failures) test. In this test, the losses exceeding the VaR number are considered to be independently and identically distributed, so that N follows a binomial distribution f(N) (a loss either exceeds the VaR number or it does not). For a confidence level c, the corresponding frequency of losses is p = 1 − c (Eq. 14.5), so that the failure rate N/n can be used as an unbiased estimator of p, converging to 1 − c as n grows. f(N) is therefore described by the binomial distribution f(N) = CnN p^N (1 − p)^(n−N), where CnN denotes the number of possible combinations of N failures and (n − N) “non-failures” out of a total of n events.
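A small sketch of Kupiec's POF test as described, assuming n VaR figures, N exceptions, and confidence level c: under the null hypothesis the failure rate equals p = 1 − c, and the likelihood-ratio statistic is asymptotically chi-squared with one degree of freedom.
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n, N, c=0.99):
    p = 1.0 - c                     # expected failure rate under the null
    phat = N / n                    # observed failure rate
    lr = -2 * (np.log((1 - p) ** (n - N) * p ** N)
               - np.log((1 - phat) ** (n - N) * phat ** N))
    return lr, chi2.sf(lr, df=1)    # statistic and p-value

# e.g. 250 daily VaR figures with 6 exceptions at 99% confidence:
# lr, pval = kupiec_pof(250, 6, 0.99)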

., 2006, John Wiley & Sons, Ltd, Chichester.
8 To make a more precise calculation, the width of the bins should be narrower than 0.5%, as used here.
9 The 2510 returns used for the example present a kurtosis of 7.81 and a skewness of −0.10.
10 In the initial basic example, the only risk factor was the price change of the exposure in S&P 500.
11 A. Brown, Private Profits and Socialized Risk – Counterpoint: Capital Inadequacy, Global Association of Risk Professionals, June/July 2008 issue. Cited by O. Nieppola in his master's thesis Backtesting Value-At-Risk Models, Helsinki School of Economics, 2008.
12 See any book of statistics. This ratio is a log ratio of the likelihood that p = , divided by the likelihood that p is not = . To verify this, one must use the values of a χ2 distribution, if p is asymptotically Gaussian, which is the case of a binomial distribution.
13 See Peter F. Christoffersen, Evaluating Interval Forecasts, International Economic Review, vol. 39, no. 4, November 1998.
14 See for example, L.


pages: 464 words: 117,495

The New Trading for a Living: Psychology, Discipline, Trading Tools and Systems, Risk Control, Trade Management by Alexander Elder

additive manufacturing, Atul Gawande, backtesting, Benoit Mandelbrot, buy and hold, buy low sell high, Checklist Manifesto, computerized trading, deliberate practice, diversification, Elliott wave, endowment effect, loss aversion, mandelbrot fractal, margin call, offshore financial centre, paper trading, Ponzi scheme, price stability, psychological pricing, quantitative easing, random walk, risk tolerance, short selling, South Sea Bubble, systematic trading, The Wisdom of Crowds, transaction costs, transfer pricing, traveling salesman, tulip mania, zero-sum game

I wrote this book to help both types of traders. ■ 38. System Testing, Paper Trading, and the Three Key Demands for Every Trade Before trading real money with a system, you need to test it, whether you developed it yourself or bought it from a vendor. This can be done in one of two ways. One is backtesting: apply your system's rules to a stretch of historical data, usually several years' worth. The other is forward-testing: trade small positions with real money. Serious traders begin with backtesting, and if its results look good, switch to forward-testing; if that works well, they gradually increase position size. Looking at printouts of historical results is a nice start, but don't let good numbers lull you into a false sense of security. The profit-loss ratio, the longest winning and losing streaks, the maximum drawdown, and other parameters may appear objective, but past results don't guarantee the system will hold up in the real world of trading.
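As a concrete, deliberately generic example of the first approach, the sketch below applies a simple rule (long when price is above its 50-day moving average) to a series of historical daily closes and reports the strategy returns and maximum drawdown; the rule itself is only an illustration, not the author's.
import pandas as pd

def backtest_ma_rule(close, window=50):
    # Signal computed on yesterday's data, acted on today (avoids acting on unseen data).
    signal = (close > close.rolling(window).mean()).astype(int).shift(1)
    strat_returns = close.pct_change() * signal
    equity = (1 + strat_returns.fillna(0)).cumprod()
    drawdown = equity / equity.cummax() - 1
    return strat_returns, drawdown.min()   # daily returns and maximum drawdown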

You grit your teeth and put on another trade. Another loss. Your drawdown is deepening, and then the system flashes a new signal. Will you put on the next trade? Suddenly, an impressive printout looks like a very thin reed on which to hang the future of your account. There is a cottage industry of programmers who back-test systems for a fee. Some traders, too suspicious to disclose their “sure-fire methods,” spend months learning to use testing software. In the end, only one kind of backtesting prepares you to trade—manual testing. It is slow, time-consuming, and cannot be automated, but it's the only method that comes close to modeling real decision making. It consists of going through historical data one day at a time, scrupulously writing down your trading signals for the day ahead, and then clicking one bar forward and recording new signals and trades for the next day.

The Brain Myth Losers who suffer from the “brain myth” will tell you, “I lost because I didn't know trading secrets.” Many have a fantasy that successful traders have some secret knowledge. That fantasy helps support a lively market in advisory services and ready-made trading systems. A demoralized trader may whip out his credit card to buy access to “trading secrets.” He may send money to a charlatan for a $3,000 “can't miss,” backtested, computerized trading system. When that system self-destructs, he'll pull out his almost-maxed-out credit card again for a “scientific manual” that explains how he can stop losing and begin winning by contemplating the moon, the stars, or even Uranus. At an investment club we used to have in New York, I often ran into a famous financial astrologer. He often asked for free admission because he couldn't afford to pay a modest fee for the meeting and a meal.


pages: 120 words: 39,637

The Little Book That Still Beats the Market by Joel Greenblatt

backtesting, index fund, intangible asset, random walk, survivorship bias, transaction costs

Another question often asked during the last five years is whether the magic formula would work outside the United States. After I wrote the original edition, a number of Wall Street firms did conduct some research into this question (showing that the formula worked in pretty much all foreign markets tested), but we did not conduct any of our own backtests, for two reasons. First, much of the available historical stock market data from outside the United States is seriously flawed, and backtest results would not be reliable. It is helpful to know, however, that most historical studies over the last several decades involving classic (and less problematic to test) value characteristics, such as low price to earnings, low price to book value, and low price to sales have proved equally effective in both the United States and international markets.

Despite all its flaws, the formula certainly seems to have worked well over the long-term (fortunately, I received many nice e-mails about this, too). But over the last 10 years, the results from our test of roughly the largest 1,000 companies in the United States (with market capitalizations over $1 billion) tell an interesting tale. This is one of those rare 10-year periods over which the S&P 500 index was actually down. According to our backtests, on the other hand, the formula managed to earn 255 percent during this same period (more than tripling our money!). That’s a 13.5 percent annualized return during a 10-year period when the S&P index was actually down 0.9 percent per year. [Table A.1: Updated Magic Formula Results Through 2009.] But here’s the thing. Even during this great 10-year period of outperformance by the formula, investors would still have had to suffer through plenty of poor performance.

Long term, then, being uncooperative over the short term is likely a good characteristic. It is not easy to find an effective short-term hedging strategy for our favorite magic formula stocks. As a result, most of the benefits of the formula will continue to go to the much smaller group of investors who can maintain a true long-term horizon. One additional characteristic of the magic formula strategy is not necessarily good or bad. However, based on our updated backtests, it’s probably helpful to keep this one in mind. Over the last 22 years, when comparing the performance of the magic formula portfolios during up months for the S&P 500 and down months for the same index, it turns out that much of the outperformance of our portfolios comes during the up months. On average during this 22-year period, the magic formula portfolios “captured” 95 percent of the S&P 500’s performance during down months and 140 percent of its performance during up months.
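A hedged sketch of the up/down "capture" calculation described here, using average monthly returns; 'strategy' and 'index_ret' are assumed monthly return Series, and averaging is one of several reasonable conventions.
def capture_ratios(strategy, index_ret):
    up, down = index_ret > 0, index_ret < 0
    up_capture = strategy[up].mean() / index_ret[up].mean()       # e.g. about 1.40 in the text
    down_capture = strategy[down].mean() / index_ret[down].mean() # e.g. about 0.95 in the text
    return up_capture, down_capture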


The Permanent Portfolio by Craig Rowland, J. M. Lawson

Andrei Shleifer, asset allocation, automated trading system, backtesting, bank run, banking crisis, Bernie Madoff, buy and hold, capital controls, correlation does not imply causation, Credit Default Swap, diversification, diversified portfolio, en.wikipedia.org, fixed income, Flash crash, high net worth, High speed trading, index fund, inflation targeting, margin call, market bubble, money market fund, new economy, passive investing, Ponzi scheme, prediction markets, risk tolerance, stocks for the long run, survivorship bias, technology bubble, transaction costs, Vanguard fund

The process of looking at historical data to determine how a particular investment strategy may perform going forward is called “backtesting.” It's a good idea, however, to be cautious about drawing any firm conclusions about an investment strategy based solely on backtesting data. A better approach is to use backtesting as a tool to prove or disprove general ideas about a strategy, rather than to mechanically project past performance into the future. Backtesting can tell an investor how well an investment strategy's theories have worked in practice during historical periods. Thus, if something failed to work in the past there may be good reason to believe that it won't work in the future either. Repeating mistakes and leaving to faith something that has failed once before is a bad (and likely expensive) strategy. Backtesting then can be an excellent tool to disprove theories or to provide a tentative validation of the way a theory would have worked in practice in the past.

The portfolio was first conceived in the late 1970s and put into its final form in the mid-1980s. In other words, at this point we have more than 30 years of empirical data to review to determine how the portfolio's theories have performed under real-world conditions. While this chapter provides performance data for the Permanent Portfolio, it also provides a framework for assessing other portfolio strategies as well. Two important questions that backtesting data can help to answer are: How do investing strategies fail and what actually causes them to fail? So how has the Permanent Portfolio done compared to other investment strategies? What sets it apart? Let's take a look. Growth, No Large Losses, and Real Returns: The Holy Trinity According to historical data, the Permanent Portfolio strategy has provided investment returns of 9 to 10 percent a year for the past 40 years.1 The worst loss in any year was around −5 percent back in 1981.

[Back-of-book index excerpt: entries from “Assets” (see also Bonds; Cash; Gold; Stocks) through “Bonds,” including “Backtesting performance,” asset class correlations, banks and financial institutions, gold buying and storage, and the various bond types, risks, and tax considerations discussed in the book.]


The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk by William J. Bernstein

asset allocation, backtesting, buy and hold, capital asset pricing model, commoditize, computer age, correlation coefficient, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, fixed income, index arbitrage, index fund, intangible asset, Long Term Capital Management, p-value, passive investing, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, South Sea Bubble, stocks for the long run, survivorship bias, the rule of 72, the scientific method, time value of money, transaction costs, Vanguard fund, Yogi Berra, zero-coupon bond

For example, a “simpleton’s portfolio” consisting of one quarter each U.S. large stocks, U.S. small stocks, foreign stocks, and U.S. high-quality bonds had a higher return, with much lower risk, than large U.S. stocks alone (represented by the S&P 500 index). The S&P 500, in turn, performed better than 75% of professional money managers over the same period. I was fascinated by the T. Rowe Price data; here was a simple tool for ascertaining historical asset allocation performance—collect data on the prior performance of various asset classes, and “backtest” returns and risks. To my disappointment, I could find no readily available software which accomplished this; I would have to write my own spreadsheet files. I began to buy, beg, steal or borrow data on a wide variety of assets over several different historical epochs and build portfolio models going back as far as 1926. The calculations performed by T. Rowe Price and myself contained an important implicit assumption: that the portfolios were “rebalanced” periodically.
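A minimal sketch of that kind of spreadsheet calculation: backtest a fixed-weight portfolio rebalanced back to its targets every period and report annualized return and volatility. The DataFrame of asset-class returns, the weight list, and the monthly frequency are assumptions.
import numpy as np

def rebalanced_backtest(returns, weights, periods_per_year=12):
    # returns: DataFrame of periodic asset-class returns; weights: list or Series
    # in column order. Multiplying fixed weights each period is equivalent to
    # rebalancing back to target every period.
    port = returns.mul(weights, axis=1).sum(axis=1)
    ann_return = (1 + port).prod() ** (periods_per_year / len(port)) - 1
    ann_vol = port.std() * np.sqrt(periods_per_year)
    return ann_return, ann_vol

# e.g. rebalanced_backtest(monthly_returns, [0.25, 0.25, 0.25, 0.25])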

Next assume that you can tolerate only 10% SD of risk. Clearly, at this level the use of 5-year notes is superior to the other two bond choices; over most of its extent it lies above the other two curves, indicating that for each degree of risk the 5-year notes and stock mix yields more return. Only at low risk levels is the use of T-bills desirable. Portfolio simulations with other databases using both backtesting and another technique called mean-variance analysis also suggest the superiority of short-term bonds. [Figure 4-2: Stock/bond mixes, 1926–1998.] On occasion it may be advantageous to use long-term bonds or T-bills in small amounts. In general, however, you will not go far wrong by sticking to bond maturities of six months to five years for the risk-diluting portion of your portfolio.

And, the reversal in fortunes in the foreign-versus-domestic pony race of the past 20 years may turn out to be equally anomalous. Who knows whether foreign or domestic stocks will have the higher return over the next 20, 30, or even 50 years? However, it seems highly likely that a 50/50 mix will not be too far from the best foreign-versus-domestic allocation. The real purpose of portfolio backtesting, mean-variance analysis, or any other kind of portfolio analysis is not to find the “best” asset mix. Rather, it is to find a portfolio mix that will not be too far off the mark under a wide variety of circumstances. Small Stocks versus Large Stocks It’s important to realize how large and small stocks behave relative to each other. Until recently it was generally accepted that small stocks had higher returns than large stocks.


pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander

asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, fixed income, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, Thomas Bayes, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-sum game

For instance, in Section II.5.5.3 we describe a pairs trade between the volatility index futures that have recently started trading on the CBOE and Eurex. Whenever a regression model is used to develop a trading strategy it is very important to backtest the model. Backtesting – which is termed out-of-sample testing or post-sample prediction by academics – is particularly important when considerable sums of money are placed on the output of a regression model. It is absolutely imperative to put the model through the rigorous testing procedure using a long period of historical data. A simple backtest proceeds as follows:28 1. Estimate the regression model on an historical sample of data on the variables, saving subsequent historical data for testing the model. 2. Make an investment that is determined by the estimated model.
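A hedged sketch of steps 1 and 2, assuming two aligned price series and a simple linear model: fit on an in-sample window, then trade the out-of-sample mispricing. This is a generic illustration rather than the book's worked example.
import numpy as np
import pandas as pd

def simple_regression_backtest(y, x, split=0.7):
    cut = int(len(y) * split)
    beta, alpha = np.polyfit(x.iloc[:cut], y.iloc[:cut], 1)   # in-sample OLS fit
    spread = y.iloc[cut:] - (alpha + beta * x.iloc[cut:])      # out-of-sample mispricing
    position = -np.sign(spread).shift(1)                       # bet on the spread reverting
    pnl = position * spread.diff()                             # post-sample P&L of the rule
    return pnl.dropna()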

What are the characteristics of P&L that we desire? For trading strategies we may look for strategies that produce a high Sharpe ratio, or that maximize some other risk-adjusted performance measure, as outlined in Section I.6.5. But for a pure hedging strategy we should seek a hedge ratio that reduces risk alone, for instance one that minimizes the variance of the P&L. (Footnote 28: This is an example of the general backtests described in Section II.8.5.) I.4.7 SUMMARY AND CONCLUSIONS This chapter has laid the foundations of regression analysis in the simple framework of linear models. We focus on the ordinary least squares (OLS) estimation criterion since this is optimal under fairly general circumstances when we have large samples of data, as is often the case in market risk analysis.

[Back-of-book index and statistical-tables excerpt from Market Risk Analysis, Quantitative Methods in Finance. Among the index entries is “Backtesting 183”, alongside entries covering calculus, matrix algebra (Cholesky decomposition, eigenvalues and eigenvectors), probability distributions, estimation (OLS, maximum likelihood, the EM algorithm), regression and hypothesis testing, numerical methods (binomial lattices, finite differences, simulation), and portfolio mathematics (CAPM, utility functions, the efficient frontier).]


How I Became a Quant: Insights From 25 of Wall Street's Elite by Richard R. Lindsey, Barry Schachter

Albert Einstein, algorithmic trading, Andrew Wiles, Antoine Gombaud: Chevalier de Méré, asset allocation, asset-backed security, backtesting, bank run, banking crisis, Black-Scholes formula, Bonfire of the Vanities, Bretton Woods, Brownian motion, business cycle, business process, butter production in bangladesh, buy and hold, buy low sell high, capital asset pricing model, centre right, collateralized debt obligation, commoditize, computerized markets, corporate governance, correlation coefficient, creative destruction, Credit Default Swap, credit default swaps / collateralized debt obligations, currency manipulation / currency intervention, discounted cash flows, disintermediation, diversification, Donald Knuth, Edward Thorp, Emanuel Derman, en.wikipedia.org, Eugene Fama: efficient market hypothesis, financial innovation, fixed income, full employment, George Akerlof, Gordon Gekko, hiring and firing, implied volatility, index fund, interest rate derivative, interest rate swap, John von Neumann, linear programming, Loma Prieta earthquake, Long Term Capital Management, margin call, market friction, market microstructure, martingale, merger arbitrage, Myron Scholes, Nick Leeson, P = NP, pattern recognition, Paul Samuelson, pensions crisis, performance metric, prediction markets, profit maximization, purchasing power parity, quantitative trading / quantitative finance, QWERTY keyboard, RAND corporation, random walk, Ray Kurzweil, Richard Feynman, Richard Stallman, risk-adjusted returns, risk/return, shareholder value, Sharpe ratio, short selling, Silicon Valley, six sigma, sorting algorithm, statistical arbitrage, statistical model, stem cell, Steven Levy, stochastic process, systematic trading, technology bubble, The Great Moderation, the scientific method, too big to fail, trade route, transaction costs, transfer pricing, value at risk, volatility smile, Wiener process, yield curve, young professional

In a key lucky break, Goldman Sachs decided (at our prodding) to seed with partner capital a very aggressive market-neutral hedge fund utilizing our new investment process.14 Although we had very strong results in general across many products (both long only and absolute return), over the next few years our results for this hedge fund were off the charts. These results were not just great, but much better than our own backtests, a key sign you’re getting at least somewhat lucky, since an iron-clad rule is to expect results worse than your backtest. Don’t get me wrong, I think we created some great models, but getting a lucky draw on top of a great model is a pretty wonderful thing to happen early in your career. (As they say in the novel Dune, “Beginnings are delicate times.”)15 A few years down the road, we were managing $7 billion, about $6 billion in long-only assets, and close to a billion in hedge fund assets, all with strong-to-stellar results.

My work during that next year was incredibly rewarding. The focus of the fund was to create automated trading strategies and apply them to global futures markets, including commodities, equities, and fixed income. As long as it was a valid futures market, we traded it, regardless of whether the prices represented Eurodollar contracts or Red Azuki Beans. I spent a lot of time writing very complex code to create and backtest different types of trading strategies using daily futures data back to the 1940s. Oodles of data, challenging analyses, and lots of programming— this is exactly what I had been doing in physics for a dozen years, and I was groovin’. But alas, I quickly came to realize that finance is not rocket science. After all, I was a rocket scientist and I knew the difference. This is because in physics statistical distributions arise from fundamental physical processes that can usually be modeled, and therefore future distributions can be predicted with amazing accuracy.

As a two-person operation, my first task was simple—recreate, from scratch, everything that the previous 30-person fund had done, but in a way that could be wholly automated and required no additional staff. Over that next year I coded day and night, and even purchased a $20,000 Sun SparcStation laptop (that’s right, a laptop) so I could code during my two-hour-per-day train commute. I was in heaven. I created my own futures backtesting language, a byte-code compiler, and an automated web-based trading system. With these tools I developed many new styles of trend following that had never been done before at the previous fund. Each night the system would upload the latest closing prices for each futures market, rerun my simulation routines, generate signals, and auto-fax trades to our London brokers for execution the next morning.


pages: 467 words: 154,960

Trend Following: How Great Traders Make Millions in Up or Down Markets by Michael W. Covel

Albert Einstein, Atul Gawande, backtesting, beat the dealer, Bernie Madoff, Black Swan, buy and hold, buy low sell high, capital asset pricing model, Clayton Christensen, commodity trading advisor, computerized trading, correlation coefficient, Daniel Kahneman / Amos Tversky, delayed gratification, deliberate practice, diversification, diversified portfolio, Edward Thorp, Elliott wave, Emanuel Derman, Eugene Fama: efficient market hypothesis, Everything should be made as simple as possible, fiat currency, fixed income, game design, hindsight bias, housing crisis, index fund, Isaac Newton, John Meriwether, John Nash: game theory, linear programming, Long Term Capital Management, mandelbrot fractal, margin call, market bubble, market fundamentalism, market microstructure, mental accounting, money market fund, Myron Scholes, Nash equilibrium, new economy, Nick Leeson, Ponzi scheme, prediction markets, random walk, Renaissance Technologies, Richard Feynman, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, shareholder value, Sharpe ratio, short selling, South Sea Bubble, Stephen Hawking, survivorship bias, systematic trading, the scientific method, Thomas L Friedman, too big to fail, transaction costs, upwardly mobile, value at risk, Vanguard fund, William of Occam, zero-sum game

Trading System Example from Mechanica “Part of back-testing is to determine position sizing and risk management strategies that fit within your drawdown tolerance envelope.” —Ed Seykota1 In this appendix, Bob Spear shows how a trader might construct a simple, mechanical trend following system on Trading Recipes Portfolio Engineering Software. His newest software, surpassing Trading Recipes, is called Mechanica (www.mechanicasoftware.com). For this example we start with a broad look at the system’s trading ideas, which echo many of the ideas discussed in this book. We construct a hypothetical portfolio and run a backtest up to a certain point in time. Then, we examine in detail how the software enters, sizes, and manages a trade. Afterwards, we run our backtest to the end of our data and examine the results without and with money management.
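The Mechanica code itself is not reproduced here; the following is only a rough Python sketch in the same spirit: a channel-breakout trend rule with volatility-scaled position sizing over daily bars. The column names, lookbacks, and risk fraction are illustrative assumptions.
import pandas as pd

def breakout_system(bars, entry_lookback=50, risk_per_trade=0.01, equity=1_000_000):
    close = bars["close"]
    highest = close.rolling(entry_lookback).max().shift(1)             # prior channel high
    atr = (bars["high"] - bars["low"]).rolling(20).mean().shift(1)     # crude volatility proxy
    long = (close > highest).astype(int)                               # in the market after a breakout
    size = (equity * risk_per_trade / atr).where(long == 1, 0)         # smaller positions when volatile
    pnl = (size.shift(1) * close.diff()).fillna(0)                     # mark-to-market P&L
    return pnl.cumsum()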

Risk management is to direct and control the possibility of loss. The activities of a risk manager are to measure risk and to increase and decrease risk by buying and selling stock. In general, good risk management combines several elements:
1. Clarifying trading and risk management systems until they can translate to computer code.
2. Inclusion of diversification and instrument selection into the back-testing process.
3. Back-testing and stress-testing to determine trading parameter sensitivity and optimal values.
4. Clear agreement of all parties on expectation of volatility and return.
5. Maintenance of supportive relationships between investors and managers.
6. Above all, stick to the system.
7. See #6, above.
As you navigate this chapter, keep in mind Seykota’s wisdom. Five Questions for a Trading System: Answer the following five questions and you have the core components of a trend following trading system and you are on your way to having your edge: 1.

He was one of the only people at the time who was doing simulation of any kind. He was generous with his ideas, making a point to share what he knew; it delighted him to get others to try systems. He inspired a great many people and spawned a whole generation of traders, providing courage and a road map. Ed Seykota97 We started our database using punch cards in 1968, and we collected commodity price data back to July 1959. We back-tested the 5 and 20 and the weekly rules for Dick. I think the weekly method was the best thing that anyone had ever done. Of all Dick’s contributions, the weekly rules helped identify the trend and helped you act on it. Dick is one of those people who today likes to beat the computer—only he did it by hand. He enjoyed the academics of the process, the excitement of exploring new ideas and running the numbers.


pages: 156 words: 15,746

Personal Finance with Python by Max Humber

asset allocation, backtesting, bitcoin, cryptocurrency, en.wikipedia.org, Ethereum, passive income, web application

prices = historical_prices.loc['2016-01-04']
prices
AMZN    636.99
CSCO     26.41
GE       30.71
Name: 2016-01-04 00:00:00, dtype: float64
Portfolio
Because I know you just want to see the full thing in action, here it is:
portfolio = instantiate_portfolio(targets, 100000.00)
prices = historical_prices.loc['2017-01-01']
update_prices(portfolio, prices)
order = get_order(portfolio)
simulate_process_order(portfolio, order)
portfolio.market_value.sum()
100000.0
This will be our starting portfolio:
print(portfolio)
          date   price  target  allocation  shares  market_value
AMZN  2017-01-01  749.87     0.4    0.397431      53       39743.1
CSCO  2017-01-01   30.22     0.3    0.299782     992       29978.2
GE    2017-01-01    31.6     0.3    0.299884     949       29988.4
CASH  2017-01-01       1       0   0.0029025  290.25        290.25
Rebalance
To test our rebalancing logic, we’ll back-test across 2017 and execute orders on a quarterly-end frequency by using the Q offset alias from pandas.
dates = pd.date_range('2017-01-01', '2017-12-31', freq="Q").tolist()
for d in dates:
    prices = historical_prices.loc[d]
    update_prices(portfolio, prices)
    order = get_order(portfolio)
    print(f'{d}:\n{order}')
    simulate_process_order(portfolio, order)
portfolio.market_value.sum()
2017-03-31 00:00:00:
AMZN     -4
CSCO    -24
GE      149
dtype: object
2017-06-30 00:00:00:
AMZN     -5
CSCO     63
GE       97
dtype: object
2017-09-30 00:00:00:
AMZN      0
CSCO    -83
GE      124
dtype: object
2017-12-31 00:00:00:
AMZN     -7
CSCO    -79
GE      589
dtype: object
111030.14
After four rebalancing moves, we can verify that our portfolio will follow and maintain target allocations quite closely.

print(portfolio)
          date    price  target  allocation   shares  market_value
AMZN  2017-12-31  1169.47     0.4    0.389718       37       43270.4
CSCO  2017-12-31     38.3     0.3    0.299763      869       33282.7
GE    2017-12-31    17.45     0.3     0.29987     1908       33294.6
CASH  2017-12-31        1       0   0.0106498  1182.45       1182.45
Conclusion
In this chapter, you learned how to build a portfolio in pandas, update values in a DataFrame, generate buy and sell orders that aim to hold target allocations in balance, retrieve stock quotes from Alpha Vantage, and simulate back-testing. If you want to actually put these pieces to work, you will need to set up an account with an online brokerage and manually exercise buy and sell orders on its platform. The good news is that if you think that rebalancing is an appropriate investment strategy for you, you don’t actually have to do it that often. If you adhere to monthly or quarterly rebalancing, you’ll be money!
Footnotes
1 https://www.moneyunder30.com/rebalance-your-portfolio
2 https://finance.yahoo.com/quote/AMZN
3 https://finance.yahoo.com/quote/CSCO
4 https://finance.yahoo.com/quote/GE
5 https://www.alphavantage.co/
6 http://vita.had.co.nz/papers/tidy-data.pdf


pages: 502 words: 107,657

Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel

Albert Einstein, algorithmic trading, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, backtesting, Black Swan, book scanning, bounce rate, business intelligence, business process, butter production in bangladesh, call centre, Charles Lindbergh, commoditize, computer age, conceptual framework, correlation does not imply causation, crowdsourcing, dark matter, data is the new oil, en.wikipedia.org, Erik Brynjolfsson, Everything should be made as simple as possible, experimental subject, Google Glasses, happiness index / gross national happiness, job satisfaction, Johann Wolfgang von Goethe, lifelogging, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, mass immigration, Moneyball by Michael Lewis explains big data, Nate Silver, natural language processing, Netflix Prize, Network effects, Norbert Wiener, personalized medicine, placebo effect, prediction markets, Ray Kurzweil, recommendation engine, risk-adjusted returns, Ronald Coase, Search for Extraterrestrial Intelligence, self-driving car, sentiment analysis, Shai Danziger, software as a service, speech recognition, statistical model, Steven Levy, text mining, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, Thomas Bayes, Thomas Davenport, Turing test, Watson beat the top human players on Jeopardy!, X Prize, Yogi Berra, zero-sum game

But he was not granted access to the predictive model. With secrecy reigning supreme, the protocol for this type of audit dictated that John receive only the numerical results, along with a few adjectives that described its design: new, unique, powerful! With meager evidence, John sought to prove a crime he couldn’t even be sure had been committed. Before each launch, organizations establish confidence in PA by “predicting the past” (aka backtesting). The predictive model must prove itself on historical data before its deployment. Conducting a kind of simulated prediction, the model is evaluated across data from last week, last month, or last year. Feeding on input that could only have been known at a given time, the model spits out its prediction, which is then matched against what we now know took place thereafter. Would the S&P 500 go down or up on March 21, 1991?

On a hunch, he hand-crafted a method with the same type of bug, and showed that its predictions closely matched those of the trading system. A predictive model will sink faster than the Titanic if you don’t seal all its “time leaks” before launch. But this kind of “leak from the future” is common, if mundane. Although core to the very integrity of prediction, it’s an easy mistake to make, given that each model is backtested over historical data for which prediction is not, strictly speaking, possible. The relative future is always readily available in the testing data, easy to inadvertently incorporate into the very model trying to predict it. Such temporal leaks achieve status as a commonly known gotcha among PA practitioners. If this were an episode of Star Trek, our beloved, hypomanic engineer Scotty would be screaming, “Captain, we’re losing our temporal integrity!”
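As a small, hypothetical illustration of how such a time leak creeps into a backtest (the price series and signals below are invented for the example):

import numpy as np
import pandas as pd

# Synthetic daily closes, purely for illustration.
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))
ret = close.pct_change()

# LEAKY: the "signal" for day t is built from day t's own return,
# information that could not have been known before the close it predicts.
leaky = np.sign(ret)

# SEALED: lag the information set one day, so the signal for day t
# uses only data available through day t-1.
sealed = np.sign(ret.shift(1))

# The leaky rule looks spectacular in the backtest and is worthless live.
print((leaky * ret).mean(), (sealed * ret).mean())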

He had also taken on predicting the species of a bat from its echolocation signals (the chirps bats make for their radar). And in the commercial world, John’s pregrad positions had dropped him right into the thick of machine learning systems that steer for aerospace flight and that detect cooling pipe cracks in nuclear reactors, not to mention projects for Delta Financial looking over the shoulders of other black box quants. And now John’s latest creation absolutely itched to be deployed. Backtesting against historical data, all indications whispered confident promises for what this thing could do once set in motion. As John puts it, “A slight pattern emerged from the overwhelming noise; we had stumbled across a persistent pricing inefficiency in a corner of the market, a small edge over the average investor, which appeared repeatable.” Inefficiencies are what traders live for. A perfectly efficient market can’t be played, but if you can identify the right imperfection, it’s payday.


pages: 1,088 words: 228,743

Expected Returns: An Investor's Guide to Harvesting Market Rewards by Antti Ilmanen

Andrei Shleifer, asset allocation, asset-backed security, availability heuristic, backtesting, balance sheet recession, bank run, banking crisis, barriers to entry, Bernie Madoff, Black Swan, Bretton Woods, business cycle, buy and hold, buy low sell high, capital asset pricing model, capital controls, Carmen Reinhart, central bank independence, collateralized debt obligation, commoditize, commodity trading advisor, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, debt deflation, deglobalization, delta neutral, demand response, discounted cash flows, disintermediation, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, fiat currency, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, framing effect, frictionless, frictionless market, G4S, George Akerlof, global reserve currency, Google Earth, high net worth, hindsight bias, Hyman Minsky, implied volatility, income inequality, incomplete markets, index fund, inflation targeting, information asymmetry, interest rate swap, invisible hand, Kenneth Rogoff, laissez-faire capitalism, law of one price, London Interbank Offered Rate, Long Term Capital Management, loss aversion, margin call, market bubble, market clearing, market friction, market fundamentalism, market microstructure, mental accounting, merger arbitrage, mittelstand, moral hazard, Myron Scholes, negative equity, New Journalism, oil shock, p-value, passive investing, Paul Samuelson, performance metric, Ponzi scheme, prediction markets, price anchoring, price stability, principal–agent problem, private sector deleveraging, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Robert Shiller, Robert Shiller, savings glut, selection bias, Sharpe ratio, short selling, sovereign wealth fund, statistical arbitrage, statistical model, stochastic volatility, stocks for the long run, survivorship bias, systematic trading, The Great Moderation, The Myth of the Rational Market, too big to fail, transaction costs, tulip mania, value at risk, volatility arbitrage, volatility smile, working-age population, Y2K, yield curve, zero-coupon bond, zero-sum game

January excess return has been negative in only four years since 1976: in 1992, 1995, 1998, and 2008—each of which saw much worse drawdowns from the carry strategy later in the year. This outcome may be a coincidence but it echoes the finding that January equity market performance has some predictive ability for rest-of-the-year returns (further details in Chapter 25). Incorporating these two seasonal biases would easily improve backtested FX carry strategy performance—for example, doubling position sizes for January and halving sizes for the rest of the year if the January return had been negative would have boosted the Sharpe ratio since 1983 from 0.6 to 0.8. Any such backtest improvements are subject to data-mining bias, so some skepticism is warranted. Conditioners (regime indicators) As we have seen, ex ante opportunities and seasonal effects have some ability to predict carry returns. However, the jackpot question of carry “timing” relates to carry crashes.
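Ilmanen does not spell out the implementation, but the conditional sizing rule he describes could be sketched roughly as follows, assuming a monthly return series for the unscaled carry strategy (the function name and structure are illustrative only):

import pandas as pd

def seasonal_scaling(carry):
    # carry: pd.Series of monthly carry-strategy returns with a DatetimeIndex.
    scale = pd.Series(1.0, index=carry.index)
    # Double the position every January.
    scale[carry.index.month == 1] = 2.0
    # If a year's January return was negative, halve positions for the rest of that year.
    jan = carry[carry.index.month == 1]
    for year, r in zip(jan.index.year, jan.values):
        if r < 0:
            rest_of_year = (carry.index.year == year) & (carry.index.month > 1)
            scale[rest_of_year] = 0.5
    return carry * scale

As the text warns, any apparent improvement from a rule like this is itself a candidate for data-mining bias.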

Periods of high realized returns and rising asset valuations—think stock markets in the 1990s—are often associated with falling forward-looking returns. • For specific funds and strategies, the historical performance data that investors get to see are often upward biased. This bias is due to the voluntary nature of performance reporting and survivorship bias (so that poor performers are left out of databases or are not marketed by the fund manager). A similar caveat applies to simulated “paper” portfolios because backtests may be overfitted and trading costs ignored or understated. These concerns notwithstanding, this book presents extensive evidence of long-run realized returns, when possible covering 50-to-100-year histories. Several main findings are familiar to most readers:• Stock markets have outperformed fixed income markets during the past century in all countries studied. The compound average real return for global equities between 1900 and 2009 is 5.4%, which is 3.7% (4.4%) higher than that of long-term government bonds (short-dated Treasury bills).

Pension funds match their liabilities best by buying long-dated real or nominal bonds.

4.6 BIASED RETURNS

For many asset classes, returns may be positively or negatively biased over a given historical sample. For active asset managers with voluntary reporting, published returns are almost certainly upward biased. Section 11.4 reviews a host of selection biases such as survivorship bias and backfill bias in the context of hedge fund return databases, but similar caveats apply to the reported performance of other managers. Backtested results of active strategies also suffer from overfitting and data-mining biases, which further overstate published returns. Whenever we observe exceptionally attractive historical returns, it is healthy to adopt a skeptical approach. The financial industry has limited incentives to emphasize this needed skepticism beyond printing required disclaimers, while our innate tendencies for extrapolation and optimism make most of us too easy prey for the upbeat marketing of past performance.

4.7 NOTES

[1] The distinction between realized (ex post) and expected (ex ante) returns should be crystal clear.


pages: 111 words: 1

Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets by Nassim Nicholas Taleb

Antoine Gombaud: Chevalier de Méré, availability heuristic, backtesting, Benoit Mandelbrot, Black Swan, commoditize, complexity theory, corporate governance, corporate raider, currency peg, Daniel Kahneman / Amos Tversky, discounted cash flows, diversified portfolio, endowment effect, equity premium, fixed income, global village, hedonic treadmill, hindsight bias, Kenneth Arrow, Long Term Capital Management, loss aversion, mandelbrot fractal, mental accounting, meta analysis, meta-analysis, Myron Scholes, Paul Samuelson, quantitative trading / quantitative finance, QWERTY keyboard, random walk, Richard Feynman, road to serfdom, Robert Shiller, Robert Shiller, selection bias, shareholder value, Sharpe ratio, Steven Pinker, stochastic process, survivorship bias, too big to fail, Turing test, Yogi Berra

This simple remark, possibly blurted out in a state of intoxication or extreme enthusiasm, caused Dos Passos to become required reading in European intellectual circles, as Sartre’s remark was mistaken for a consensus estimate of the quality of Dos Passos rather than what it was, the best remark. (In spite of such interest in his work, Dos Passos has reverted to obscurity.)

The Backtester

A programmer helped me build a backtester. It is a software program connected to a database of historical prices, which allows me to check the hypothetical past performance of any trading rule of average complexity. I can just apply a mechanical trading rule, like buy NASDAQ stocks if they close more than 1.83% above their average of the previous week, and immediately get an idea of its past performance. The screen will flash my hypothetical track record associated with the trading rule.
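A minimal sketch of the kind of mechanical-rule check Taleb describes, treating "the previous week" as the prior five sessions and ignoring costs (all names here are hypothetical):

import pandas as pd

def rule_backtest(close, threshold=0.0183):
    # Average close over the five preceding sessions.
    weekly_avg = close.rolling(5).mean().shift(1)
    # Long whenever the close sits more than `threshold` above that average.
    signal = (close > weekly_avg * (1 + threshold)).astype(float)
    # Earn the next day's return while the signal is on; no costs or slippage.
    strat_ret = signal.shift(1) * close.pct_change()
    return (1 + strat_ret.fillna(0)).prod() - 1   # hypothetical cumulative return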

A Mischievous Child Replaces the Black Balls Seven THE PROBLEM OF INDUCTION FROM BACON TO HUME Cygnus Atratus Niederhoffer SIR KARL’S PROMOTING AGENT Location, Location Popper’s Answer Open Society Nobody Is Perfect Induction and Memory Pascal’s Wager THANK YOU, SOLON PART II: MONKEYS ON TYPEWRITERS • Survivorship and Other Biases IT DEPENDS ON THE NUMBER OF MONKEYS VICIOUS REAL LIFE THIS SECTION Eight TOO MANY MILLIONAIRES NEXT DOOR HOW TO STOP THE STING OF FAILURE Somewhat Happy Too Much Work You’re a Failure DOUBLE SURVIVORSHIP BIASES More Experts Visibility Winners It’s a Bull Market A GURU’S OPINION Nine IT IS EASIER TO BUY AND SELL THAN FRY AN EGG FOOLED BY NUMBERS Placebo Investors Nobody Has to Be Competent Regression to the Mean Ergodicity LIFE IS COINCIDENTAL The Mysterious Letter An Interrupted Tennis Game Reverse Survivors The Birthday Paradox It’s a Small World! Data Mining, Statistics, and Charlatanism The Best Book I Have Ever Read! The Backtester A More Unsettling Extension The Earnings Season: Fooled by the Results COMPARATIVE LUCK Cancer Cures Professor Pearson Goes to Monte Carlo (Literally): Randomness Does Not Look Random! The Dog That Did Not Bark: On Biases in Scientific Knowledge I HAVE NO CONCLUSION Ten LOSER TAKES ALL—ON THE NONLINEARITIES OF LIFE THE SANDPILE EFFECT Enter Randomness Learning to Type MATHEMATICS INSIDE AND OUTSIDE THE REAL WORLD The Science of Networks Our Brain Buridan’s Donkey or the Good Side of Randomness WHEN IT RAINS, IT POURS Eleven RANDOMNESS AND OUR MIND: WE ARE PROBABILITY BLIND PARIS OR THE BAHAMAS?

If enough trading rules are considered over time, some rules are bound by pure luck, even in a very large sample, to produce superior performance even if they do not genuinely possess predictive power over asset returns. Of course, inference based solely on the subset of surviving trading rules may be misleading in this context, since it does not account for the full set of initial trading rules, most of which are likely to have underperformed. I have to decry some excesses in backtesting that I have closely witnessed in my private career. There is an excellent product designed just for that, called Omega TradeStation, that is currently on the market, in use by tens of thousands of traders. It even offers its own computer language. Beset with insomnia, the computerized day traders become night testers plowing the data for some of its properties. By dint of throwing their monkeys on typewriters, without specifying what book they want their monkey to write, they will hit upon hypothetical gold somewhere.
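The point about luck is easy to demonstrate: backtest enough rules with no edge at all and the best of them still looks like a discovery. A toy simulation, purely illustrative:

import numpy as np

rng = np.random.default_rng(1)

# 10,000 "trading rules" with zero true edge, each backtested over 1,000 days.
daily_pnl = rng.normal(0, 0.01, size=(10_000, 1_000))
annualized_sharpe = daily_pnl.mean(axis=1) / daily_pnl.std(axis=1) * np.sqrt(252)

# The single best rule found by search looks like a star, despite having no edge.
print(annualized_sharpe.max())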


Trading Risk: Enhanced Profitability Through Risk Control by Kenneth L. Grant

backtesting, business cycle, buy and hold, commodity trading advisor, correlation coefficient, correlation does not imply causation, delta neutral, diversification, diversified portfolio, fixed income, frictionless, frictionless market, George Santayana, implied volatility, interest rate swap, invisible hand, Isaac Newton, John Meriwether, Long Term Capital Management, market design, Myron Scholes, performance metric, price mechanism, price stability, risk tolerance, risk-adjusted returns, Sharpe ratio, short selling, South Sea Bubble, Stephen Hawking, the scientific method, The Wealth of Nations by Adam Smith, transaction costs, two-sided market, value at risk, volatility arbitrage, yield curve, zero-coupon bond

As such, the accuracy of the model can be back-tested by checking how many P/L observations actually exceed the VaR figure over a given period of time and then tying the resulting percentage back to the confidence interval. If the two figures are consistent, the model can be said to be in good order; if not, then the model’s assumptions and inputs must be examined in order to determine the source of the inconsistency (more about this later). This type of backtest is one of the simplest mathematical exercises we will cover in this book, and I urge you to periodically check to see if your actual P/L volatility is consistent with the model and parameters you are using. Some purists will argue that comparing actual portfolio volatility to VaR results is not an effective backtest because while the VaR calculation is based on the current portfolio as measured against historical price action, the test measures the accuracy of this calculation against future price fluctuation. However, I would ignore this distinction for the simple reason that if your actual P/L volatility differs substantially from the results implied in the VaR calculation, it is failing to tell you anything—irrespective of the fact that it may be faithfully adhering to its core assumption base.
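A minimal sketch of the exceedance check Grant describes, assuming a daily P/L series and a 95 percent one-day VaR figure (both hypothetical inputs):

import pandas as pd

def var_exception_rate(daily_pnl, var_95):
    # daily_pnl: pd.Series of daily P/L, with losses negative.
    # var_95: positive VaR at 95% confidence, a scalar or a Series aligned to daily_pnl.
    breaches = daily_pnl < -var_95
    return breaches.mean()

# A well-calibrated model should breach on roughly 5% of days; a rate much higher
# or lower than that points back to the model's assumptions and inputs.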

Although the same dollars are committed to the market from each portfolio, I would argue that the high-flying portfolio is more fully invested and has a higher degree of market participation, largely due to the fact that it will experience more acute financial impacts to any level of price action in the markets than will the dividend portfolio. VaR has no purpose other than to measure these concepts, and I particularly endorse your taking the steps necessary to understand the time series dynamics of the VaR calculation, starting with a routine periodic measurement of its predictive accuracy (i.e., the back-test). Study its ability to adhere to the confidence interval assumptions you have specified (as described in the previous Testing VaR Accuracy section). Once you have achieved a sufficient level of accuracy in this regard, you can then use the VaR statistic as yet another means of ensuring the efficient use of your capital. If your VaR numbers are increasing, it means that your exposure is on the rise; and (rationally speaking) this should only take place when (1) there is sufficient risk capital available to increase the bet and (2) profit opportunities are rising in a manner that justifies the increased exposure.


Hedgehogging by Barton Biggs

activist fund / activist shareholder / activist investor, asset allocation, backtesting, barriers to entry, Bretton Woods, British Empire, business cycle, buy and hold, diversification, diversified portfolio, Elliott wave, family office, financial independence, fixed income, full employment, hiring and firing, index fund, Isaac Newton, job satisfaction, margin call, market bubble, Mikhail Gorbachev, new economy, oil shale / tar sands, paradox of thrift, Paul Samuelson, Ponzi scheme, random walk, Ronald Reagan, secular stagnation, Sharpe ratio, short selling, Silicon Valley, transaction costs, upwardly mobile, value at risk, Vanguard fund, zero-sum game, éminence grise

Ned Davis says, “Go with the flow until it reaches an extreme and begins to reverse; it is at that point where it often pays to be contrary.” Merrill Lynch did a fascinating study that back-tested the four main sentiment indicators and found that they had virtually no predictive powers at market peaks, although they were decent at bottoms. Morgan Stanley did a similar analysis of the VIX Index and came to a similar conclusion. This index, which measures complacency and which everyone tracks on a daily basis, currently has fallen to a very low level. That suggests a high degree of complacency and is, therefore, a bearish sign. However, when Morgan Stanley back-tested its predictive ability, it found VIX virtually useless in signaling tops. So much for the popular wisdom. It’s the old story. The character of markets is continually changing, and there is no single timing system that will consistently, indefinitely work.

Sorry, but I think market neutral is a tough racket, particularly quantitative market neutral. There are too many people doing the same thing. In the 1980s, Morgan Stanley had a series of market-neutral funds that were run off different fundamental and quantitative models. All had boy geniuses at the controls who had made big money on the trading desk or had built a model that worked great on paper when they were dummy back-tested. In live action, none ever produced with real money. One allegedly used a computer to take stock selection to the third derivative and fifth dimension in color. There was color all right. The color of the P&L was a deep shade of red. Anyway, this attractive trio from Goldman opened on January 1, 2001, with $100 million and lost only 10% over the next two years, which wasn’t bad considering the bear market. Then in 2003, they were too cautious and bearish, and the fund went up only 5%.

Investing money in a new hedge fund is not an appealing thought. My only consolation is that this disdain, although a bad omen in terms of money coming in, is probably a good sign for our initial performance. This evening, at the dinner for us organized by Prime Brokerage, we made our usual presentation and then took questions. As always, we said we hoped to have long-term returns in excess of the S&P 500, but that our back-testing indicated volatility approximately comparable to that of the S&P 500. In other words, if our long-term rate of return is going to be about 15% per annum, our annual returns should fall in a range of +30% to zero—that is, 15%, plus or minus 15%. A woman from a big fund of funds stood up and said she thought our volatility estimate of 15% was about right, but it made us unsuitable for many investors.


pages: 512 words: 162,977

New Market Wizards: Conversations With America's Top Traders by Jack D. Schwager

backtesting, beat the dealer, Benoit Mandelbrot, Berlin Wall, Black-Scholes formula, butterfly effect, buy and hold, commodity trading advisor, computerized trading, Edward Thorp, Elliott wave, fixed income, full employment, implied volatility, interest rate swap, Louis Bachelier, margin call, market clearing, market fundamentalism, money market fund, paper trading, pattern recognition, placebo effect, prediction markets, Ralph Nelson Elliott, random walk, risk tolerance, risk/return, Saturday Night Live, Sharpe ratio, the map is not the territory, transaction costs, War on Poverty

If the person is really hypnotized, you won’t be able to push the arm down, regardless of the force applied—even if the subject is a physically weak person. Was there anything memorable about your first trading client? I would like to say that the procedure was immensely successful, but the truth is that the person didn’t experience any overnight transformation. It took many years before I realized why hypnosis was very effective with some traders but not others. What is the reason? Some traders have a valid methodology that they have adequately backtested and that their conscious mind is happy with. These are the traders who can usually be helped through hypnosis. The only thing hypnosis can do is to inform the subconscious mind that the person now has a valid methodology that the conscious mind has already accepted. But you must first be at that point. Absolutely. For a novice trader to try to become an expert trader through hypnosis is like a novice chess player seeking to become a master through hypnosis.

No. Some people lose because they feel they don’t deserve to win, but more people lose because they never perform the basic tasks necessary to become a winning trader. What are those tasks?

1. Develop a competent analytical methodology.
2. Extract a reasonable trading plan from this methodology.
3. Formulate rules for this plan that incorporate money management techniques.
4. Back-test the plan over a sufficiently long period.
5. Exercise self-management so that you adhere to the plan.

The best plan in the world cannot work if you don’t act on it. Typically, how do you work with someone who comes to you for help in improving his or her trading? The first thing I do is go through a series of about thirty questions that have only one purpose: finding out if the person has a methodology.

However, the losses from your previous Methodology A are so ingrained in your subconscious that whenever you contemplate making a trade, the adrenaline starts to flow, and the fear of executing a trade arises. Some traders are literally immobilized by this fear at the moment when they need to act. This is the “freeze” that I encountered when I returned to trading years after my first painful experience. If you have truly back-tested a methodology and are employing an effective trading plan, your conscious mind is already aware of its validity. It’s your subconscious mind that prevents you from taking correct action in the market. The problem will persist until you convince the subconscious in a very direct manner that the new methodology is valid and that it has to forget about the old methodology. How is this transformation achieved?


pages: 272 words: 19,172

Hedge Fund Market Wizards by Jack D. Schwager

asset-backed security, backtesting, banking crisis, barriers to entry, beat the dealer, Bernie Madoff, Black-Scholes formula, British Empire, business cycle, buy and hold, Claude Shannon: information theory, cloud computing, collateralized debt obligation, commodity trading advisor, computerized trading, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, diversification, diversified portfolio, Edward Thorp, family office, financial independence, fixed income, Flash crash, hindsight bias, implied volatility, index fund, intangible asset, James Dyson, Jones Act, Long Term Capital Management, margin call, market bubble, market fundamentalism, merger arbitrage, money market fund, oil shock, pattern recognition, pets.com, Ponzi scheme, private sector deleveraging, quantitative easing, quantitative trading / quantitative finance, Right to Buy, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Rubik’s Cube, Sharpe ratio, short selling, statistical arbitrage, Steve Jobs, systematic trading, technology bubble, transaction costs, value at risk, yield curve

Beginning around 1980, I developed a discipline that whenever I put on a trade, I would write down the reasons on a pad. When I liquidated the trade, I would look at what actually happened and compare it with my reasoning and expectations when I put on the trade. Learning solely from actual experience, however, is inadequate because it takes too much time to get a representative sample to determine whether a decision rule works. I discovered that I could backtest the criteria that I wrote down to get a good perspective of how they would have performed and to refine them. The next step was to define decision rules based on the criteria. I required the decision rules to be logically based and was careful to avoid data mining. That’s how the Bridgewater system began and developed in the early years. That same process continued and was improved with the help of many others over the years.

But then the lab got crowded, and I had to give up one. I started thinking, This place is going to empty out sometime tonight. I decided to get all my data prepared so that I could simultaneously use many of the lab’s computers that night. I was very excited about the idea. People started to leave, and then I had two computers, then four, and eventually I was jumping between 20 computers running my backtests. Were you testing your system on one market on each computer? That is exactly what I was doing. I was so excited about the results I was getting that I worked all night and continued through the next day. It was going so well that I pulled a second all-nighter. I worked for nearly 40 hours straight, keeping myself awake with the caffeine from drinking a Pepsi every hour. I was still living on the farm at the time.

But as assets under management increased, and I realized it was best to use the same models across all markets, I added substantially more markets to the portfolio. The transition to greater diversification also helped improve performance. By 1994, I was trading about 20 markets, and I was no longer using market-specific models. Those changes made a big difference. When you were only trading two or three markets, how did you decide which markets to trade? That was part of the problem. I was cherry-picking the markets that looked best in backtesting. It sounds like you were still making some rookie curve-fitting mistakes at that time. Absolutely. I was still making some very bad data mining errors in those initial years. Was the system you were using at Blue Ridge after you switched to using the same models on all markets an early version of what you ended up doing at QIM? It was similar, but much less sophisticated—fewer models generated with far less computing power.


pages: 407 words: 104,622

The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution by Gregory Zuckerman

affirmative action, Affordable Care Act / Obamacare, Albert Einstein, Andrew Wiles, automated trading system, backtesting, Bayesian statistics, beat the dealer, Benoit Mandelbrot, Berlin Wall, Bernie Madoff, blockchain, Brownian motion, butter production in bangladesh, buy and hold, buy low sell high, Claude Shannon: information theory, computer age, computerized trading, Credit Default Swap, Daniel Kahneman / Amos Tversky, diversified portfolio, Donald Trump, Edward Thorp, Elon Musk, Emanuel Derman, endowment effect, Flash crash, George Gilder, Gordon Gekko, illegal immigration, index card, index fund, Isaac Newton, John Meriwether, John Nash: game theory, John von Neumann, Loma Prieta earthquake, Long Term Capital Management, loss aversion, Louis Bachelier, mandelbrot fractal, margin call, Mark Zuckerberg, More Guns, Less Crime, Myron Scholes, Naomi Klein, natural language processing, obamacare, p-value, pattern recognition, Peter Thiel, Ponzi scheme, prediction markets, quantitative hedge fund, quantitative trading / quantitative finance, random walk, Renaissance Technologies, Richard Thaler, Robert Mercer, Ronald Reagan, self-driving car, Sharpe ratio, Silicon Valley, sovereign wealth fund, speech recognition, statistical arbitrage, statistical model, Steve Jobs, stochastic process, the scientific method, Thomas Bayes, transaction costs, Turing machine

A golden age for traditional investing had dawned as George Soros, Peter Lynch, Bill Gross, and others divined the direction of investments, financial markets, and global economies, producing enormous profits with intelligence, intuition, and old-fashioned economic and corporate research. Unlike his rivals, Simons didn’t have a clue how to estimate cash flows, identify new products, or forecast interest rates. He was digging through reams of price information. There wasn’t even a proper name for this kind of trading, which involved data cleansing, signals, and backtesting, terms most Wall Street pros were wholly unfamiliar with. Few used email in 1990, the internet browser hadn’t been invented, and algorithms were best known, if at all, as the step-by-step procedures that had enabled Alan Turing’s machine to break coded Nazi messages during World War II. The idea that these formulas might guide, or even help govern, the day-to-day lives of hundreds of millions of individuals, or that a couple of former math professors might employ computers to trounce seasoned and celebrated investors, seemed far-fetched if not outright ludicrous.

The roots of Simons’s investing style reached as far back as Babylonian times, when early traders recorded the prices of barley, dates, and other crops on clay tablets, hoping to forecast future moves. In the middle of the sixteenth century, a trader in Nuremberg, Germany, named Christopher Kurz won acclaim for his supposed ability to forecast twenty-day prices of cinnamon, pepper, and other spices. Like much of society at the time, Kurz relied on astrological signs, but he also tried to back-test his signals, deducing certain credible principles along the way, such as the fact that prices often move in long-persisting trends. An eighteenth-century Japanese rice merchant and speculator named Munehisa Homma, known as the “god of the markets,” invented a charting method to visualize the open, high, low, and closing price levels for the country’s rice exchanges over a period of time. Homma’s charts, including the classic candlestick pattern, resulted in an early and reasonably sophisticated reversion-to-the-mean trading strategy.

See Statistical arbitrage Archimedes (yacht), 267, 320 Armstrong, Neil, 170 Artin, Emil, 69 Asness, Clifford, 256–57 Association for Computing Machinery (ACM), 37 astrology, 121–22 autism, xviii, 268, 287, 323–24 Automated Proprietary Trading (APT), 131–32, 133 AWK, 233–34 Ax, Frances, 98 Ax, James, xi, 37, 68–69, 324 at Axcom Limited, 78–83 backgammon, 69, 76–77 background of, 68–69 at Berkeley, 68–69 Berlekamp and, 95–102 conspiracy theories of, 77–78, 99 at Cornell, 69, 70–71 death of, 103 focus on mathematics, 69–70 at Monemetrics, 51–52, 72–73 personality of, 68, 70, 71–72, 98–99 Simons and, 34, 68–69, 99–103, 107 at Stony Brook, 34, 71–72 trading models, 73, 74–75, 77–78, 81–86, 95–101, 107 Axcom Limited, 78–83 disbanding of, 118 trading models, 95–101, 107–18 Ax-Kochen theorem, 69, 70, 103 Bachelier, Louis, 128 backgammon, 69, 76–77 backtesting, 3 Bacon, Louis, 140 Baker House, 15–16 Baltimore City Fire and Police Employees’ Retirement System, 299–300 Bamberger, Gerry, 129–30 BankAmerica Corporation, 212 Bannon, Steve, 279, 280, 280n break with Mercers, 304 at Breitbart, 278–79, 299–300, 301–2 midterm elections of 2018, 304 presidential election of 2016, xviii, 281–82, 284–85, 288–90, 293, 294–95 Barclays Bank, 225, 259 bars, 143–44 Barton, Elizabeth, 272 basket options, 225–27 Baum, Julia Lieberman, 46, 48, 50, 62–63, 65 Baum, Leonard “Lenny,” xi, 45–46, 63–66 background of, 46 currency trading, 28–29, 49–53, 54–60, 62–64, 73 death of, 66 at Harvard, 46 at IDA, 25, 28–29, 46–49, 81 at Monemetrics, 45, 49–60, 63–65 move to Bermuda, 64–65 rift with Simons, 63–65 trading debacle of 1984, 65, 66 Baum, Morris, 46 Baum, Stefi, 48, 62, 63 Baum–Welch algorithm, 47–48, 174, 179 Bayes, Thomas, 174 Bayesian probability, 148, 174 Beane, Billy, 308 Beat the Dealer (Thorp), 127, 163 Beautiful Mind, A (Nasar), 90 behavioral economics, 152, 153 Bell Laboratories, 91–92 Belopolsky, Alexander, 233, 238, 241, 242, 252–54 Bent, Bruce, 173 Berkeley Quantitative, 118 Berkshire Hathaway, 265, 309, 333 Berlekamp, Elwyn, xi at Axcom, 94–97, 102–3, 105–18 background of, 87–90 at Bell Labs, 91–92 at Berkeley, 92–93, 95, 115, 118, 272 at Berkeley Quantitative, 118 death of, 118 at IDA, 93–94 Kelly formula and, 91–92, 96, 127 at MIT, 89–91 Simons and, 2–3, 4, 93–95, 109–10, 113–14, 116–18, 124 trading models and strategies, 2–3, 4, 95–98, 106–18, 317 Berlekamp, Jennifer Wilson, 92 Berlekamp, Waldo, 87–88 Berlin Wall, 164 Bermuda, 64–65, 254 Bernard L.


pages: 825 words: 228,141

MONEY Master the Game: 7 Simple Steps to Financial Freedom by Tony Robbins

3D printing, active measures, activist fund / activist shareholder / activist investor, addicted to oil, affirmative action, Affordable Care Act / Obamacare, Albert Einstein, asset allocation, backtesting, bitcoin, buy and hold, clean water, cloud computing, corporate governance, corporate raider, correlation does not imply causation, Credit Default Swap, Dean Kamen, declining real wages, diversification, diversified portfolio, Donald Trump, estate planning, fear of failure, fiat currency, financial independence, fixed income, forensic accounting, high net worth, index fund, Internet of things, invention of the wheel, Jeff Bezos, Kenneth Rogoff, lake wobegon effect, Lao Tzu, London Interbank Offered Rate, market bubble, money market fund, mortgage debt, new economy, obamacare, offshore financial centre, oil shock, optical character recognition, Own Your Own Home, passive investing, profit motive, Ralph Waldo Emerson, random walk, Ray Kurzweil, Richard Thaler, risk tolerance, riskless arbitrage, Robert Shiller, Robert Shiller, self-driving car, shareholder value, Silicon Valley, Skype, Snapchat, sovereign wealth fund, stem cell, Steve Jobs, survivorship bias, telerobotics, the rule of 72, thinkpad, transaction costs, Upton Sinclair, Vanguard fund, World Values Survey, X Prize, Yogi Berra, young professional, zero-sum game

Which is why I wasn’t the least bit surprised to learn later that he and his wife, Barbara, have signed the Giving Pledge—a commitment by the world’s wealthiest individuals, from Bill Gates to Warren Buffett, to give away the majority of their wealth through philanthropy.

DO I HAVE YOUR ATTENTION NOW?

When my own investment team showed me the back-tested performance numbers of this All Seasons portfolio, I was astonished. I will never forget it. I was sitting with my wife at dinner and received a text message from my personal advisor, Ajay Gupta, that read, “Did you see the email with the back-tested numbers on the portfolio that Ray Dalio shared with you? Unbelievable!” Ajay normally doesn’t text me at night, so I knew he couldn’t wait to share. As soon as our dinner date was over I grabbed my phone and opened the email . . .

CHAPTER 5.2
IT’S TIME TO THRIVE: STORM-PROOF RETURNS AND UNRIVALED RESULTS

If no mistake have you made, yet losing you are . . . a different game you should play.

By using real fund data as opposed to theoretical data from a constructed index, all the returns listed in this chapter are fully inclusive of annual fund fees and any tracking error present in the underlying funds. This has the benefit of showing you realistic historic returns for the All Seasons portfolio (as opposed to theoretical returns that are sometimes used in back-testing). This ensures that the investment holdings and numbers used in back-testing this portfolio were and are accessible to the everyday man on the street and not only available to multibillion-dollar Wall Street institutions. Where they were unable to use actual index fund data because the funds didn’t exist at that time, they used broadly diversified index data for each asset class and adjusted the returns for fund fees. Note that they used annual rebalancing in the calculations and assumed that the investments were held in a tax-free account with no transaction costs.

If you want to take immediate action to minimize fees and have an advisor assist you in allocating your 401(k) fund choices, you can use the service at Stronghold (www.strongholdfinancial.com), which, with the click of a button will automatically “peer into” your 401(k) and provide a complimentary asset allocation. In addition, many people think there aren’t many alternatives to TDFs, but in section 5, you’ll learn a specific asset allocation from hedge fund guru Ray Dalio that has produced extraordinary returns with minimal downside. When a team of analysts back-tested the portfolio, the worst loss was just 3.93% in the last 75 years. In contrast, according to MarketWatch, “the most conservative target-date retirement funds—those designed to produce income—fell on average 17% in 2008, and the riskiest target-date retirement funds—designed for those retiring in 2055—fell on average a whopping 39.8%, according to a recent report from Ibbotson Associates.” ANOTHER ONE BITES THE DUST We have exposed and conquered yet another myth together.


pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future by Andrew McAfee, Erik Brynjolfsson

"Robert Solow", 3D printing, additive manufacturing, AI winter, Airbnb, airline deregulation, airport security, Albert Einstein, Amazon Mechanical Turk, Amazon Web Services, artificial general intelligence, augmented reality, autonomous vehicles, backtesting, barriers to entry, bitcoin, blockchain, British Empire, business cycle, business process, carbon footprint, Cass Sunstein, centralized clearinghouse, Chris Urmson, cloud computing, cognitive bias, commoditize, complexity theory, computer age, creative destruction, crony capitalism, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, Dean Kamen, discovery of DNA, disintermediation, disruptive innovation, distributed ledger, double helix, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ethereum, ethereum blockchain, everywhere but in the productivity statistics, family office, fiat currency, financial innovation, George Akerlof, global supply chain, Hernando de Soto, hive mind, information asymmetry, Internet of things, inventory management, iterative process, Jean Tirole, Jeff Bezos, jimmy wales, John Markoff, joint-stock company, Joseph Schumpeter, Kickstarter, law of one price, longitudinal study, Lyft, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, Marc Andreessen, Mark Zuckerberg, meta analysis, meta-analysis, Mitch Kapor, moral hazard, multi-sided market, Myron Scholes, natural language processing, Network effects, new economy, Norbert Wiener, Oculus Rift, PageRank, pattern recognition, peer-to-peer lending, performance metric, plutocrats, Plutocrats, precision agriculture, prediction markets, pre–internet, price stability, principal–agent problem, Ray Kurzweil, Renaissance Technologies, Richard Stallman, ride hailing / ride sharing, risk tolerance, Ronald Coase, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, slashdot, smart contracts, Snapchat, speech recognition, statistical model, Steve Ballmer, Steve Jobs, Steven Pinker, supply-chain management, TaskRabbit, Ted Nelson, The Market for Lemons, The Nature of the Firm, Thomas Davenport, Thomas L Friedman, too big to fail, transaction costs, transportation-network company, traveling salesman, Travis Kalanick, two-sided market, Uber and Lyft, Uber for X, uber lyft, ubercab, Watson beat the top human players on Jeopardy!, winner-take-all economy, yield management, zero day

The company faced the daunting task of building a technology platform for quants comparable to the ones within the industry’s top companies. Such a platform had to be able to let investors upload their algorithms, then quickly test them under different market conditions—booms and recessions, periods of high and low interest rates, and so on. One way to do this is to “back-test” the algorithms with historical data. Fawcett and his colleagues worked to build a backtester as robust as those available within large institutional investors. The startup also had to let investors accurately assess the market impact of their trades—the fact that if they bought or sold a large amount of an asset, that action would itself change the asset’s price. Assessing market impact is a tricky exercise in estimation, one that consumed a lot of time at Quantopian.

We find Quantopian fascinating because it illustrates all three of the technology trends that are reshaping the business world. It’s bringing together minds and machines in fresh ways to rethink how investment decisions are made, and substituting data and code for human experience, judgment, and intuition. It’s also building a platform for quantitative investment rather than introducing a specific product (such as a backtester). This platform is open and noncredentialist, aims to take advantage of network effects (the more good investment algorithms it holds, the more capital it will attract; the more capital it holds, the more algo traders it will attract), and to provide a smooth interface and experience to its traders. And it’s bringing together an online crowd to challenge the core and its experts in a large and critically important industry.


pages: 517 words: 139,477

Stocks for the Long Run 5/E: the Definitive Guide to Financial Market Returns & Long-Term Investment Strategies by Jeremy Siegel

Asian financial crisis, asset allocation, backtesting, banking crisis, Black-Scholes formula, break the buck, Bretton Woods, business cycle, buy and hold, buy low sell high, California gold rush, capital asset pricing model, carried interest, central bank independence, cognitive dissonance, compound rate of return, computer age, computerized trading, corporate governance, correlation coefficient, Credit Default Swap, Daniel Kahneman / Amos Tversky, Deng Xiaoping, discounted cash flows, diversification, diversified portfolio, dividend-yielding stocks, dogs of the Dow, equity premium, Eugene Fama: efficient market hypothesis, eurozone crisis, Everybody Ought to Be Rich, Financial Instability Hypothesis, fixed income, Flash crash, forward guidance, fundamental attribution error, housing crisis, Hyman Minsky, implied volatility, income inequality, index arbitrage, index fund, indoor plumbing, inflation targeting, invention of the printing press, Isaac Newton, joint-stock company, London Interbank Offered Rate, Long Term Capital Management, loss aversion, market bubble, mental accounting, money market fund, mortgage debt, Myron Scholes, new economy, Northern Rock, oil shock, passive investing, Paul Samuelson, Peter Thiel, Ponzi scheme, prediction markets, price anchoring, price stability, purchasing power parity, quantitative easing, random walk, Richard Thaler, risk tolerance, risk/return, Robert Gordon, Robert Shiller, Robert Shiller, Ronald Reagan, shareholder value, short selling, Silicon Valley, South Sea Bubble, sovereign wealth fund, stocks for the long run, survivorship bias, technology bubble, The Great Moderation, the payments system, The Wisdom of Crowds, transaction costs, tulip mania, Tyler Cowen: Great Stagnation, Vanguard fund

Index Options Buying Index Options Selling Index Options The Importance of Indexed Products Chapter 19 Market Volatility The Stock Market Crash of October 1987 The Causes of the October 1987 Crash Exchange Rate Policies The Futures Market Circuit Breakers Flash Crash—May 6, 2010 The Nature of Market Volatility Historical Trends of Stock Volatility The Volatility Index The Distribution of Large Daily Changes The Economics of Market Volatility The Significance of Market Volatility Chapter 20 Technical Analysis and Investing with the Trend The Nature of Technical Analysis Charles Dow, Technical Analyst The Randomness of Stock Prices Simulations of Random Stock Prices Trending Markets and Price Reversals Moving Averages Testing the Dow Jones Moving-Average Strategy Back-Testing the 200-Day Moving Average Avoiding Major Bear Markets Distribution of Gains and Losses Momentum Investing Conclusion Chapter 21 Calendar Anomalies Seasonal Anomalies The January Effect Causes of the January Effect The January Effect Weakened in Recent Years Large Stock Monthly Returns The September Effect Other Seasonal Returns Day-of-the-Week Effects What’s an Investor to Do?

Only in recent years has the exact intraday level of the popular averages been computed. Using historical data, it is impossible to determine times when the market average penetrated the 200-day moving average during the day. By specifying that the average must close above or below the average of the two hundred preceding closes, I present a strategy that could have been implemented in practice through the whole time period.12

Back-Testing the 200-Day Moving Average

Figure 20-2 displays the daily and 200-day moving averages of the Dow Jones Industrial Average during two select periods: from 1924 to 1936 and 2001 to 2012. The time periods when investors are out of the stock market (and in short-term bonds) are shaded; otherwise, investors are fully invested in stocks.

FIGURE 20-2 Dow-Jones Industrials and the 200-Day Moving-Average Strategy

The returns from the 200-day moving-average strategy and a buy-and-hold strategy over the whole period are summarized in Table 20-1.
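A back-of-the-envelope sketch of the timing rule as described, switching between the index and short-term bills depending on whether the prior close was above the average of the 200 preceding closes (an illustration, not Siegel's exact implementation):

import pandas as pd

def ma200_strategy(close, cash_rate=0.0):
    # Average of the 200 closes preceding each day.
    ma = close.rolling(200).mean().shift(1)
    # Above the average at yesterday's close: hold the index today; otherwise hold bills.
    in_market = (close > ma).shift(1).fillna(False).astype(bool)
    daily_cash = (1 + cash_rate) ** (1 / 252) - 1
    ret = close.pct_change()
    strat = ret.where(in_market, daily_cash)
    return (1 + strat.fillna(0)).cumprod()   # growth of $1 in the timing strategy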

See also Calendar anomalies, 336 The Monetary History of the United States , 33–34, 212 Monetary policy capital gains taxes in, 226 conclusions about, 226–227 corporate taxes, 224–225 Federal Reserve and, 213–214, 217 gold standard and, 213–217 inflation in, 220–226 interest rates in, 223, 225–226 introduction to, 209–210 market outbreaks and, 244 money in circulation and, 210–213 postdevaluation, 215–216 postgold, 216–217 prices in, 210–213 stocks and inflation in, 220–226 supply-side effects in, 223–224 Money creation, 217 Money in circulation, 210–213 Money managers, 363–364 Money Market Investor Funding Facility, 33 Monsanto, 205 Moody’s, 25–26 Moore, Philip, 371 Moore, Randell, 236 Morgan, J.P., 125 Morgan Stanley Capital International (MSCI), 13, 42, 330–332, 371–372 Morris, David, 371 Mortgage-backed securities, 25–31 Moving averages 200-day, 318–320 back-testing, 318 bear market avoidance in, 320–321 Dow Jones strategy for, 317–318 gains/losses distributions in, 321–322 introduction to, 316 testing, 317–318 MSCI (Morgan Stanley Capital International), 13, 42, 330–332, 371–372 Mutual funds from 1995–2012, 272 equity, 358–363 ETFs vs., 273 index, 282–284 Mutual Shares Z Fund, 362 Myopic loss aversion, 350–352 Nabisco Group Holdings, 129–130 NAR (National Association of Realtors), 29 Nasdaq in 1999, 16–17 global investing and, 200–201 introduction to, 105 New York Stock Exchange and, 111–113 S&P 500 Index and, 120 National Association of Realtors (NAR), 29 National Association of Securities Dealers Automated Quotations (Nasdaq).


pages: 516 words: 157,437

Principles: Life and Work by Ray Dalio

Albert Einstein, asset allocation, autonomous vehicles, backtesting, cognitive bias, Deng Xiaoping, diversification, Elon Musk, follow your passion, hiring and firing, iterative process, Jeff Bezos, Long Term Capital Management, margin call, microcredit, oil shock, performance metric, planetary scale, quantitative easing, risk tolerance, Ronald Reagan, Silicon Valley, Steve Jobs, transaction costs, yield curve

If I didn’t have these systems, I’d probably be broke or dead from the stress of trying so hard. We certainly wouldn’t have done as well in the markets as we have. As you will see later, I am now developing similar systems to help us make management decisions. I believe one of the most valuable things you can do to improve your decision making is to think through your principles for making decisions, write them out in both words and computer algorithms, back-test them if possible, and use them on a real-time basis to run in parallel with your brain’s decision making. But I’m getting ahead of myself. Let’s go back to 1983. RESURRECTING BRIDGEWATER By late 1983, Bridgewater had six employees. Up until then, I hadn’t done any marketing; the business we got came from word of mouth and from people reading my daily telexes and seeing my public appearances.

Because we traded a number of different asset classes, and within each one we had programmed and tested lots of fundamental trading rules, we had many more high-quality ones to choose from than a typical manager who was tracking a smaller number of assets and was probably not trading systematically. I worked with Bob and Dan to pull our best decision rules from the pile. Once we had them, we back-tested them over long time frames, using the systems to simulate how the decision rules would have worked together in the past. We were startled by the results. On paper, this new approach improved our returns by a factor of three to five times per unit of risk, and we could calibrate the amount of return we wanted based on the amount of risk we could tolerate. In other words, we could make a ton more money than the other guys, with a lower risk of being knocked out of the game—as I’d nearly been before.

We hadn’t looked at that before, because we’d never had that much money. We quickly discovered that if we just tweaked what we did and created a new fund that managed money the same way as Pure Alpha but invested it solely in the most liquid markets, our expected returns would be the same and the expected risk (i.e., volatility) only slightly higher. We programmed this new approach into our computers, back-tested it to see how it worked in all countries and time frames, and explained it to our clients in detail so they could thoroughly understand the logic behind it. As much as I love and have benefited from artificial intelligence, I believe that only people can discover such things and then program computers to do them. That’s why I believe that the right people, working with each other and with computers, are the key to success.


pages: 312 words: 91,538

The Fear Index by Robert Harris

algorithmic trading, backtesting, banking crisis, dark matter, family office, Fellow of the Royal Society, fixed income, Flash crash, God and Mammon, high net worth, implied volatility, mutually assured destruction, Neil Kinnock, Renaissance Technologies, speech recognition

There is also what we call a “clinging” effect, when a stock is held in defiance of reason, and an “adrenalin” effect, when a stock rises strongly in value. We’re still researching all these various categories to determine market impact and refine our model.’ Easterbrook raised his hand. ‘Yes, Bill?’ ‘Is this algorithm already operational?’ ‘Why don’t I let Hugo answer that, as it’s practical rather than theoretical?’ Quarry said, ‘Incubation started back-testing VIXAL-1 almost two years ago, although naturally that was just a simulation, without any actual exposure to the market. We went live with VIXAL-2 in May 2009, with play money of one hundred million dollars. When we overcame the early teething problems we moved on to VIXAL-3 in November and gave it access to one billion. That was so successful we decided to allow VIXAL-4 to take control of the entire fund one week ago.’

Then Quarry had hit the road of investors’ conferences, moving from city to city in the US and across Europe, pulling his wheeled suitcase through fifty different airports. He had loved this part – loved being a salesman, he who travels alone, walking in cold to an air-conditioned conference room in a strange hotel overlooking some sweltering freeway and charming a sceptical audience. His method was to show them the independently back-tested results of Hoffmann’s algorithm and the mouth-watering projections of future returns, then break it to them that the fund was already closed: he had only fulfilled his engagement to speak in order to be polite but they didn’t need any more money, sorry. Afterwards the investors would come looking for him in the hotel bar; it worked nearly every time. Quarry had hired a guy from BNP Paribas to oversee the back office, a receptionist, a secretary, and a French fixed-income trader from AmCor who had run into some regulatory issues and needed to get out of London fast.


pages: 89 words: 29,198

The Big Secret for the Small Investor: A New Route to Long-Term Investment Success by Joel Greenblatt

backtesting, discounted cash flows, diversified portfolio, hiring and firing, index fund, Sharpe ratio, time value of money, Vanguard fund

Fortunately, there’s plenty of evidence that they do. Although various methods of weighting portfolios by economic fundamentals have been around since the late 1980s, in recent years fundamental indexes researched and constructed by Research Affiliates and WisdomTree Investments have become some of the most popular and widely accessible. In fact, Rob Arnott and his team at Research Affiliates have backtested (and in recent years made available) an index that has outperformed the market-cap-weighted S&P 500 by approximately 2 percent per year since 1962 (before expenses).7 The Research Affiliates index (known as the FTSE RAFI 1000 Index) was constructed using a five-year average of company cash flows, sales, dividends, and book value. Companies were ranked and weighted based on a combination of these four characteristics of economic size, and the largest one thousand were purchased.
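A rough sketch of fundamental weighting in the spirit of that description (not the actual FTSE RAFI methodology; the column names and the equal blend are assumptions):

import pandas as pd

def fundamental_weights(fundamentals, n=1000):
    # fundamentals: DataFrame indexed by company, with columns such as
    # ['cash_flow', 'sales', 'dividends', 'book_value'] holding five-year averages.
    share_of_total = fundamentals.div(fundamentals.sum(axis=0), axis=1)
    composite = share_of_total.mean(axis=1)     # blend the four measures of economic size
    top = composite.nlargest(n)                 # keep the largest n companies
    return top / top.sum()                      # renormalize to portfolio weights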


pages: 752 words: 131,533

Python for Data Analysis by Wes McKinney

backtesting, cognitive dissonance, crowdsourcing, Debian, Firefox, Google Chrome, Guido van Rossum, index card, random walk, recommendation engine, revision control, sentiment analysis, Sharpe ratio, side project, sorting algorithm, statistical model, type inference

First, I’ll load historical prices for a portfolio of financial and technology stocks:

names = ['AAPL', 'GOOG', 'MSFT', 'DELL', 'GS', 'MS', 'BAC', 'C']

def get_px(stock, start, end):
    return web.get_data_yahoo(stock, start, end)['Adj Close']

px = DataFrame({n: get_px(n, '1/1/2009', '6/1/2012') for n in names})

We can easily plot the cumulative returns of each stock (see Figure 11-2):

In [117]: px = px.asfreq('B').fillna(method='pad')
In [118]: rets = px.pct_change()
In [119]: ((1 + rets).cumprod() - 1).plot()

For the portfolio construction, we’ll compute momentum over a certain lookback, then rank in descending order and standardize:

def calc_mom(price, lookback, lag):
    mom_ret = price.shift(lag).pct_change(lookback)
    ranks = mom_ret.rank(axis=1, ascending=False)
    demeaned = ranks - ranks.mean(axis=1)
    return demeaned / demeaned.std(axis=1)

With this transform function in hand, we can set up a strategy backtesting function that computes a portfolio for a particular lookback and holding period (days between trading), returning the overall Sharpe ratio:

compound = lambda x: (1 + x).prod() - 1
daily_sr = lambda x: x.mean() / x.std()

def strat_sr(prices, lb, hold):
    # Compute portfolio weights
    freq = '%dB' % hold
    port = calc_mom(prices, lb, lag=1)

    daily_rets = prices.pct_change()

    # Compute portfolio returns
    port = port.shift(1).resample(freq, how='first')
    returns = daily_rets.resample(freq, how=compound)
    port_rets = (port * returns).sum(axis=1)

    return daily_sr(port_rets) * np.sqrt(252 / hold)

When called with the prices and a parameter combination, this function returns a scalar value:

In [122]: strat_sr(px, 70, 30)
Out[122]: 0.27421582756800583

From there, you can evaluate the strat_sr function over a grid of parameters, storing them as you go in a defaultdict and finally putting the results in a DataFrame:

from collections import defaultdict

lookbacks = range(20, 90, 5)
holdings = range(20, 90, 5)
dd = defaultdict(dict)
for lb in lookbacks:
    for hold in holdings:
        dd[lb][hold] = strat_sr(px, lb, hold)

ddf = DataFrame(dd)
ddf.index.name = 'Holding Period'
ddf.columns.name = 'Lookback Period'

To visualize the results and get an idea of what’s going on, here is a function that uses matplotlib to produce a heatmap with some adornments:

import matplotlib.pyplot as plt

def heatmap(df, cmap=plt.cm.gray_r):
    fig = plt.figure()
    ax = fig.add_subplot(111)
    axim = ax.imshow(df.values, cmap=cmap, interpolation='nearest')
    ax.set_xlabel(df.columns.name)
    ax.set_xticks(np.arange(len(df.columns)))
    ax.set_xticklabels(list(df.columns))
    ax.set_ylabel(df.index.name)
    ax.set_yticks(np.arange(len(df.index)))
    ax.set_yticklabels(list(df.index))
    plt.colorbar(axim)

Calling this function on the backtest results, we get Figure 11-3:

In [125]: heatmap(ddf)

[Figure 11-3. Heatmap of momentum strategy Sharpe ratio (higher is better) over various lookbacks and holding periods]

Future Contract Rolling

A future is a ubiquitous form of derivative contract; it is an agreement to take delivery of a certain asset (such as oil, gold, or shares of the FTSE 100 index) on a particular date. In practice, modeling and trading futures contracts on equities, currencies, commodities, bonds, and other asset classes is complicated by the time-limited nature of each contract.


Stocks for the Long Run, 4th Edition: The Definitive Guide to Financial Market Returns & Long Term Investment Strategies by Jeremy J. Siegel

addicted to oil, asset allocation, backtesting, Black-Scholes formula, Bretton Woods, business cycle, buy and hold, buy low sell high, California gold rush, capital asset pricing model, cognitive dissonance, compound rate of return, correlation coefficient, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, dividend-yielding stocks, dogs of the Dow, equity premium, Eugene Fama: efficient market hypothesis, Everybody Ought to Be Rich, fixed income, German hyperinflation, implied volatility, index arbitrage, index fund, Isaac Newton, joint-stock company, Long Term Capital Management, loss aversion, market bubble, mental accounting, Myron Scholes, new economy, oil shock, passive investing, Paul Samuelson, popular capitalism, prediction markets, price anchoring, price stability, purchasing power parity, random walk, Richard Thaler, risk tolerance, risk/return, Robert Shiller, Robert Shiller, Ronald Reagan, shareholder value, short selling, South Sea Bubble, stocks for the long run, survivorship bias, technology bubble, The Great Moderation, The Wisdom of Crowds, transaction costs, tulip mania, Vanguard fund

Index Options 264
Buying Index Options 266
Selling Index Options 267
The Importance of Indexed Products 267

Chapter 16 Market Volatility 269
The Stock Market Crash of October 1987 271
The Causes of the October 1987 Crash 273
Exchange-Rate Policies 274
The Futures Market 275
Circuit Breakers 276
The Nature of Market Volatility 277
Historical Trends of Stock Volatility 278
The Volatility Index (VIX) 281
Recent Low Volatility 283
The Distribution of Large Daily Changes 283
The Economics of Market Volatility 285
The Significance of Market Volatility 286

Chapter 17 Technical Analysis and Investing with the Trend 289
The Nature of Technical Analysis 289
Charles Dow, Technical Analyst 290
The Randomness of Stock Prices 291
Simulations of Random Stock Prices 292
Trending Markets and Price Reversals 294
Moving Averages 295
Testing the Dow Jones Moving-Average Strategy 296
Back-Testing the 200-Day Moving Average 297
The Nasdaq Moving-Average Strategy 300
Distribution of Gains and Losses 301
Momentum Investing 302
Conclusion 303

Chapter 18 Calendar Anomalies 305
Seasonal Anomalies 306
The January Effect 306
Causes of the January Effect 309
The January Effect Weakened in Recent Years 310
Large Monthly Returns 311
The September Effect 311
Other Seasonal Returns 315
Day-of-the-Week Effects 316
What’s an Investor to Do?

.: Dow Jones-Irwin, 1988. 11 Historically, the daily high and low levels of stock averages were calculated on the basis of the highest or lowest price of each stock reached at any time during the day. This is called the theoretical high or low. The actual high is the highest level reached at any given time by the stocks in the average.

Back-Testing the 200-Day Moving Average

Figure 17-2 shows the daily and 200-day moving averages of the Dow Jones Industrial Average during two select periods: 1924 to 1936 and 1999 to 2006. The time periods when investors are out of the stock market are shaded; otherwise, investors are fully invested in stocks. Over the entire 120-year history of the Dow Jones average, the 200-day moving-average strategy had its greatest triumph during the boom and crash of the 1920s and early 1930s.
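For readers who want to reproduce the flavor of such a test, here is a minimal sketch of a moving-average timing backtest. It assumes a pandas Series of daily index levels named prices (hypothetical data), and it omits the refinements a test like Siegel's would include, such as a band around the average to limit whipsaw trades and interest earned while out of the market.

import pandas as pd

def ma_timing_returns(prices, window=200):
    """Daily strategy returns for a simple 200-day moving-average timing rule."""
    daily_rets = prices.pct_change()
    ma = prices.rolling(window).mean()
    # Hold the index only when the prior close was above its moving average
    invested = (prices > ma).shift(1, fill_value=False)
    # Zero return (cash, ignoring interest) on days spent out of the market
    return daily_rets * invested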


Mastering Private Equity by Zeisberger, Claudia,Prahl, Michael,White, Bowen, Michael Prahl, Bowen White

asset allocation, backtesting, barriers to entry, Basel III, business process, buy low sell high, capital controls, carried interest, commoditize, corporate governance, corporate raider, correlation coefficient, creative destruction, discounted cash flows, disintermediation, disruptive innovation, distributed generation, diversification, diversified portfolio, family office, fixed income, high net worth, information asymmetry, intangible asset, Lean Startup, market clearing, passive investing, pattern recognition, performance metric, price mechanism, profit maximization, risk tolerance, risk-adjusted returns, risk/return, shareholder value, Sharpe ratio, Silicon Valley, sovereign wealth fund, statistical arbitrage, time value of money, transaction costs

Free Press.

Notes
1. Please refer to Chapter 17 Fundraising for further details.
2. PE returns were calculated based on the INSEAD-Pevara dataset of 3,000 funds using quarterly modified internal rates of return (http://centres.insead.edu/global-private-equity-initiative/research-publications/private-equity-navigator.cfm). Source for the MSCI World Index: Bloomberg.
3. Investors typically run extensive back-tests to explore how any new asset class might have impacted the portfolio performance historically.
4. Please refer to Chapter 24 Private Equity Secondaries for additional detail on the mechanics of secondaries.
5. Please refer to Chapter 1 Private Equity Essentials for further details on the J-curve.
6. The introduction of Basel III, the Dodd–Frank Act and the Volcker Rule for banks and Solvency II for insurance companies (in Europe) increased the cost of owning stakes in alternative investments.
7. Please refer to Chapter 19 Performance Reporting for a detailed description of the comparison of PE performance with other asset classes.
8. Refer to Chapter 18 LP Portfolio Management for a full overview of how LPs construct a portfolio.
9. Source: Greenhill Cogent.
10. The number refers to professional fund management firms; but estimates vary widely on the number of active fund managers (source: Preqin 2016).
11. Before 2008, underlying companies were valued at cost until realization at exit, meaning that valuations of funds showed no volatility prior to the sale of the portfolio companies.

Also referred to as the articles of incorporation, certificate of incorporation, and other names in different jurisdictions.

As-converted Basis: A metric to determine the total equity base by assuming that all preferred shares have been converted into common shares based on a prespecified conversion ratio.

Auction: A sales process involving multiple competing parties to maximize the price for the seller.

Back-testing: Process of applying an allocation or trading strategy to historical data to gauge the effect on portfolio or investment performance.

Bankruptcy: Legal status in which an insolvent company (which cannot fully repay its debt) is declared bankrupt, typically by court order.

Base Case Financial Scenario: Scenario based on the company's expected/most likely operating performance.

BidCo: The legal entity that executes the acquisition of a target company.


pages: 356 words: 51,419

The Little Book of Common Sense Investing: The Only Way to Guarantee Your Fair Share of Stock Market Returns by John C. Bogle

asset allocation, backtesting, buy and hold, creative destruction, diversification, diversified portfolio, financial intermediation, fixed income, index fund, invention of the wheel, Isaac Newton, new economy, passive investing, Paul Samuelson, random walk, risk tolerance, risk-adjusted returns, Sharpe ratio, stocks for the long run, survivorship bias, transaction costs, Upton Sinclair, Vanguard fund, William of Occam, yield management, zero-sum game

He also believes in keeping things simple. Bogle is against the widespread practice today of building portfolios that consist of 10–15 asset classes, whose sole purpose is to create complexity to generate fees for greedy asset managers. Keeping it simple means owning stocks and some bonds. It also means not getting too fancy and too carried away by adding fashionable asset classes whose merits are derived from a backtested computer model.”

Chapter Seven: The Grand Illusion
Surprise! The Returns Reported by Mutual Funds Are Rarely Earned by Mutual Fund Investors.

It is gratifying that industry insiders such as Fidelity’s Peter Lynch, former Investment Company Institute (ICI) chairman Jon Fossel, Mad Money’s James Cramer, and AQR’s Clifford Asness agree with me, as you may recall from Chapter 4. The returns earned by the typical equity mutual fund are inevitably inadequate relative to the returns available simply by owning the stock market through an index fund based on the S&P 500.


pages: 161 words: 51,919

What's Your Future Worth?: Using Present Value to Make Better Decisions by Peter Neuwirth

backtesting, big-box store, Black Swan, collective bargaining, discounted cash flows, en.wikipedia.org, Long Term Capital Management, Rubik’s Cube, Skype, the scientific method

Taleb illustrates the above point by making the distinction between Newtonian physics, which was disproved by experiments that led to Einstein’s Theory of Relativity, and the pseudoscience of Astrology where the theory can always be tweaked and expanded to explain any observed behavior or personality characteristic that doesn’t appear to come directly from an individual’s initial chart. I agree with Taleb that econometrics is much more like Astrology than it is like Physics.28 But even worse, this ability to continuously “backtest” and recalibrate one’s models has led vast numbers of otherwise rational people to believe that the econometric models informing The Wall Street Journal pronouncements by “leading economists” regarding the likely future course of the economy are more valid than the daily horoscope, which they dismiss as nonsense. With this as backdrop, it was a tribute to Andy Abel’s brilliance and intellectual honesty that he was able to see beyond the self-delusions of most of his colleagues in the field and look at our consulting project with fresh eyes.


pages: 172 words: 49,890

The Dhandho Investor: The Low-Risk Value Method to High Returns by Mohnish Pabrai

asset allocation, backtesting, beat the dealer, Black-Scholes formula, business intelligence, call centre, cuban missile crisis, discounted cash flows, Edward Thorp, Exxon Valdez, fixed income, hiring and firing, index fund, inventory management, Mahatma Gandhi, merger arbitrage, passive investing, price mechanism, Silicon Valley, time value of money, transaction costs, zero-sum game

Greenblatt does do some adjustments for excess capital, and so on, so it is not a totally raw data dump. Greenblatt goes on to suggest that an investor ought to build a portfolio of about 25 to 30 of these Magic Formula stocks. He recommends buying five to seven of them every two to three months. After a given stock has been held for a year, it is sold and replaced with another one from the updated Magic Formula list. Greenblatt’s back-testing showed that these Magic Formula stocks have generated returns as high as 20 percent to 30 percent annualized. It trounces the S&P 500—with no thinking or analysis required. If we step back and think about it, the Magic Formula is effectively an index. But it is the mother of all indexes—an index on steroids. I like to think of it as the Dhandho Index. It is an index that changes more frequently than other indexes, but the investor is better off as a result.
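The excerpt does not spell out the two Magic Formula factors, which in Greenblatt's book are earnings yield and return on capital. Taking those as given, a rough sketch of the ranking step might look like the following; the column names and data layout are hypothetical, and the buy-in-tranches, hold-one-year rotation described above would sit on top of it.

import pandas as pd

def magic_formula_picks(df, n=30):
    """df: one row per stock, with 'earnings_yield' and 'return_on_capital' columns."""
    # Rank on each factor separately (1 = best), then combine the two ranks
    combined = (df['earnings_yield'].rank(ascending=False)
                + df['return_on_capital'].rank(ascending=False))
    # The stocks with the best combined rank are the Magic Formula candidates
    return df.loc[combined.nsmallest(n).index]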


pages: 178 words: 52,637

Quality Investing: Owning the Best Companies for the Long Term by Torkell T. Eide, Lawrence A. Cunningham, Patrick Hargreaves

air freight, Albert Einstein, backtesting, barriers to entry, buy and hold, cashless society, cloud computing, commoditize, Credit Default Swap, discounted cash flows, discovery of penicillin, endowment effect, global pandemic, haute couture, hindsight bias, low cost airline, mass affluent, Network effects, oil shale / tar sands, pattern recognition, shareholder value, smart grid, sovereign wealth fund, supply-chain management

McKinsey & Company, ‘Perspectives on Founder- and Family-Owned Businesses’, 2014; and ‘Family Firms: Business in the blood’, in The Economist, November 2014.
Cristina Cruz Serrano and Laura Nuñez Letamendia, ‘Value Creation in Listed European Family Firms (2001-2010)’, Academy of Management Journal (2015).
‘The unsung masters of the oil industry’, The Economist, June 2012.
Safilo relisted in December 2005 at over €60/share, since which point its share price has declined over 80% in absolute terms.
Back-testing Credit Suisse’s HOLT data, we found that a strategy of owning eCAP stocks (eCAPs refer to stocks which have sustained superior CFROI over five or more years) would have delivered outperformance of the global market in 15 out of the last 20 years. This is broadly consistent with the findings of the Goldman Sachs SUSTAIN team, which sees an even more consistent long-term pattern of outperformance by first-quartile CROCCI companies on a sector-relative basis.


pages: 224 words: 13,238

Electronic and Algorithmic Trading Technology: The Complete Guide by Kendall Kim

algorithmic trading, automated trading system, backtesting, commoditize, computerized trading, corporate governance, Credit Default Swap, diversification, en.wikipedia.org, family office, financial innovation, fixed income, index arbitrage, index fund, interest rate swap, linked data, market fragmentation, money market fund, natural language processing, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, short selling, statistical arbitrage, Steven Levy, transaction costs, yield curve

QSG currently provides three major services for their clients:

• Factor Analyst: This stock selection research service leverages over 300 different stock selection indicators maintained and updated for portfolio construction and stock selection.
• Virtual Research Analyst: Portfolio managers can use this service to support any disciplined stock selection strategy. This research enables customization of candidate identification criteria, quick screening, backtesting, and quality control.
• T-Cost Pro: A Web-based transaction cost management product capable of producing detailed analysis of time-stamped executions on a T+1 basis.

QSG products are designed to help buy-side firms overcome the mediocrity associated with using simple benchmarks such as VWAP to conduct transaction cost analysis. QSG is currently in an ideal position to provide TCA service to buy-side firms and is also working on developing pre-trade analytics to provide additional structure to a growing algorithmic trading market.


pages: 236 words: 77,735

Rigged Money: Beating Wall Street at Its Own Game by Lee Munson

affirmative action, asset allocation, backtesting, barriers to entry, Bernie Madoff, Bretton Woods, business cycle, buy and hold, buy low sell high, California gold rush, call centre, Credit Default Swap, diversification, diversified portfolio, estate planning, fiat currency, financial innovation, fixed income, Flash crash, follow your passion, German hyperinflation, High speed trading, housing crisis, index fund, joint-stock company, money market fund, moral hazard, Myron Scholes, passive investing, Ponzi scheme, price discovery process, random walk, risk tolerance, risk-adjusted returns, risk/return, stocks for the long run, stocks for the long term, too big to fail, trade route, Vanguard fund, walking around money

It is 1990, and Cornelius is juggling debt, a mortgage, and still has to pay off his student loans to clown college. If it were 2010 he would be living at home begging you for the money. Only this time around, taxes can be avoided by investing in an IRA. By 1990, the wind is really at his back, because trades are cheaper from discount brokerage firms that didn’t previously exist, he can defer taxes because IRAs existed after 1974, and it’s easier to get the information to back-test investments, even before the Internet was widely used. 1990–2010 Results In 1990, Cornelius calculates that with his $100 he can buy 2.76 shares of Disney, 1.23 shares of Eastman Kodak, 1.18 shares of IBM, 3.45 shares of Coca-Cola, and 19.23 shares of Philip Morris (Table 1.3). Just like you, his broker doesn’t charge him any commissions. He doesn’t have to pay any taxes when he reinvests the dividends because he’s going to use an IRA.


pages: 192 words: 75,440

Getting a Job in Hedge Funds: An Inside Look at How Funds Hire by Adam Zoia, Aaron Finkel

backtesting, barriers to entry, collateralized debt obligation, commodity trading advisor, Credit Default Swap, credit default swaps / collateralized debt obligations, discounted cash flows, family office, fixed income, high net worth, interest rate derivative, interest rate swap, Long Term Capital Management, merger arbitrage, offshore financial centre, random walk, Renaissance Technologies, risk-adjusted returns, rolodex, short selling, side project, statistical arbitrage, stocks for the long run, systematic trading, unpaid internship, value at risk, yield curve, yield management

SAMPLE JOB SEARCHES

To further illustrate what hedge funds look for when hiring various types of risk managers, we thought it would be helpful to include some job specifications from actual searches.

Search 1: Hedge Fund Risk Analyst

Note: This fund has a director of risk management who is looking for an additional resource (risk analyst) to join his team and develop within the firm.

Description
• Responsible for periodic report production, including:
  • Value at risk (VaR) and volatility reporting by portfolio.
  • Back-testing and historical performance measurement.
  • Portfolio segmentation analysis.
  • Factor analysis reporting.
  • Position level:
    • Expected return by position.
    • Risk analysis by position.
    • Marginal impact.
  • Relative risk/reward performance:
    • Stress testing.
    • Correlation and concentration reporting by name, sector, and industry.
• Responsible for the development and maintenance of a risk management database:
  • Creation of a centralized risk management database repository.
  • Daily data extraction from trading systems (Eze Castle) and accounting systems (VPM).
  • Maintenance of security master and entity master tables.
  • Sourcing and storage of market pricing information.
  • Data cleaning and standardization.
  • Automation of data feeds from the risk management database to other applications (e.g., RiskMetrics) or models.
• Supporting portfolio analysis:
  • Position and portfolio volatility analysis.
  • Correlation and factor model development.
  • Relative risk-adjusted performance measurement.
  • Historical and prospective analysis.
  • Analysis by position, portfolio, strategy, and so on.
  • Ad hoc analysis of portfolio.


pages: 249 words: 77,342

The Behavioral Investor by Daniel Crosby

affirmative action, Asian financial crisis, asset allocation, availability heuristic, backtesting, bank run, Black Swan, buy and hold, cognitive dissonance, colonial rule, compound rate of return, correlation coefficient, correlation does not imply causation, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, Donald Trump, endowment effect, feminist movement, Flash crash, haute cuisine, hedonic treadmill, housing crisis, IKEA effect, impulse control, index fund, Isaac Newton, job automation, longitudinal study, loss aversion, market bubble, market fundamentalism, mental accounting, meta analysis, meta-analysis, Milgram experiment, moral panic, Murray Gell-Mann, Nate Silver, neurotypical, passive investing, pattern recognition, Ponzi scheme, prediction markets, random walk, Richard Feynman, Richard Thaler, risk tolerance, Robert Shiller, Robert Shiller, science of happiness, Shai Danziger, short selling, South Sea Bubble, Stanford prison experiment, Stephen Hawking, Steve Jobs, stocks for the long run, Thales of Miletus, The Signal and the Noise by Nate Silver, tulip mania, Vanguard fund

In ‘Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency’ by Jegadeesh and Titman, we see that from 1965 through 1989, winning stocks continued to outperform losing stocks on average over the next six to 12 months. And the size of the outperformance was sizeable – 1% per month, even after adjusting for return differences owing to other risk factors.127 Indeed, the effects of momentum tend to be pervasive and not limited with respect to market, place or time. Chris Geczy and Mikhail Samonov conducted what is affectionately referred to as the “world’s longest backtest” and found that momentum effects have persisted in the US since 1801!128 Momentum signals have worked well in the UK since the Victorian Age (Chabot, Ghysels and Jagannathan, 2009) and have proven their power and persistence across 40 countries and more than a dozen asset classes.129 So deep-seated are our psychological tendencies toward momentum that, “the momentum premium has been a part of markets since their very existence, well before researchers studied them as a science.”


pages: 302 words: 86,614

The Alpha Masters: Unlocking the Genius of the World's Top Hedge Funds by Maneet Ahuja, Myron Scholes, Mohamed El-Erian

activist fund / activist shareholder / activist investor, Asian financial crisis, asset allocation, asset-backed security, backtesting, Bernie Madoff, Bretton Woods, business process, call centre, collapse of Lehman Brothers, collateralized debt obligation, computerized trading, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, Donald Trump, en.wikipedia.org, family office, fixed income, high net worth, interest rate derivative, Isaac Newton, Long Term Capital Management, Marc Andreessen, Mark Zuckerberg, merger arbitrage, Myron Scholes, NetJets, oil shock, pattern recognition, Ponzi scheme, quantitative easing, quantitative trading / quantitative finance, Renaissance Technologies, risk-adjusted returns, risk/return, rolodex, short selling, Silicon Valley, South Sea Bubble, statistical model, Steve Jobs, systematic trading, zero-sum game

In January 2008, Dalio forewarned of the dangers of overreliance on tools like historical models during an interview with the Financial Times. “What is the most common mistake of investors?” he warned. “It is believing that things that worked in the past will continue to work and leveraging up to be on it. Nowadays, with the computer, it is easy to identify what would have worked and, with financial engineering, to create overoptimized strategies. I believe we are entering a period that will not be consistent with the back-testing, and problems will arise. When that dynamic exists and there’s close to zero interest rate, we knew that the ability of the central bank to ease monetary policy is limited.” When Dalio looks at the world today, he sees it divided into two parts—debtor-developed deficit countries and emerging market creditor countries. He further breaks it down into countries that have independent currency policies, and those whose currency and interest rate policies are linked.


pages: 317 words: 84,400

Automate This: How Algorithms Came to Rule Our World by Christopher Steiner

23andMe, Ada Lovelace, airport security, Al Roth, algorithmic trading, backtesting, big-box store, Black-Scholes formula, call centre, cloud computing, collateralized debt obligation, commoditize, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, Donald Trump, Douglas Hofstadter, dumpster diving, Flash crash, G4S, Gödel, Escher, Bach, High speed trading, Howard Rheingold, index fund, Isaac Newton, John Markoff, John Maynard Keynes: technological unemployment, knowledge economy, late fees, Marc Andreessen, Mark Zuckerberg, market bubble, medical residency, money market fund, Myron Scholes, Narrative Science, PageRank, pattern recognition, Paul Graham, Pierre-Simon Laplace, prediction markets, quantitative hedge fund, Renaissance Technologies, ride hailing / ride sharing, risk tolerance, Robert Mercer, Sergey Aleynikov, side project, Silicon Valley, Skype, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, transaction costs, upwardly mobile, Watson beat the top human players on Jeopardy!, Y Combinator

Peterffy needed a way to express all of this in one elegant algorithm that rightly weighted each factor. It was a complicated math problem that he found nearly impossible to solve. He cycled through spurts of dejection and inspiration. After working on the problem for more than a year, Peterffy devised an algorithm of differential equations that cleverly weighted all of the ingredients. He back-tested the algorithm to see if it would have made money in the past, but the data sets for commodities options at that point in history were limited. This was before computers handled such things adeptly and, more important, before the options market had much history. So Mocatta did the only thing it could: it started trading with the algorithm. It made money. The options markets weren’t the giant realms they are today, so the algorithm wasn’t able to harvest billions of dollars, but it gave Mocatta’s traders a big edge.


pages: 394 words: 85,252

The New Sell and Sell Short: How to Take Profits, Cut Losses, and Benefit From Price Declines by Alexander Elder

Atul Gawande, backtesting, buy and hold, buy low sell high, Checklist Manifesto, double helix, impulse control, paper trading, short selling, systematic trading, The Wealth of Nations by Adam Smith

Still, it would be incorrect to expect a lower stress level. Riding a trend is like riding a bucking horse that tries to shake you off. Holding on to a trend-following trade requires a great deal of patience and self-assurance—a lot of mental work. Question 7—System vs. Discretionary Trading Answer 3 Selection “A greater degree of freedom” is incorrect. System traders who have done a lot of backtesting can have a fairly high level of confidence knowing what profits or losses to expect down the road. If they have the discipline to follow all the signals of their system, they will lower their stress level, insulating themselves to a degree from uncertainty in the markets. What they give up is the freedom to make decisions as market conditions change, creating new threats or opportunities. Question 8—Technical Toolbox Answer 3 “Five bullets to a clip” allows you to use only five indicators.


pages: 268 words: 81,811

Flash Crash: A Trading Savant, a Global Manhunt, and the Most Mysterious Market Crash in History by Liam Vaughan

algorithmic trading, backtesting, bank run, barriers to entry, Bernie Madoff, Black Swan, Bob Geldof, centre right, collapse of Lehman Brothers, Donald Trump, Elliott wave, eurozone crisis, family office, Flash crash, high net worth, High speed trading, information asymmetry, Jeff Bezos, Kickstarter, margin call, market design, market microstructure, Nick Leeson, offshore financial centre, pattern recognition, Ponzi scheme, Ralph Nelson Elliott, Ronald Reagan, sovereign wealth fund, spectrum auction, Stephen Hawking, the market place, Tobin tax, tulip mania, yield curve, zero-sum game

CHAPTER 17 ◼ MR. X Navinder Sarao’s arrest would ultimately involve an army of people from agencies including the CFTC, the FBI, the Metropolitan Police, and the Department of Justice, but the investigation started with a lone individual with no affiliation to the government. He was a day trader, like Sarao, grinding out a living in a small prop firm in Chicago who, in 2012, happened to be back-testing his system using data from the day of the Flash Crash when he spotted something the whole world had missed. His identity has never been made public. We’ll call him Mr. X. Mr. X is a few years older than Sarao and started his career as part of a pit trading firm’s first forays into electronic trading. For a long time he was at the bottom rung; he struggled to cover the rent. He contemplated whether to pursue a career in his primary love, design.


pages: 345 words: 87,745

The Power of Passive Investing: More Wealth With Less Work by Richard A. Ferri

asset allocation, backtesting, Bernie Madoff, buy and hold, capital asset pricing model, cognitive dissonance, correlation coefficient, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, endowment effect, estate planning, Eugene Fama: efficient market hypothesis, fixed income, implied volatility, index fund, intangible asset, Long Term Capital Management, money market fund, passive investing, Paul Samuelson, Ponzi scheme, prediction markets, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, survivorship bias, too big to fail, transaction costs, Vanguard fund, yield curve, zero-sum game

A benchmark index is also known in the industry as a plain vanilla index and a beta seeking index. What qualifies as an index has broadened over the years as more ETFs come to market that follow highly customized nonstandard index methods. Today, it seems as though anything can be called an index. An index provider merely creates a mechanical set of rules for security selection, security weighting, and trading, and publishes its back-tested results. For example, an index may be made up of only dividend paying stocks with those stocks being weighted by dividend yield. Or, an index could include companies located west of the Mississippi that have female CEOs under the age of 50. Such an index doesn’t exist, but it would if a fund company thought they could sell an index fund or ETF to enough people based on that index.

Buy the Benchmarks

Benchmarks are the only type of index that passive investors should care about because they represent market returns and all subsections of a market.
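As a toy illustration of the kind of mechanical rule described earlier in this passage, the dividend-yield-weighted example can be expressed in a couple of lines; the tickers and yields here are made up.

import pandas as pd

# Hypothetical dividend yields for a handful of tickers
yields = pd.Series({'AAA': 0.041, 'BBB': 0.027, 'CCC': 0.0, 'DDD': 0.033})
payers = yields[yields > 0]          # rule 1: include only dividend payers
weights = payers / payers.sum()      # rule 2: weight each payer by its yield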


Deep Value by Tobias E. Carlisle

activist fund / activist shareholder / activist investor, Andrei Shleifer, availability heuristic, backtesting, business cycle, buy and hold, corporate governance, corporate raider, creative destruction, Daniel Kahneman / Amos Tversky, discounted cash flows, fixed income, intangible asset, joint-stock company, margin call, passive investing, principal–agent problem, Richard Thaler, riskless arbitrage, Robert Shiller, Robert Shiller, Rory Sutherland, shareholder value, Sharpe ratio, South Sea Bubble, statistical model, The Myth of the Rational Market, The Wealth of Nations by Adam Smith, Tim Cook: Apple

First and foremost, I’d like to thank my wife, Nickole, who took over the primary parental responsibilities for our newborn, Auristella, whose arrival marked the midpoint of the preparation of the first draft. I’d like to thank the early reviewers of that primordial first draft: Scott Reardon, Taylor Conant, Travis Dirks, PhD, Peter Love, Toby Shute, and my mother and father, Drs. Wendy and Roger Carlisle. I’d like to thank Jeffrey Oxman, PhD for his assistance with backtesting the various strategies discussed in the book. Finally, I appreciate the assistance of the team at Wiley Finance, most especially Bill Falloon, Lia Ottaviano, Angela Urquhart, Tiffany Charbonier, and Meg Freeborn, who provided guidance and advice along the way. xv About the Author Tobias Carlisle is the founder and managing director of Eyquem Investment Management LLC, and serves as portfolio manager of Eyquem Fund LP.


pages: 317 words: 106,130

The New Science of Asset Allocation: Risk Management in a Multi-Asset World by Thomas Schneeweis, Garry B. Crowder, Hossein Kazemi

asset allocation, backtesting, Bernie Madoff, Black Swan, business cycle, buy and hold, capital asset pricing model, collateralized debt obligation, commodity trading advisor, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, diversified portfolio, fixed income, high net worth, implied volatility, index fund, interest rate swap, invisible hand, market microstructure, merger arbitrage, moral hazard, Myron Scholes, passive investing, Richard Feynman, Richard Feynman: Challenger O-ring, risk tolerance, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, statistical model, stocks for the long run, survivorship bias, systematic trading, technology bubble, the market place, Thomas Kuhn: the structure of scientific revolutions, transaction costs, value at risk, yield curve, zero-sum game

Exhibit 2.1 (Array of Risk Determinants) lists the risk types to be covered (market, credit, liquidity, regulatory, concentration, leverage, operational, key person, reputational, counterparty, transparency, model, complexity, and derivatives risk, among others) alongside the corresponding oversight practices (written due diligence and valuation policies, stress testing, performance measures, written policies and guidelines, acknowledged fiduciaries, adequate systems and procedures, risk limits, model review, backtesting, independent risk oversight, backup and disaster recovery, education and knowledge, clear organization structure, and compliance monitoring).

It is rather like the story of the individual asking for help in finding his watch, only to be asked: Where did you lose it? His response that he lost it across the street but is looking under the lamp because the light is better here directly illustrates the point.


Work Less, Live More: The Way to Semi-Retirement by Robert Clyatt

asset allocation, backtesting, buy and hold, delayed gratification, diversification, diversified portfolio, employer provided health coverage, estate planning, Eugene Fama: efficient market hypothesis, financial independence, fixed income, future of work, index arbitrage, index fund, lateral thinking, Mahatma Gandhi, McMansion, merger arbitrage, money market fund, mortgage tax deduction, passive income, rising living standards, risk/return, Silicon Valley, Thorstein Veblen, transaction costs, unpaid internship, upwardly mobile, Vanguard fund, working poor, zero-sum game

Using data going back to the 1870s for 50-year payout periods, Greaney finds optimal stock/bond allocation at around 80% stocks, with a resulting 3.35% Safe Withdrawal Rate, inflation-adjusted annually. The study’s results are primarily focused on establishing a 100% safe rate. The study affirms approximately 90% safety around the 4% withdrawal rate level.

Guyton. Jonathan T. Guyton develops and back-tests various additional withdrawal rules that would have helped permit higher withdrawal rates while maintaining portfolio safety over time. Specifically, he develops rules for limiting inflation adjustments following “down” years and limiting any inflation adjustments to 6%. Problems would arise with these rules, however, since there is no “catch-up” in future years—a down year or inflation cap permanently lowers the real value of withdrawals for the lifetime of the semi-retiree.
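To make the mechanics concrete, here is a toy sketch of a withdrawal path using rules of that general shape. It is not Guyton's actual method; the rule details (skip the inflation adjustment after a down year, cap any adjustment at 6%) are taken only from the summary above, and the starting balance and initial rate are arbitrary.

def withdrawal_path(returns, inflation, start_balance=1_000_000, initial_rate=0.04):
    """Year-by-year portfolio balances under a simple rule-based withdrawal policy."""
    withdrawal = start_balance * initial_rate
    balance = start_balance
    balances = []
    for r, infl in zip(returns, inflation):
        # Withdraw at the start of the year, then earn that year's return
        balance = (balance - withdrawal) * (1 + r)
        # No inflation adjustment after a down year; otherwise cap the adjustment at 6%
        adjustment = 0.0 if r < 0 else min(infl, 0.06)
        withdrawal *= 1 + adjustment
        balances.append(balance)
    return balances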


pages: 416 words: 118,592

A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing by Burton G. Malkiel

accounting loophole / creative accounting, Albert Einstein, asset allocation, asset-backed security, backtesting, beat the dealer, Bernie Madoff, BRICs, butter production in bangladesh, buy and hold, capital asset pricing model, compound rate of return, correlation coefficient, Credit Default Swap, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, dogs of the Dow, Edward Thorp, Elliott wave, Eugene Fama: efficient market hypothesis, experimental subject, feminist movement, financial innovation, fixed income, framing effect, hindsight bias, Home mortgage interest deduction, index fund, invisible hand, Isaac Newton, Long Term Capital Management, loss aversion, margin call, market bubble, money market fund, mortgage tax deduction, new economy, Own Your Own Home, passive investing, Paul Samuelson, pets.com, Ponzi scheme, price stability, profit maximization, publish or perish, purchasing power parity, RAND corporation, random walk, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, short selling, Silicon Valley, South Sea Bubble, stocks for the long run, survivorship bias, The Myth of the Rational Market, the rule of 72, The Wisdom of Crowds, transaction costs, Vanguard fund, zero-coupon bond

If there’s nothing investors can exploit in a systematic way, time in and time out, then it’s very hard to say that information is not being properly incorporated into stock prices…. Real money investment strategies don’t produce the results that academic papers say they should. Roll’s final point was underscored for me during an exchange I had with a portfolio manager who used the most modern quantitative methods to run his portfolio on the basis of all the statistical work done by academics and practitioners. He “back-tested” his technique with historical data over a twenty-year period and found that it outperformed the Standard & Poor’s 500-Stock Index by three percentage points per year. But when he started using those quantitative methods with real money, his results were quite different. Over the next twenty-year period, he barely managed to equal the S&P return after expenses. This was an extraordinary performance and ranked him in the top 10 percent of all money managers.


pages: 289 words: 113,211

A Demon of Our Own Design: Markets, Hedge Funds, and the Perils of Financial Innovation by Richard Bookstaber

"Robert Solow", affirmative action, Albert Einstein, asset allocation, backtesting, beat the dealer, Black Swan, Black-Scholes formula, Bonfire of the Vanities, butterfly effect, commoditize, commodity trading advisor, computer age, computerized trading, disintermediation, diversification, double entry bookkeeping, Edward Lorenz: Chaos theory, Edward Thorp, family office, financial innovation, fixed income, frictionless, frictionless market, George Akerlof, implied volatility, index arbitrage, intangible asset, Jeff Bezos, John Meriwether, London Interbank Offered Rate, Long Term Capital Management, loose coupling, margin call, market bubble, market design, merger arbitrage, Mexican peso crisis / tequila crisis, moral hazard, Myron Scholes, new economy, Nick Leeson, oil shock, Paul Samuelson, Pierre-Simon Laplace, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk/return, Robert Shiller, Robert Shiller, rolodex, Saturday Night Live, selection bias, shareholder value, short selling, Silicon Valley, statistical arbitrage, The Market for Lemons, time value of money, too big to fail, transaction costs, tulip mania, uranium enrichment, William Langewiesche, yield curve, zero-coupon bond, zero-sum game

In mid-2002 the performance of stat arb strategies began to wane, and the standard methods have not recovered. This is not surprising, given the simplicity of the strategies, the ease of entry, and the proliferation of computer power. My son David had the bad luck to get started in this sort of strategy just as the window of opportunity was closing. The strategy had performed admirably in years of back-tests and in the first months of operation, but then sputtered along doing next to nothing. He closed it down from active trading after six months and then ran it on paper for another year, with no better results. The stat arb concept remains, but in place of the stat arb strategies of the late 1980s and the 1990s is an incarnation called high frequency trading. It performs the same liquidity function, but by monitoring aberrations in supply and demand based on real-time information.


pages: 425 words: 122,223

Capital Ideas: The Improbable Origins of Modern Wall Street by Peter L. Bernstein

"Robert Solow", Albert Einstein, asset allocation, backtesting, Benoit Mandelbrot, Black-Scholes formula, Bonfire of the Vanities, Brownian motion, business cycle, buy and hold, buy low sell high, capital asset pricing model, corporate raider, debt deflation, diversified portfolio, Eugene Fama: efficient market hypothesis, financial innovation, financial intermediation, fixed income, full employment, implied volatility, index arbitrage, index fund, interest rate swap, invisible hand, John von Neumann, Joseph Schumpeter, Kenneth Arrow, law of one price, linear programming, Louis Bachelier, mandelbrot fractal, martingale, means of production, money market fund, Myron Scholes, new economy, New Journalism, Paul Samuelson, profit maximization, Ralph Nader, RAND corporation, random walk, Richard Thaler, risk/return, Robert Shiller, Robert Shiller, Ronald Reagan, stochastic process, Thales and the olive presses, the market place, The Predators' Ball, the scientific method, The Wealth of Nations by Adam Smith, Thorstein Veblen, transaction costs, transfer pricing, zero-coupon bond, zero-sum game

Unlike Alexander, he had no computers to help him in this tiresome analysis; the only tools he used to prepare his graphs were a hand-cranked Monroe calculator and some sharp pencils. Although Fama’s efforts to develop profitable trading rules were by no means unsuccessful, the ones he found worked only on the old data, not on the new. He did not realize it at the time, but his frustrating experience was shared by many highly motivated investors seeking ways to beat the market. All too often, backtests give every promise of success but prove disappointing when investors try to apply them in real time. The environment shifts, market responses slow down or speed up, or too many people follow the same strategy and end up competing away one another’s potential profits. Like Alfred Cowles before him, Fama grew curious about why ideas that seem good on paper produce such disappointing results when real money is riding on them.


pages: 431 words: 132,416

No One Would Listen: A True Financial Thriller by Harry Markopolos

backtesting, barriers to entry, Bernie Madoff, buy and hold, call centre, centralized clearinghouse, correlation coefficient, diversified portfolio, Edward Thorp, Emanuel Derman, Eugene Fama: efficient market hypothesis, family office, financial thriller, fixed income, forensic accounting, high net worth, index card, Long Term Capital Management, Louis Bachelier, offshore financial centre, Ponzi scheme, price mechanism, quantitative trading / quantitative finance, regulatory arbitrage, Renaissance Technologies, risk-adjusted returns, risk/return, rolodex, Sharpe ratio, statistical arbitrage, too big to fail, transaction costs, your tax dollars at work

I’d had a lot of experience running these types of option-sensitive products. It took several months of playing with numbers to fulfill those parameters. Neil, of course, was a major contributor, and I got a lot of data from various major firms. From Citigroup for example, I got the complete S&P 500 price return histories from 1926 to the day I received it. Then I began putting things in, taking things out, testing and retesting and back-testing to see how each package would perform in various market environments. I did this knowing full well that Bernie hadn’t bothered to do any of this. He just sat down and made it up. It’s considerably easier that way—and you always get the results you want! Eventually I developed a product we named the Rampart Options Statistical Arbitrage. It was a product that would do extremely well in a market environment with low to moderately high volatility.


Commodity Trading Advisors: Risk, Performance Analysis, and Selection by Greg N. Gregoriou, Vassilios Karavas, François-Serge Lhabitant, Fabrice Douglas Rouah

Asian financial crisis, asset allocation, backtesting, buy and hold, capital asset pricing model, collateralized debt obligation, commodity trading advisor, compound rate of return, constrained optimization, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, distributed generation, diversification, diversified portfolio, dividend-yielding stocks, fixed income, high net worth, implied volatility, index arbitrage, index fund, interest rate swap, iterative process, linear programming, London Interbank Offered Rate, Long Term Capital Management, market fundamentalism, merger arbitrage, Mexican peso crisis / tequila crisis, p-value, Pareto efficiency, Ponzi scheme, quantitative trading / quantitative finance, random walk, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, stochastic process, survivorship bias, systematic trading, technology bubble, transaction costs, value at risk, zero-sum game

Eagleeye also advises investment companies on hedging strategies, benchmark construction, index replication strategies, and risk management. He has been involved in the commodity markets since 1994. Prior to joining Premia, he developed programmed trading applications for Morgan Stanley’s Equity Division and proprietary computer models for urban economics. From 1994 to 1998 he worked in the Derivative Strategies Group of Putnam Investments where he researched, back-tested, and implemented relative-value derivatives strategies. Mr. Eagleeye holds a degree in Applied Mathematics from Yale University and an M.B.A. from the University of California at Berkeley. Andrew Green graduated in March 2004 with an MBA degree in Finance from Thunderbird, the American Graduate School of International Management. He is a former Research Assistant at the High Energy Particle Physics Lab of Colorado State University.


pages: 537 words: 144,318

The Invisible Hands: Top Hedge Fund Traders on Bubbles, Crashes, and Real Money by Steven Drobny

Albert Einstein, Asian financial crisis, asset allocation, asset-backed security, backtesting, banking crisis, Bernie Madoff, Black Swan, Bretton Woods, BRICs, British Empire, business cycle, business process, buy and hold, capital asset pricing model, capital controls, central bank independence, collateralized debt obligation, commoditize, Commodity Super-Cycle, commodity trading advisor, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, currency peg, debt deflation, diversification, diversified portfolio, equity premium, family office, fiat currency, fixed income, follow your passion, full employment, George Santayana, Hyman Minsky, implied volatility, index fund, inflation targeting, interest rate swap, inventory management, invisible hand, Kickstarter, London Interbank Offered Rate, Long Term Capital Management, market bubble, market fundamentalism, market microstructure, moral hazard, Myron Scholes, North Sea oil, open economy, peak oil, pension reform, Ponzi scheme, prediction markets, price discovery process, price stability, private sector deleveraging, profit motive, purchasing power parity, quantitative easing, random walk, reserve currency, risk tolerance, risk-adjusted returns, risk/return, savings glut, selection bias, Sharpe ratio, short selling, sovereign wealth fund, special drawing rights, statistical arbitrage, stochastic volatility, stocks for the long run, stocks for the long term, survivorship bias, The Great Moderation, Thomas Bayes, time value of money, too big to fail, transaction costs, unbiased observer, value at risk, Vanguard fund, yield curve, zero-sum game

We ran some simulations and discovered that even a tiny 5 percent leveraged allocation to long U.S. government fixed income would, over time, generate more absolute return, better ratios of return-to-worst-drawdown, and less significant absolute worst drawdown levels. We then conducted a simple study that adds leveraged bond positions to a portfolio of 100 percent long domestic U.S. equities. The back-test results, from 1992 to 2009, show that adding 100 percent leverage to buy U.S. Treasuries increased annual yield by almost 5 percent while reducing the worst drawdown by 10 percent. Back-testing other periods, such as 1940 to 1980, yields less conclusive results; but it is clear more analytical work needs to be done in this area. It is also much too facile to say that leverage is bad on every occasion. Logically, since bonds can be repo’d at the cash rate and have a risk premium over cash, over time the cost of such insurance should actually be a positive to the fund (see box). (See Table 3.1.)
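A bare-bones sketch of the kind of overlay study described above might look like the following; the return series names are placeholders, and financing the leveraged Treasury position at the cash (repo) rate is the stated assumption, not the authors' actual methodology.

import pandas as pd

def overlay_returns(equity_rets, treasury_rets, cash_rets, leverage=1.0):
    """Periodic returns of 100% equities plus a leveraged Treasury position financed at cash."""
    return equity_rets + leverage * (treasury_rets - cash_rets)

def worst_drawdown(rets):
    """Deepest peak-to-trough decline of the cumulative wealth curve."""
    wealth = (1 + rets).cumprod()
    return (wealth / wealth.cummax() - 1).min()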


pages: 444 words: 151,136

Endless Money: The Moral Hazards of Socialism by William Baker, Addison Wiggin

Andy Kessler, asset allocation, backtesting, bank run, banking crisis, Berlin Wall, Bernie Madoff, Black Swan, Branko Milanovic, break the buck, Bretton Woods, BRICs, business climate, business cycle, capital asset pricing model, commoditize, corporate governance, correlation does not imply causation, credit crunch, Credit Default Swap, crony capitalism, cuban missile crisis, currency manipulation / currency intervention, debt deflation, Elliott wave, en.wikipedia.org, Fall of the Berlin Wall, feminist movement, fiat currency, fixed income, floating exchange rates, Fractional reserve banking, full employment, German hyperinflation, housing crisis, income inequality, index fund, inflation targeting, Joseph Schumpeter, Kickstarter, laissez-faire capitalism, land reform, liquidity trap, Long Term Capital Management, McMansion, mega-rich, money market fund, moral hazard, mortgage tax deduction, naked short selling, negative equity, offshore financial centre, Ponzi scheme, price stability, pushing on a string, quantitative easing, RAND corporation, rent control, reserve currency, riskless arbitrage, Ronald Reagan, school vouchers, seigniorage, short selling, Silicon Valley, six sigma, statistical arbitrage, statistical model, Steve Jobs, stocks for the long run, The Great Moderation, the scientific method, time value of money, too big to fail, upwardly mobile, War on Poverty, Yogi Berra, young professional

In 1996, near the end of the best long-term equity return period of several lifetimes spanned by the Ibbotson data, appeared one Jeremy Siegel with his “definitive guide to high-return, low-risk equities,” a book for the masses titled Stocks for the Long Run. Siegel has escaped from the world of academia into the lucrative world of Wall Street through establishing WisdomTree, a provider of ETFs and mutual funds. WisdomTree tweaks the major indices to squeeze out a slightly better return with less volatility—all based upon statistical analysis thoughtfully proven through roughly 40 years of backtesting. The strategy is to exploit a structural flaw that requires index funds to buy more of stocks that go up and sell as underperformers go down; instead it does the opposite by slightly overweighting holdings of high dividend yielding or low P/E stocks. While the approach appears to be successful and probably improves upon the returns of individuals plunging their IRAs into hot tips heard at the country club tap room, demand for this product may be indicative of the public’s unwavering faith in equities and bonds, and buying-on-dips right up until the end.


Investment: A History by Norton Reamer, Jesse Downing

activist fund / activist shareholder / activist investor, Albert Einstein, algorithmic trading, asset allocation, backtesting, banking crisis, Berlin Wall, Bernie Madoff, break the buck, Brownian motion, business cycle, buttonwood tree, buy and hold, California gold rush, capital asset pricing model, Carmen Reinhart, carried interest, colonial rule, credit crunch, Credit Default Swap, Daniel Kahneman / Amos Tversky, debt deflation, discounted cash flows, diversified portfolio, dogs of the Dow, equity premium, estate planning, Eugene Fama: efficient market hypothesis, Fall of the Berlin Wall, family office, Fellow of the Royal Society, financial innovation, fixed income, Gordon Gekko, Henri Poincaré, high net worth, index fund, information asymmetry, interest rate swap, invention of the telegraph, James Hargreaves, James Watt: steam engine, joint-stock company, Kenneth Rogoff, labor-force participation, land tenure, London Interbank Offered Rate, Long Term Capital Management, loss aversion, Louis Bachelier, margin call, means of production, Menlo Park, merger arbitrage, money market fund, moral hazard, mortgage debt, Myron Scholes, negative equity, Network effects, new economy, Nick Leeson, Own Your Own Home, Paul Samuelson, pension reform, Ponzi scheme, price mechanism, principal–agent problem, profit maximization, quantitative easing, RAND corporation, random walk, Renaissance Technologies, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Robert Shiller, Sand Hill Road, Sharpe ratio, short selling, Silicon Valley, South Sea Bubble, sovereign wealth fund, spinning jenny, statistical arbitrage, survivorship bias, technology bubble, The Wealth of Nations by Adam Smith, time value of money, too big to fail, transaction costs, underbanked, Vanguard fund, working poor, yield curve

The computer gave financial practitioners access to a wealth of information and data that was previously rather intractable to synthesize and without which it was virtually impossible to test rigorous models. Further, the very notion of a quantitative fund—a “quant” fund—or a quantitative strategy is simply inconceivable without the computer. Without the aid of the computer, one could not construct and back-test robust models or even generate signals where certain criteria were met.

The Hedge Fund Universe Today

As of 2014, the hedge fund industry had approximately $2.5 trillion in assets under management. Additionally, approximately $455 billion was in funds of hedge funds, a diversified investment vehicle designed to add value by selecting and overseeing other hedge fund managers that generate alpha.20 Before discussing these funds of funds in more detail, let us consider the various strategies of individual funds.


Trade Your Way to Financial Freedom by van K. Tharp

asset allocation, backtesting, Bretton Woods, buy and hold, capital asset pricing model, commodity trading advisor, compound rate of return, computer age, distributed generation, diversification, dogs of the Dow, Elliott wave, high net worth, index fund, locking in a profit, margin call, market fundamentalism, passive income, prediction markets, price stability, random walk, reserve currency, risk tolerance, Ronald Reagan, Sharpe ratio, short selling, transaction costs

You sell after a move up and buy after a move down. This is very difficult for many trend followers. There are some stocks and commodities that do not trend very well and make poor trend-following candidates. Likewise, there are those that have ranges that are too tight for band trading or do not trade well in ranges (they frequently extend far past bands, for example). These can be identified only through experience and/or backtesting. Editor’s Comments Band trading typically gives you lots of trading opportunities, and it is excellent for short-term traders. Thus, if you like (1) lots of trading activity, (2) selling highs, and (3) buying lows, then some form of band trading might be right for you. If you look at the charts, you’ll see many examples that work very well and many examples that do not work at all. Your job as a band trader would be to (1) maximize the good trades and (2) minimize the losing trades by either filtering them out or reducing their impact through your exits.


pages: 566 words: 163,322

The Rise and Fall of Nations: Forces of Change in the Post-Crisis World by Ruchir Sharma

Asian financial crisis, backtesting, bank run, banking crisis, Berlin Wall, Bernie Sanders, BRICs, business climate, business cycle, business process, call centre, capital controls, Capital in the Twenty-First Century by Thomas Piketty, Carmen Reinhart, central bank independence, centre right, colonial rule, Commodity Super-Cycle, corporate governance, creative destruction, crony capitalism, currency peg, dark matter, debt deflation, deglobalization, deindustrialization, demographic dividend, demographic transition, Deng Xiaoping, Doha Development Round, Donald Trump, Edward Glaeser, Elon Musk, eurozone crisis, failed state, Fall of the Berlin Wall, falling living standards, Francis Fukuyama: the end of history, Freestyle chess, Gini coefficient, hiring and firing, income inequality, indoor plumbing, industrial robot, inflation targeting, Internet of things, Jeff Bezos, job automation, John Markoff, Joseph Schumpeter, Kenneth Rogoff, Kickstarter, knowledge economy, labor-force participation, lateral thinking, liberal capitalism, Malacca Straits, Mark Zuckerberg, market bubble, mass immigration, megacity, Mexican peso crisis / tequila crisis, mittelstand, moral hazard, New Economic Geography, North Sea oil, oil rush, oil shale / tar sands, oil shock, pattern recognition, Paul Samuelson, Peter Thiel, pets.com, plutocrats, Plutocrats, Ponzi scheme, price stability, Productivity paradox, purchasing power parity, quantitative easing, Ralph Waldo Emerson, random walk, rent-seeking, reserve currency, Ronald Coase, Ronald Reagan, savings glut, secular stagnation, Shenzhen was a fishing village, Silicon Valley, Silicon Valley startup, Simon Kuznets, smart cities, Snapchat, South China Sea, sovereign wealth fund, special economic zone, spectrum auction, Steve Jobs, The Future of Employment, The Wisdom of Crowds, Thomas Malthus, total factor productivity, trade liberalization, trade route, tulip mania, Tyler Cowen: Great Stagnation, unorthodox policies, Washington Consensus, WikiLeaks, women in the workforce, working-age population

The Practical Art These rules emerged from my twenty-five years on the road, trying to understand the forces of change both in theory and in the real world. The reason I developed rules at all was to focus my eyes and those of my team on what matters. When we visit a country, we gather impressions, storylines, facts, and data. While insight is embedded in all observations, we have to know which ones have a reliable history of telling us something about a nation’s future. The rules systematize our thoughts and have been back-tested to determine what has worked and what has not. Eliminating the inessential helps steer the conversation to what is relevant in evaluating whether a country is on the rise or in decline. I have narrowed the voluminous lists of growth factors to a number that is large enough to keep the most significant forces of change on our radar but small enough to be manageable. In theory, growth in an economy can be broken down in a number of ways, but some methods are more useful than others.


pages: 733 words: 179,391

Adaptive Markets: Financial Evolution at the Speed of Thought by Andrew W. Lo

"Robert Solow", Albert Einstein, Alfred Russel Wallace, algorithmic trading, Andrei Shleifer, Arthur Eddington, Asian financial crisis, asset allocation, asset-backed security, backtesting, bank run, barriers to entry, Berlin Wall, Bernie Madoff, bitcoin, Bonfire of the Vanities, bonus culture, break the buck, Brownian motion, business cycle, business process, butterfly effect, buy and hold, capital asset pricing model, Captain Sullenberger Hudson, Carmen Reinhart, collapse of Lehman Brothers, collateralized debt obligation, commoditize, computerized trading, corporate governance, creative destruction, Credit Default Swap, credit default swaps / collateralized debt obligations, cryptocurrency, Daniel Kahneman / Amos Tversky, delayed gratification, Diane Coyle, diversification, diversified portfolio, double helix, easy for humans, difficult for computers, Ernest Rutherford, Eugene Fama: efficient market hypothesis, experimental economics, experimental subject, Fall of the Berlin Wall, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, Fractional reserve banking, framing effect, Gordon Gekko, greed is good, Hans Rosling, Henri Poincaré, high net worth, housing crisis, incomplete markets, index fund, interest rate derivative, invention of the telegraph, Isaac Newton, James Watt: steam engine, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Meriwether, Joseph Schumpeter, Kenneth Rogoff, London Interbank Offered Rate, Long Term Capital Management, longitudinal study, loss aversion, Louis Pasteur, mandelbrot fractal, margin call, Mark Zuckerberg, market fundamentalism, martingale, merger arbitrage, meta analysis, meta-analysis, Milgram experiment, money market fund, moral hazard, Myron Scholes, Nick Leeson, old-boy network, out of africa, p-value, paper trading, passive investing, Paul Lévy, Paul Samuelson, Ponzi scheme, predatory finance, prediction markets, price discovery process, profit maximization, profit motive, quantitative hedge fund, quantitative trading / quantitative finance, RAND corporation, random walk, randomized controlled trial, Renaissance Technologies, Richard Feynman, Richard Feynman: Challenger O-ring, risk tolerance, Robert Shiller, Robert Shiller, Sam Peltzman, Shai Danziger, short selling, sovereign wealth fund, Stanford marshmallow experiment, Stanford prison experiment, statistical arbitrage, Steven Pinker, stochastic process, stocks for the long run, survivorship bias, Thales and the olive presses, The Great Moderation, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, Thomas Malthus, Thorstein Veblen, Tobin tax, too big to fail, transaction costs, Triangle Shirtwaist Factory, ultimatum game, Upton Sinclair, US Airways Flight 1549, Walter Mischel, Watson beat the top human players on Jeopardy!, WikiLeaks, Yogi Berra, zero-sum game

For statarb and other quantitative equity hedge funds, the second week of August was absolutely terrifying, whereas other types of hedge funds and portfolios cruised through the month, hardly noticing. Amir Khandani had just come back to MIT from a summer internship and was looking for a thesis topic. I suggested that we try to figure out what happened during the Quant Meltdown by simulating a simple quantitative equity trading strategy.26 A common practice in the investment business is to evaluate a particular strategy by performing a “backtest,” or “paper trading,” where you use historical prices to calculate the realized profits and losses of trades that the strategy would have called for. For example, suppose a superstitious friend tells you that you should never buy stocks on Friday the Thirteenth—is that good advice? One way to evaluate this advice is to compute the average return of the S&P 500 index between Fridays and Mondays for all Fridays that fall on the Thirteenth, and then do the same for all non-Thirteenth Fridays and Mondays, and compare the two averages.
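A minimal sketch of that calendar comparison, assuming a pandas Series of daily S&P 500 closing prices indexed by date (the data source and variable names are placeholders, not from the book):

import pandas as pd

def friday_13th_effect(close: pd.Series) -> pd.Series:
    # Compare the return from each Friday's close to the next trading day's close
    # (normally the following Monday) for Fridays on the 13th vs. all other Fridays.
    close = close.sort_index()
    fwd_ret = close.shift(-1) / close - 1.0      # close-to-next-close return
    is_friday = close.index.weekday == 4         # Monday=0 ... Friday=4
    is_13th = close.index.day == 13
    friday_13 = fwd_ret[is_friday & is_13th]
    other_fri = fwd_ret[is_friday & ~is_13th]
    return pd.Series({
        "avg return after Friday the 13th": friday_13.mean(),
        "avg return after other Fridays": other_fri.mean(),
        "Friday-the-13th observations": friday_13.count(),
    })

Any difference between the two averages would still need a significance test before concluding anything; framing that kind of question is exactly what a simple backtest like this is for.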


pages: 1,042 words: 266,547

Security Analysis by Benjamin Graham, David Dodd

activist fund / activist shareholder / activist investor, asset-backed security, backtesting, barriers to entry, business cycle, buy and hold, capital asset pricing model, carried interest, collateralized debt obligation, collective bargaining, corporate governance, corporate raider, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, diversification, diversified portfolio, fear of failure, financial innovation, fixed income, full employment, index fund, intangible asset, invisible hand, Joseph Schumpeter, locking in a profit, Long Term Capital Management, low cost airline, low cost carrier, moral hazard, mortgage debt, Myron Scholes, Right to Buy, risk-adjusted returns, risk/return, secular stagnation, shareholder value, The Chicago School, the market place, the scientific method, The Wealth of Nations by Adam Smith, transaction costs, zero-coupon bond

(pp. 240–241) Today, of course, the activities of creditors’ committees, which play a major role in reorganizations, are closely supervised. 15 Neporent’s testimony is available at judiciary.senate.gov. 16 Bill Miller, “Good Times Are Coming!” Time, March 8, 2005. 17 Kenneth L. Fisher, 100 Minds That Made the Market (New York: Wiley, 2007), p. 61. Fisher goes on to observe of this late-in-life conversion: “Ironically, Graham’s adoption of ‘the efficient market’ was just before computer backtests would poke all kind of holes in that theory.” 18 Kenneth Lee, Trouncing the Dow: A Value-Based Method for Making Huge Profits (New York: McGraw-Hill, 1998), pp. 1–2. 19 In Berkshire Hathaway’s 2000 annual report, Buffett said of his experience in Graham’s class that “a few hours at the feet of the master proved far more valuable to me than had ten years of supposedly original thinking.” 20 Hamlet, Act III, Scene 2. 1 In the 1934 edition we had here a section on investment-quality senior issues obtainable at bargain levels.