discrete time



Mathematical Finance: Core Theory, Problems and Statistical Algorithms by Nikolai Dokuchaev

Black-Scholes formula, Brownian motion, buy and hold, buy low sell high, discrete time, fixed income, implied volatility, incomplete markets, martingale, random walk, risk free rate, short selling, stochastic process, stochastic volatility, transaction costs, volatility smile, Wiener process, zero-coupon bond

For a model that takes into account the impact of a large investor's behaviour, (ρm, Sm) is affected by {γk}k<m.

3.14 Conclusions

• A discrete time market model is the most generic one, and it covers any market with time series of prices. Strategies developed for this model can be implemented directly. The discrete time model does not require the theory of stochastic integrals.
• Unfortunately, discrete time models are difficult for theoretical investigations, and their role in mathematical finance is limited. A discrete time market model is complete only for the very special case of a two-point distribution (for the Cox-Ross-Rubinstein model and for a model from Remark 3.39).

It can be seen that randomness is present only at the initial time t=0 for the process from the last example, and the evolution of this process is uniquely defined by its initial data. The following definitions give examples that are different.

Definition 2.5 Let ξt, t=0, 1, 2,…, be a discrete time random process such that the ξt are mutually independent, have the same distribution, and Eξt ≡ 0. Then the process ξt is said to be a discrete time white noise.

Definition 2.6 Let ξt be a discrete time white noise, and let ηt = ξ0 + ξ1 + … + ξt, t=0, 1, 2,…. Then the process ηt is said to be a random walk.

The theory of stochastic processes studies their pathwise properties (i.e., properties of the trajectories ξ(t, ω) for given ω), as well as the evolution of their probability distributions.
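Definitions 2.5–2.6 translate directly into a few lines of NumPy. A minimal sketch — the variable names and the choice of a normal distribution for the white noise are illustrative; any i.i.d. zero-mean distribution qualifies:

```python
import numpy as np

rng = np.random.default_rng(42)

# Discrete time white noise (Definition 2.5): mutually independent,
# identically distributed, zero-mean variables xi_0, xi_1, ...
T = 1000
xi = rng.standard_normal(T)

# Random walk (Definition 2.6): partial sums eta_t = xi_0 + ... + xi_t.
eta = np.cumsum(xi)
```

Unlike the white noise, whose distribution is the same at every t, the random walk's variance grows linearly in t.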

With some standard techniques from quadratic optimization, its solution can be used for practically interesting problems with constraints such as EX1 → max, Var X1 ≤ const, or Var X1 → min, EX1 ≥ const.

Remark 3.59 The solution of the optimal investment problem for a discrete time market with T>1 is much more difficult. For instance, Markowitz's results for quadratic U were extended to the case of T>1 only recently (Li and Ng, 2000).
© 2007 Nikolai Dokuchaev, Discrete Time Market Models, p. 45

3.13 Possible generalizations

The discrete time market model allows some other variants, some of which are described below.
• One can consider an additive model for the stock price, where St = S0 + ξ1 + … + ξt. This approach leads to a very similar theory.


pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander

asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, fixed income, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk free rate, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, Thomas Bayes, transaction costs, two and twenty, value at risk, volatility smile, Wiener process, yield curve, zero-sum game

Arithmetic Brownian motion is the continuous time version of a random walk. To see this, we first note that the discrete time equivalent of the Brownian increment dZt is a standard normal independent process: i.e. as we move from continuous to discrete time, dZt → ΔZt ∼ NID(0, 1). Also the increment dXt in continuous time becomes a first difference ΔXt = Xt − Xt−1 in discrete time. Using these discrete time equivalents gives the following discretization of (I.3.141): ΔXt = μ + εt, where εt = σZt, so εt ∼ NID(0, σ²). But this is the same as Xt = μ + Xt−1 + εt, which is the discrete time random walk model (I.3.140). So (I.3.141) is non-stationary.
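The discretization above can be simulated directly. A short sketch — the drift and volatility values are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.1, 0.5        # drift and volatility (illustrative values)
n = 5000

# Discretized arithmetic Brownian motion: Delta X_t = mu + eps_t,
# with eps_t = sigma * Z_t ~ NID(0, sigma^2), i.e. the random walk
# with drift X_t = mu + X_{t-1} + eps_t.
eps = sigma * rng.standard_normal(n)
X = np.cumsum(mu + eps)     # path started at X_0 = 0

# Non-stationarity: Var(X_t) = t * sigma^2, growing without bound in t.
```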

The change in the log price is the log return, so using the standard discrete time notation Pt for a price at time t we have d ln St → Δ ln Pt. Hence the discrete time equivalent of (I.3.145) is Δ ln Pt = μ̄ + εt, where μ̄ = μ − ½σ² and εt ∼ NID(0, σ²). (I.3.146) This is equivalent to a discrete time random walk model for the log prices, i.e. ln Pt = μ̄ + ln Pt−1 + εt, εt ∼ NID(0, σ²). (I.3.147) To summarize, the assumption of geometric Brownian motion for prices in continuous time is equivalent to the assumption of a random walk for log prices in discrete time.

I.3.7.4 Jumps and the Poisson Process

A Poisson process, introduced in Section I.3.3.2, is a stochastic process governing the occurrences of events through time.
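The GBM/log-price-random-walk equivalence can be simulated in a few lines. A sketch with daily steps — the parameter values and the 252-day convention are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

m, sigma, P0, n = 0.08, 0.2, 100.0, 252   # annual drift/vol, daily steps
dt = 1.0 / 252

# Random walk for the log prices: ln P_t = mu_bar * dt + ln P_{t-1} + eps_t,
# with mu_bar = m - sigma^2 / 2 and eps_t ~ NID(0, sigma^2 * dt).
mu_bar = m - 0.5 * sigma**2
eps = sigma * np.sqrt(dt) * rng.standard_normal(n)
logP = np.log(P0) + np.cumsum(mu_bar * dt + eps)

# Exponentiating recovers a geometric-Brownian-motion price path (always > 0).
P = np.exp(logP)
```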

Then, using σ̂ in place of σ we have est.se(X̄) = σ̂/√n (I.3.135) and est.se(σ̂²) = σ̂²√(2/n). (I.3.136)

I.3.7 STOCHASTIC PROCESSES IN DISCRETE AND CONTINUOUS TIME

A stochastic process is a sequence of identically distributed random variables. For most of our purposes random variables are continuous, indeed they are often assumed to be normal, but the sequence may be over continuous or discrete time. That is, we consider continuous state processes in both continuous and discrete time.
• The study of discrete time stochastic processes is called time series analysis. In the time domain the simplest time series models are based on regression analysis, which is introduced in the next chapter. A simple example of a time series model is the first order autoregression, and this is defined below along with a basic test for stationarity.


Mathematics for Finance: An Introduction to Financial Engineering by Marek Capinski, Tomasz Zastawniak

Black-Scholes formula, Brownian motion, capital asset pricing model, cellular automata, delta neutral, discounted cash flows, discrete time, diversified portfolio, fixed income, interest rate derivative, interest rate swap, locking in a profit, London Interbank Offered Rate, margin call, martingale, quantitative trading / quantitative finance, random walk, risk free rate, short selling, stochastic process, time value of money, transaction costs, value at risk, Wiener process, zero-coupon bond

With a larger number of stocks comprising the index the transaction costs would have been too high to make such a construction practicable.

7 Options: General Properties

In Chapters 1 and 4 we have seen simple examples of call and put options in a one-step discrete-time setting. Here we shall establish some fundamental properties of options, looking at them from a wider perspective and using continuous time. Nevertheless, many conclusions will also be valid in discrete time. Chapter 8 will be devoted to pricing and hedging options.

7.1 Definitions

A European call option is a contract giving the holder the right to buy an asset, called the underlying, for a price X fixed in advance, known as the exercise price or strike price, at a specified future time T, called the exercise or expiry time.

In place of this, we shall exploit an analogy with the discrete time case. As a starting point we take the continuous time model of stock prices developed in Chapter 3 as a limit of suitably scaled binomial models with time steps going to zero. In the resulting continuous time model the stock price is given by S(t) = S(0)e^(mt+σW(t)), (8.5) where W(t) is the standard Wiener process (Brownian motion), see Section 3.3.2. This means, in particular, that S(t) has the log normal distribution. Consider a European option on the stock expiring at time T with payoff f(S(T)). As in the discrete-time case, see Theorem 8.4, the time 0 price D(0) of the option ought to be equal to the expectation of the discounted payoff e^(−rT)f(S(T)), D(0) = E*[e^(−rT) f(S(T))], (8.6) under a risk-neutral probability P* that turns the discounted stock price process e^(−rt)S(t) into a martingale.

Here we shall accept this formula without proof, by analogy with the discrete time result. (The proof is based on an arbitrage argument combined with a bit of Stochastic Calculus, the latter beyond the scope of this book.) What is the risk-neutral probability P*, then? A necessary condition is that the expectation of the discounted stock prices e^(−rt)S(t) should be constant (independent of t), just like in the discrete time case, see (3.5). Let us compute this expectation using the real market probability P. Since W(t) is normally distributed with mean 0 and variance t, it has density (1/√(2πt)) e^(−x²/(2t)) under probability P.
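Formula (8.6) lends itself to Monte Carlo evaluation. A sketch for a European call, assuming (as the text goes on to derive) that under P* the drift m is replaced by r − σ²/2; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

S0, r, sigma, T, X = 100.0, 0.05, 0.2, 1.0, 100.0   # illustrative parameters
n = 200_000

# Under P*: S(T) = S(0) exp((r - sigma^2/2) T + sigma W(T)), W(T) ~ N(0, T),
# which makes the discounted price e^{-rt} S(t) a martingale.
W_T = np.sqrt(T) * rng.standard_normal(n)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)

# (8.6): D(0) = E*[ e^{-rT} f(S(T)) ], here with f(s) = max(s - X, 0).
D0 = np.exp(-r * T) * np.maximum(S_T - X, 0.0).mean()
```

With these parameters the estimate lands near the Black-Scholes value of about 10.45.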


Mathematical Finance: Theory, Modeling, Implementation by Christian Fries

Black-Scholes formula, Brownian motion, continuous integration, discrete time, fixed income, implied volatility, interest rate derivative, martingale, quantitative trading / quantitative finance, random walk, short selling, Steve Jobs, stochastic process, stochastic volatility, volatility smile, Wiener process, zero-coupon bond

To motivate the class of Itô processes we consider Brownian motion at discrete times 0 = T_0 < T_1 < … < T_N. The random variable W(T_i) (the position of the particle) may be expressed through the increments ΔW(T_i) := W(T_{i+1}) − W(T_i):

W(T_i) = Σ_{j=0}^{i−1} ΔW(T_j).

Using the increments ΔW(T_j) we may define a whole family of discrete stochastic processes. We give a step-by-step introduction and use the illustrative interpretation of a particle movement: first we assume that the particle may lose energy over time.11

11 In a (one-dimensional) random walk a particle changes position at discrete time steps by a (constant) distance (say 1) in either direction with equal probability.
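The identity W(T_i) = Σ ΔW(T_j) is exactly how Brownian paths are generated numerically: draw independent increments, then accumulate them. A sketch — the grid size and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

N, dt = 500, 0.01                      # illustrative grid T_i = i * dt

# Independent increments Delta W(T_j) = W(T_{j+1}) - W(T_j) ~ N(0, dt).
dW = np.sqrt(dt) * rng.standard_normal(N)

# W(T_i) = sum_{j=0}^{i-1} Delta W(T_j); prepend W(T_0) = 0.
W = np.concatenate(([0.0], np.cumsum(dW)))
```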

Greeks in the Black-Scholes model. This work is licensed under a Creative Commons License. http://creativecommons.org/licenses/by-nc-nd/2.5/deed.en (german version) Comments welcome. ©2004, 2005, 2006 Christian Fries Version 1.3.19 [build 20061210] - 13th December 2006 http://www.christian-fries.de/finmath/

7.4. Hedging in Discrete Time: Delta- and Delta-Gamma-Hedging

If a delta hedge is applied continuously, it provides an exact replication. The portfolio is neutral with respect to infinitesimal changes of the underlyings S_i. If, however, the delta hedge is carried out only at discrete times T_i, i.e. if the portfolio process (φ_0, φ_1, …, φ_n) is chosen constant on the time intervals [T_i, T_{i+1}), then the replication is not exact.6 While in the continuous case

dV(t) = ∂V(t)/∂t dt + Σ_{i=0}^{n} ∂V(t)/∂S_i dS_i + ½ Σ_{i,j=0}^{n} ∂²V(t)/(∂S_i ∂S_j) dS_i dS_j
      = Σ_{i=0}^{n} ∂V(t)/∂S_i dS_i + [ ∂V(t)/∂t dt + ½ Σ_{i,j=0}^{n} ∂²V(t)/(∂S_i ∂S_j) dS_i dS_j ]   (braced term = 0)

is exact by Itô's Lemma (and (7.4)), in the time-discrete case (ΔV = V(t+Δt) − V(t)) we have

ΔV(t) = ∂V(t)/∂t Δt + Σ_{i=0}^{n} ∂V(t)/∂S_i ΔS_i + ½ Σ_{i,j=0}^{n} ∂²V(t)/(∂S_i ∂S_j) ΔS_i ΔS_j   (7.10), (...)

delta-gamma hedge chosen and not changed again until the option maturity T (model and option parameters correspond to those in Figure 7.1).

7.5. Hedging in Discrete Time: Minimizing the Residual Error (Bouchaud-Sornette Method)

The delta hedge carries the optimal trading strategy for continuous trading over to the time-discrete case, for which, however, the strategy need not be optimal.13 A more adequate determination of a risk-minimizing trading strategy is instead obtained by considering the residual risk directly: we seek the trading strategy that minimizes the residual risk.


pages: 819 words: 181,185

Derivatives Markets by David Goldenberg

Black-Scholes formula, Brownian motion, capital asset pricing model, commodity trading advisor, compound rate of return, conceptual framework, correlation coefficient, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, financial innovation, fudge factor, implied volatility, incomplete markets, interest rate derivative, interest rate swap, law of one price, locking in a profit, London Interbank Offered Rate, Louis Bachelier, margin call, market microstructure, martingale, Myron Scholes, Norbert Wiener, Paul Samuelson, price mechanism, random walk, reserve currency, risk free rate, risk/return, riskless arbitrage, Sharpe ratio, short selling, stochastic process, stochastic volatility, time value of money, transaction costs, volatility smile, Wiener process, yield curve, zero-coupon bond, zero-sum game

INTRODUCTION TO FINANCIAL FUTURES CONTRACTS
PART 2 Trading Structures Based on Forward Contracts
8. STRUCTURED PRODUCTS, INTEREST-RATE SWAPS
PART 3 Options
9. INTRODUCTION TO OPTIONS MARKETS
10. OPTION TRADING STRATEGIES, PART 1
11. RATIONAL OPTION PRICING
12. OPTION TRADING STRATEGIES, PART 2
13. MODEL-BASED OPTION PRICING IN DISCRETE TIME, PART 1: THE BINOMIAL OPTION PRICING MODEL (BOPM, N=1)
14. OPTION PRICING IN DISCRETE TIME, PART 2: DYNAMIC HEDGING AND THE MULTI-PERIOD BINOMIAL OPTION PRICING MODEL, N>1
15. EQUIVALENT MARTINGALE MEASURES: A MODERN APPROACH TO OPTION PRICING
16. OPTION PRICING IN CONTINUOUS TIME
17. RISK-NEUTRAL VALUATION, EMMS, THE BOPM, AND BLACK–SCHOLES
Index
DETAILED CONTENTS List of figures List of tables Preface Acknowledgments
PART 1 Forward Contracts and Futures Contracts
CHAPTER 1 SPOT, FORWARD, AND FUTURES CONTRACTING 1.1 Three Ways to Buy and Sell Commodities 1.2 Spot Market Contracting (Motivation and Examples) 1.3 Forward Market Contracting (Motivation and Examples) 1.4 Problems with Forward Markets 1.5 Futures Contracts as a Solution to Forward Market Problems (Motivation and Examples) 1.6 Futures Market Contracting 1.7 Mapping Out Spot, Forward, and Futures Prices 1.7.1 Present and Future Spot Prices 1.7.2 Forward Prices 1.7.3 Futures Prices
CHAPTER 2 HEDGING WITH FORWARD CONTRACTS 2.1 Motivation for Hedging 2.2 Payoff to a Long Forward Position 2.3 Payoff to a Short Forward Position 2.4 Hedging with Forward Contracts 2.5 Profits to a Naked (Unhedged) Long Spot Position 2.6 Profits to a Fully Hedged Current Long Spot Position 2.7 Adding Profit Tables to Determine Profits from a Fully Hedged Position 2.8 Combining Charts to See Profits from the Hedged Position
CHAPTER 3 VALUATION OF FORWARD CONTRACTS ON ASSETS WITHOUT A DIVIDEND YIELD 3.1 Comparing the Payoffs from a Naked Long Spot Position to the Payoffs from a Naked Long Forward Position 3.2 Pricing Zero-Coupon, Unit Discount Bonds in Continuous Time 3.2.1 Continuous Compounding and Continuous Discounting 3.2.2 Pricing Zero-Coupon Bonds 3.3 Price vs.

11.7 Further Implications of European Put-Call Parity 11.7.1 Synthesizing Forward Contract from Puts and Calls 11.8 Financial Innovation using European Put-Call Parity 11.8.1 Generalized Forward Contracts 11.8.2 American Put-Call Parity (No Dividends) 11.9 Postscript on ROP
CHAPTER 12 OPTION TRADING STRATEGIES, PART 2 12.1 Generating Synthetic Option Strategies from European Put-Call Parity 12.2 The Covered Call Hedging Strategy 12.2.1 Three Types Of Covered Call Writes 12.2.2 Economic Interpretation of the Covered Call Strategy 12.3 The Protective Put Hedging Strategy 12.3.1 Puts as Insurance 12.3.2 Economic Interpretation of the Protective Put Strategy
CHAPTER 13 MODEL-BASED OPTION PRICING IN DISCRETE TIME, PART 1: THE BINOMIAL OPTION PRICING MODEL (BOPM, N=1) 13.1 The Objective of Model-Based Option Pricing (MBOP) 13.2 The Binomial Option Pricing Model, Basics 13.2.1 Modeling Time in a Discrete Time Framework 13.2.2 Modeling the Underlying Stock Price Uncertainty 13.3 The Binomial Option Pricing Model, Advanced 13.3.1 Path Structure of the Binomial Process, Total Number of Price Paths 13.3.2 Path Structure of the Binomial Process, Total Number of Price Paths Ending at a Specific Terminal Price 13.3.3 Summary of Stock Price Evolution for the N-Period Binomial Process 13.4 Option Valuation for the BOPM (N=1) 13.4.1 Step 1, Pricing the Option at Expiration 13.4.2 Step 2, Pricing the Option Currently (time t=0) 13.5 Modern Tools for Pricing Options 13.5.1 Tool 1, The Principle of No-Arbitrage 13.5.2 Tool 2, Complete Markets or Replicability, and a Rule of Thumb 13.5.3 Tool 3, Dynamic and Static Replication 13.5.4 Relationships between the Three Tools 13.6 Synthesizing a European Call Option 13.6.1 Step 1, Parameterization 13.6.2 Step 2, Defining the Hedge Ratio and the Dollar Bond Position 13.6.3 Step 3, Constructing the Replicating Portfolio 13.6.4 Step 4, Implications of Replication 13.7 Alternative Option Pricing Techniques 13.8 Appendix: Derivation of the BOPM (N=1) as a Risk-Neutral Valuation Relationship
CHAPTER 14 OPTION PRICING IN DISCRETE TIME, PART 2: DYNAMIC HEDGING AND THE MULTI-PERIOD BINOMIAL OPTION PRICING MODEL, N>1 14.1 Modeling Time and Uncertainty in the BOPM, N>1 14.1.1 Stock Price Behavior, N=2 14.1.2 Option Price Behavior, N=2 14.2 Hedging a European Call Option, N=2 14.2.1 Step 1, Parameterization 14.2.2 Step 2, Defining the Hedge Ratio and the Dollar Bond Position 14.2.3 Step 3, Constructing the Replicating Portfolio 14.2.4 The Complete Hedging Program for the BOPM, N=2 14.3 Implementation of the BOPM for N=2 14.4 The BOPM, N>1 as a RNVR Formula 14.5 Multi-period BOPM, N>1: A Path Integral Approach 14.5.1 Thinking of the BOPM in Terms of Paths 14.5.2 Proof of the BOPM Model for general N
CHAPTER 15 EQUIVALENT MARTINGALE MEASURES: A MODERN APPROACH TO OPTION PRICING 15.1 Primitive Arrow–Debreu Securities and Option Pricing 15.1.1 Exercise 1, Pricing B(0,1) 15.1.2 Exercise 2, Pricing ADu(ω) and ADd(ω) 15.2 Contingent Claim Pricing 15.2.1 Pricing a European Call Option 15.2.2 Pricing any Contingent Claim 15.3 Equivalent Martingale Measures (EMMs) 15.3.1 Introduction and Examples 15.3.2 Definition of a Discrete-Time Martingale 15.4 Martingales and Stock Prices 15.4.1 The Equivalent Martingale Representation of Stock Prices 15.5 The Equivalent Martingale Representation of Option Prices 15.5.1 Discounted Option Prices 15.5.2 Summary of the EMM Approach 15.6 The Efficient Market Hypothesis (EMH), A Guide To Modeling Prices 15.7 Appendix: Essential Martingale Properties
CHAPTER 16 OPTION PRICING IN CONTINUOUS TIME 16.1 Arithmetic Brownian Motion (ABM) 16.2 Shifted Arithmetic Brownian Motion 16.3 Pricing European Options under Shifted Arithmetic Brownian Motion with No Drift (Bachelier) 16.3.1 Theory (FTAP1 and FTAP2) 16.3.2 Transition Density Functions 16.3.3 Deriving the Bachelier Option Pricing Formula 16.4 Defining and Pricing a Standard Numeraire 16.5 Geometric Brownian Motion (GBM) 16.5.1 GBM (Discrete Version) 16.5.2 Geometric Brownian Motion (GBM), Continuous Version 16.6 Itô's Lemma 16.7 Black–Scholes Option Pricing 16.7.1 Reducing GBM to an ABM with Drift 16.7.2 Preliminaries on Generating Unknown Risk-Neutral Transition Density Functions from Known Ones 16.7.3 Black–Scholes Options Pricing from Bachelier 16.7.4 Volatility Estimation in the Black–Scholes Model 16.8 Non-Constant Volatility Models 16.8.1 Empirical Features of Volatility 16.8.2 Economic Reasons for why Volatility is not Constant, the Leverage Effect 16.8.3 Modeling Changing Volatility, the Deterministic Volatility Model 16.8.4 Modeling Changing Volatility, Stochastic Volatility Models 16.9 Why Black–Scholes is Still Important
CHAPTER 17 RISK-NEUTRAL VALUATION, EMMS, THE BOPM, AND BLACK–SCHOLES 17.1 Introduction 17.1.1 Preliminaries on FTAP1 and FTAP2 and Navigating the Terminology 17.1.2 Pricing by Arbitrage and the FTAP2 17.1.3 Risk-Neutral Valuation without Consensus and with Consensus 17.1.4 Risk-Neutral Valuation without Consensus, Pricing Contingent Claims with Unhedgeable Risks 17.1.5 Black–Scholes' Contribution 17.2 Formal Risk-Neutral Valuation without Replication 17.2.1 Constructing EMMs 17.2.2 Interpreting Formal Risk-Neutral Probabilities 17.3 MPRs and EMMs, Another Version of FTAP2 17.4 Complete Risk-Expected Return Analysis of the Riskless Hedge in the (BOPM, N=1) 17.4.1 Volatility of the Hedge Portfolio 17.4.2 Direct Calculation of σS 17.4.3 Direct Calculation of σC 17.4.4 Expected Return of the Hedge Portfolio 17.5 Analysis of the Relative Risks of the Hedge Portfolio's Return 17.5.1 An Initial Look at Risk Neutrality in the Hedge Portfolio 17.5.2 Role of the Risk Premia for a Risk-Averse Investor in the Hedge Portfolio 17.6 Option Valuation 17.6.1 Some Manipulations 17.6.2 Option Valuation Done Directly by a Risk-Averse Investor 17.6.3 Option Valuation for the Risk-Neutral Investor
Index
FIGURES 1.1 Canada/US Foreign Exchange Rate 1.2 Intermediation by the Clearing House 1.3 Offsetting Trades 1.4 Gold Fixing Price in London Bullion Market (USD$) 2.1 Graphical Method to Get Hedged Position Profits 2.2 Payoff Per Share to a Long Forward Contract 2.3 Payoff Per Share to a Short Forward Contract 2.4 Profits per bu. for the Unhedged Position 3.1 Profits Per Share to a Naked Long Spot Position 3.2 Payoffs Per Share to a Naked Long Spot Position 3.3 Payoffs (=Profits) Per Share to a Naked Long Forward Position 3.4 Payoffs Per Share to a Naked Long Spot Position and to a Naked Long Forward Position 5.1 Order Flow Process (Pit Trading) 5.2 The Futures Clearing House 5.3 Offsetting Trades 5.4 Overall Profits for Example 2 6.1 Long vs.

That is, the covered call writer is neither a 100% hedger nor a 100% speculator. He has a combined position that represents both hedging and speculating.

CHAPTER 13 MODEL-BASED OPTION PRICING IN DISCRETE TIME, PART 1: THE BINOMIAL OPTION PRICING MODEL (BOPM, N=1)

In this chapter we study the simplest model-based, yet still rational (arbitrage-free) option pricing model.
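The chapter's four-step replication program for the one-period binomial model (parameterize, find the hedge ratio and bond position, build the replicating portfolio, price by no-arbitrage) can be carried out in a few lines. A sketch — the values of u, d, r, X below are illustrative, not from the source:

```python
# One-period binomial option pricing (BOPM, N=1) by replication.
S0, u, d, r, X = 100.0, 1.2, 0.8, 0.05, 100.0   # illustrative parameters

Su, Sd = u * S0, d * S0                          # stock in the up/down state
Cu, Cd = max(Su - X, 0.0), max(Sd - X, 0.0)      # call payoffs at expiration

# Hedge ratio (shares of stock) and dollar bond position that replicate
# the call payoff in both states.
h = (Cu - Cd) / (Su - Sd)
B = (Cd - h * Sd) / (1 + r)                      # borrowing if B < 0

# By no-arbitrage the call must cost the same as the replicating portfolio.
C0 = h * S0 + B

# Equivalent risk-neutral valuation with q = (1 + r - d) / (u - d).
q = (1 + r - d) / (u - d)
C0_rnv = (q * Cu + (1 - q) * Cd) / (1 + r)       # same price as C0
```

Both routes give the same price, which is the point of the appendix's risk-neutral valuation derivation.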


pages: 447 words: 104,258

Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues by Alain Ruttiens

algorithmic trading, asset allocation, asset-backed security, backtesting, banking crisis, Black Swan, Black-Scholes formula, Brownian motion, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discounted cash flows, discrete time, diversification, fixed income, implied volatility, interest rate derivative, interest rate swap, margin call, market microstructure, martingale, p-value, passive investing, quantitative trading / quantitative finance, random walk, risk free rate, risk/return, Satyajit Das, Sharpe ratio, short selling, statistical model, stochastic process, stochastic volatility, time value of money, transaction costs, value at risk, volatility smile, Wiener process, yield curve, zero-coupon bond

Further on, the discount factors in continuous time become Dt = e^(−zc × t), (1.8) that is, the continuous time equivalent of Eq. 1.5 in discrete time. Coming back to the previous example of zd = 5%, t = 4 years, PV = 1, where Dt was 1/1.05^4 = 0.8227 in discrete time, corresponding to FV = 1.2155, we have now, with the same 5% as a zc rate, Dt = e^(−0.05×4) = 0.8187 and FV = e^(0.05×4) = 1.2214. But in fact we must take into account that if zd = 5% was a discrete rate, its corresponding continuous value is zc = ln(1 + zd) = ln 1.05 = 4.879%, giving Dt = e^(−0.04879×4) = 0.8227 and FV = e^(0.04879×4) = 1.2155, that is, the same results as in discrete time.

1.4 FORWARD RATES

Let's have the following set of spot rates z1, z2, …, zt, whatever the corresponding time periods t = 1, 2, …, t are (e.g., years), and define ft,t+1 as the forward zero-coupon rate between time t and time t + 1.

In the case of the process, [y(t)] actually follows a « unit normal distribution », noted N(0, 1), of mean E = 0 and variance V = 1 (hence a standard deviation STD = √V = 1 as well). In discrete time, Eq. 8.1 therefore means that the change of the process during Δt follows a Gaussian distribution with parameters E = 0 (because 0 × Δt = 0), STD = √Δt (because 1 × √Δt = √Δt) and V = Δt. Passing from discrete to continuous time, and thus from discrete time (or "finite") intervals Δt to infinitely short, « infinitesimal » or « instantaneous » time intervals noted dt, Eq. 8.1 becomes (8.2), called a standard Wiener process, or a Brownian process or Brownian motion.1 This process is also called (although improperly2) white noise, by analogy with the very light but permanent scratching behind a sound produced electronically.

To obtain this limit, let us use the classic algebraic formula defining the "e" number (= 2.71828…): e = lim_{x→∞} (1 + 1/x)^x. By making x = n/z and raising each side to the power z we get (1 + z/n)^n → e^z, and in Eq. 1.6, by making n → ∞, we get FV(n → ∞) = 100 e^z, giving FV(n → ∞) = 100 e^0.08 = 108.3287 … (not much more than FV(n = 365)). We therefore have the corresponding relationships for t = 1 year, where zc means the continuous (zero) rate while zd is a discrete (zero) rate. It results from the previous table that the relationship between zc and zd is zc = ln(1 + zd). (1.6bis) Note that one also speaks of continuous time versus discrete time to refer to continuous or discrete compounding. In practice, one shall consider that z without subscript means zd, and if there is a risk of confusion one must specify zd or zc. The correspondence may be generalized to t years, with zero-coupon rates zct and zdt respectively, as (1 + zdt)^t = e^(zct × t). (1.7) In particular, due to its very essence of implying an instantaneous compounding, the "continuous" formula no longer needs a different formulation according to whether t is less than or greater than 1 year (or 0.5 year), as does the "discrete" form.
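The discrete/continuous correspondence can be checked numerically, reusing the zd = 5%, t = 4 years example from the text:

```python
import math

zd, t, PV = 0.05, 4, 1.0

# Discrete compounding: FV = PV * (1 + zd)^t.
FV_d = PV * (1 + zd) ** t

# Equivalent continuous rate zc = ln(1 + zd), then FV = PV * e^(zc * t).
zc = math.log(1 + zd)
FV_c = PV * math.exp(zc * t)

print(round(FV_d, 4), round(FV_c, 4))   # both 1.2155
```

Using zc = 5% directly instead of ln(1.05) ≈ 4.879% would overstate the future value (1.2214 instead of 1.2155), which is the trap the text warns about.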


pages: 313 words: 34,042

Tools for Computational Finance by Rüdiger Seydel

bioinformatics, Black-Scholes formula, Brownian motion, commoditize, continuous integration, discrete time, implied volatility, incomplete markets, interest rate swap, linear programming, London Interbank Offered Rate, mandelbrot fractal, martingale, random walk, risk free rate, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process, zero-coupon bond

A Discrete Model

We begin by discretizing the continuous time t, replacing t by equidistant time instances ti. Let us use the notations:

M : number of time steps
Δt := T/M
ti := i · Δt, i = 0, …, M
Si := S(ti)

So far the domain of the (S, t) half strip is semi-discretized in that it is replaced by parallel straight lines with distance Δt apart, leading to a discrete-time model. The next step of discretization replaces the continuous values Si along the parallel lines t = ti by discrete values Sji, for all i and appropriate j. (Here the indices j, i in Sji mean a matrix-like notation.) For a better understanding of the S-discretization compare Figure 1.8. This figure shows a mesh of the grid, namely the transition from t to t + Δt, or from ti to ti+1.

The relation (1.21b) is also known as E((ΔWt)²) = Δt. (1.21c) The independence of the increments according to Definition 1.7(c) implies for tj+1 > tj the independence of Wtj and (Wtj+1 − Wtj), but not of Wtj+1 and (Wtj+1 − Wtj). Wiener processes are examples of martingales: there is no drift.

Discrete-Time Model

Let Δt > 0 be a constant time increment. For the discrete instances tj := jΔt the value Wt can be written as a sum of increments ΔWk:

WjΔt = Σ_{k=1}^{j} (WkΔt − W(k−1)Δt) =: Σ_{k=1}^{j} ΔWk.

The ΔWk are independent and, because of (1.21), normally distributed with Var(ΔWk) = Δt. Increments ΔW with such a distribution can be calculated from standard normally distributed random numbers Z.
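Generating the increments from standard normals Z via ΔW = √Δt · Z, and checking Var(ΔW) = Δt on a sample — step size and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2024)

dt, n = 0.001, 100_000

# Delta W_k = sqrt(dt) * Z_k with Z_k standard normal, so that
# Var(Delta W_k) = dt, matching (1.21c).
Z = rng.standard_normal(n)
dW = np.sqrt(dt) * Z

sample_var = dW.var()   # should be close to dt = 0.001
```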

Hence for Δt → 0 the normal distribution of the difference quotient disperses and no convergence can be expected.

1.6.2 Stochastic Integral

Let us suppose that the price development of an asset is described by a Wiener process Wt. Let b(t) be the number of units of the asset held in a portfolio at time t. We start with the simplifying assumption that trading is only possible at discrete time instances tj, which define a partition of the interval 0 ≤ t ≤ T. Then the trading strategy b is piecewise constant,

b(t) = b(tj−1) for tj−1 ≤ t < tj and 0 = t0 < t1 < … < tN = T. (1.23)

Such a function b(t) is called a step function. The trading gain for the subinterval tj−1 ≤ t < tj is given by b(tj−1)(Wtj − Wtj−1), and

Σ_{j=1}^{N} b(tj−1)(Wtj − Wtj−1) (1.24)

represents the trading gain over the time period 0 ≤ t ≤ T.
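The sum (1.24) is straightforward to compute on a simulated path. In this sketch the position rule is an arbitrary illustration; the essential point is that b(t_{j−1}) uses only information available at t_{j−1}:

```python
import numpy as np

rng = np.random.default_rng(5)

N, T = 250, 1.0
dt = T / N

# Wiener path on the partition 0 = t_0 < t_1 < ... < t_N = T.
dW = np.sqrt(dt) * rng.standard_normal(N)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Step-function strategy (1.23): b(t) = b(t_{j-1}) on [t_{j-1}, t_j).
# Illustrative rule: hold one unit whenever the path is at or below its start.
b = (W[:-1] <= 0.0).astype(float)

# Trading gain (1.24): sum_j b(t_{j-1}) * (W_{t_j} - W_{t_{j-1}}).
gain = float(np.sum(b * dW))
```

Refining the partition and passing to the limit is what leads to the Itô stochastic integral discussed in the text.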


pages: 153 words: 12,501

Mathematics for Economics and Finance by Michael Harrison, Patrick Waldron

Brownian motion, buy low sell high, capital asset pricing model, compound rate of return, discrete time, incomplete markets, law of one price, market clearing, Myron Scholes, Pareto efficiency, risk tolerance, riskless arbitrage, short selling, stochastic process

These distinctions were suppressed in the intervening sections but are considered again in this section and in Section 5.4 respectively. The multi-period model should probably be introduced at the end of Chapter 4 but could also be left until Chapter 7. For the moment this brief introduction is duplicated in both chapters. Discrete time multi-period investment problems serve as a stepping stone from the single period case to the continuous time case. The main point to be gotten across is the derivation of interest rates from equilibrium prices: spot rates, forward rates, term structure, etc. This is covered in one of the problems, which illustrates the link between prices and interest rates in a multiperiod model.

A stochastic process is a collection of random variables or random vectors indexed by time, e.g. {x̃t : t ∈ T } or just {x̃t } if the time interval is clear from the context. For the purposes of this part of the course, we will assume that the index set consists of just a finite number of times i.e. that we are dealing with discrete time stochastic processes. Then a stochastic process whose elements are N -dimensional random vectors is equivalent to an N |T |-dimensional random vector. The (joint) c.d.f. of a random vector or stochastic process is the natural extension of the one-dimensional concept. Random variables can be discrete, continuous or mixed.

Some counterexamples of both types are probably called for here, or maybe can be left as exercises. Extracts from my PhD thesis can be used to talk about signing the first three coefficients in the Taylor expansion, and to speculate about further extensions to higher moments.

5.9 The Kelly Strategy

In a multi-period, discrete time, investment framework, investors will be concerned with both growth (return) and security (risk). There will be a trade-off between the two, and investors will be concerned with finding the optimal trade-off. This, of course, depends on preferences, but some useful benchmarks exist. There are three ways of measuring growth: 1. the expected wealth at time t
Revised: December 2, 1998


pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu

algorithmic trading, asset allocation, automated trading system, backtesting, Bear Stearns, Black-Scholes formula, Brownian motion, business process, buy and hold, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk free rate, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

For example, Ding et al. (1993), De Lima and Crato (1994), and Breidt et al. (1998), among others, observed that the squared returns of market indexes have the long-memory property, which intuitively means that observations that are far apart are highly correlated. Harvey (1998) and Breidt et al. (1998) independently introduced a discrete time model under which the log-volatility is modeled as a fractional ARIMA(p,d,q) process, while at the same time Comte and Renault (1998) introduced a continuous-time long-range dependent volatility model. In this chapter, we consider the continuous-time long-memory stochastic volatility (LMSV) model by Comte and Renault (1998): if $X_t$ are the log-returns of the price process $S_t$ and $Y_t$ is the volatility process, then
$$
\begin{cases}
dX_t = \left(\mu - \dfrac{\sigma^2(Y_t)}{2}\right) dt + \sigma(Y_t)\, dW_t, \\[4pt]
dY_t = \alpha Y_t\, dt + \beta\, dB_t^H,
\end{cases} \tag{8.1}
$$
where $W_t$ is a standard Brownian motion and $B_t^H$ is a fractional Brownian motion with Hurst index $H \in (0, 1]$.

In the last section, we summarize our results.

CHAPTER 8 Estimation and calibration for LMSV

8.2 Statistical Inference Under the LMSV Model

The main goal of this section is to present the most popular methods for statistical inference under the LMSV model (Eq. 8.1). The model is in continuous time and the volatility process is not observed; we only have access to discrete time observations of historical stock prices. However, we assume that we are able to obtain high-frequency (intraday) data, for example, tick-by-tick observations.

8.2.1 LOG-PERIODOGRAM REGRESSION HURST PARAMETER ESTIMATOR

A common practice in the literature is the use of the absolute or log squared returns in order to estimate the long-memory parameter semiparametrically.

The estimator used in these cases is the well-known GPH estimator that was initially introduced by Geweke and Porter-Hudak (1983) and is based on the log-periodogram regression. The asymptotic behavior of the GPH estimator in the case of Gaussian observations has been studied by Robinson (1995a,b) and Hurvich, Deo and Brodsky (1998). However, the log squared returns in the discrete time LMSV model are not Gaussian and asymptotic properties of the estimator in this case have been established by Deo and Hurvich (2001). Our model is in continuous time, thus, we are going to discretize it first before applying the log-periodogram method. This is a common approach when we deal with continuous-time models and is also suggested by Comte and Renault (1998).
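The GPH regression is easy to sketch. The following is an illustrative implementation, not the chapter's own code: it regresses the log-periodogram on $-\log(4\sin^2(\lambda_j/2))$ over the first $m \approx \sqrt{n}$ Fourier frequencies, so the slope estimates the memory parameter $d$. Applied to white noise, which has no long memory, the estimate should be near zero.

```python
import numpy as np

def gph_estimate(x, m=None):
    """GPH log-periodogram estimate of the long-memory parameter d.

    Regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2)) over the
    first m Fourier frequencies; the OLS slope estimates d.
    """
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))               # a common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    # periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    X = -np.log(4 * np.sin(lam / 2) ** 2)
    Y = np.log(I)
    X_c = X - X.mean()
    return np.dot(X_c, Y - Y.mean()) / np.dot(X_c, X_c)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)         # no long memory: true d = 0
d_hat = gph_estimate(white)
```

The asymptotic standard error of the slope is of order $\pi/\sqrt{24m}$, so for $n = 4096$ the white-noise estimate should sit within a small neighborhood of zero.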


Monte Carlo Simulation and Finance by Don L. McLeish

algorithmic bias, Black-Scholes formula, Brownian motion, capital asset pricing model, compound rate of return, discrete time, distributed generation, finite state, frictionless, frictionless market, implied volatility, incomplete markets, invention of the printing press, martingale, p-value, random walk, risk free rate, Sharpe ratio, short selling, stochastic process, stochastic volatility, survivorship bias, the market place, transaction costs, value at risk, Wiener process, zero-coupon bond, zero-sum game

One of the simplest methods of simulating such a process is motivated through a crude interpretation of the above equation in terms of discrete time steps: a small increment Xt+h − Xt in the process is approximately normally distributed with mean given by a(Xt, t)h and variance given by σ2(Xt, t)h. We generate these increments sequentially, beginning with an assumed value for X0, and then add them to obtain an approximation to the value of the process at the discrete times t = 0, h, 2h, 3h, . . . . Between these discrete points, we can linearly interpolate the values. Approximating the process by assuming that the conditional distribution of Xt+h − Xt is N(a(Xt, t)h, σ2(Xt, t)h) is called Euler’s method, by analogy to a simple method of the same name for solving ordinary differential equations.
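A minimal sketch of this Euler scheme (illustrative only; the drift `a` and diffusion `sigma` are user-supplied placeholder functions, and the parameter values below are arbitrary):

```python
import numpy as np

def euler_path(a, sigma, x0, T, h, rng):
    """Simulate X on t = 0, h, 2h, ..., T by Euler's method:
    each increment X_{t+h} - X_t is drawn as N(a(x, t) h, sigma(x, t)^2 h)."""
    n = int(round(T / h))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        t = i * h
        x[i + 1] = (x[i] + a(x[i], t) * h
                    + sigma(x[i], t) * np.sqrt(h) * rng.standard_normal())
    return x

rng = np.random.default_rng(1)
# drift a(x, t) = 0.5 x and volatility sigma = 0: the scheme should
# track the deterministic solution x0 * exp(0.5 t)
path = euler_path(lambda x, t: 0.5 * x, lambda x, t: 0.0,
                  x0=1.0, T=1.0, h=0.001, rng=rng)
```

With the noise switched off, the terminal value should approximate e^0.5 up to the scheme's O(h) discretization error.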

These are the most common in operations research, and examples are simulations of processes such as networks or queues. Simulation models in which the process is characterized by a state, with changes only at discrete time points, are DES. In modeling an inventory system, for example, the arrival of a batch of raw materials can be considered as an event which precipitates a sudden change in the state of the system, followed by a demand some discrete time later, when the state of the system changes again. A system driven by differential equations in continuous time is not an example of a DES, because the changes occur continuously in time.

BASIC MONTE CARLO METHODS

For $l > 0$,
$$
\begin{aligned}
E[X^p I(X > l)] &= \frac{1}{\sigma\sqrt{2\pi}} \int_l^\infty x^{p-1} \exp\{-(\ln(x)-\mu)^2/2\sigma^2\}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{\ln(l)}^\infty e^{zp} \exp\{-(z-\mu)^2/2\sigma^2\}\, dz \\
&= e^{p\mu+p^2\sigma^2/2}\, \frac{1}{\sigma\sqrt{2\pi}} \int_{\ln(l)}^\infty \exp\{-(z-\xi)^2/2\sigma^2\}\, dz \quad \text{where } \xi = \mu + \sigma^2 p \\
&= e^{p\mu+p^2\sigma^2/2}\, \Phi\!\left(\frac{\xi - \ln(l)}{\sigma}\right) \\
&= \eta^p \exp\!\left\{-\frac{\sigma^2}{2}\, p(1-p)\right\} \Phi\!\left(\sigma^{-1}\ln(\eta/l) + \sigma\left(p - \tfrac{1}{2}\right)\right) \tag{3.11}
\end{aligned}
$$
where $\Phi$ is the standard normal cumulative distribution function.

Application: A Discrete Time Black-Scholes Model. Suppose that a stock price $S_t$, $t = 1, 2, 3, \ldots$ is generated from an independent sequence of returns $Z_1, Z_2, \ldots$ over non-overlapping time intervals. If the value of the stock at the end of day $t = 0$ is $S_0$, and the return on day 1 is $Z_1$, then the value of the stock at the end of day 1 is $S_1 = S_0 e^{Z_1}$.
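As an illustrative check (ours, not the text's), the closed form (3.11) can be compared against a Monte Carlo average of $X^p I(X > l)$ for lognormal $X$, writing $\eta = E[X] = e^{\mu+\sigma^2/2}$; the parameter values below are arbitrary.

```python
import numpy as np
from math import erf, exp, log, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def partial_moment(mu, sigma, p, l):
    """Closed form (3.11) for E[X^p I(X > l)], X lognormal(mu, sigma^2)."""
    eta = exp(mu + sigma ** 2 / 2)        # eta = E[X]
    return (eta ** p * exp(-0.5 * sigma ** 2 * p * (1 - p))
            * Phi(log(eta / l) / sigma + sigma * (p - 0.5)))

mu, sigma, p, l = 0.0, 0.3, 1.0, 1.0
rng = np.random.default_rng(2)
x = np.exp(mu + sigma * rng.standard_normal(1_000_000))
mc = np.mean(np.where(x > l, x ** p, 0.0))   # Monte Carlo estimate
analytic = partial_moment(mu, sigma, p, l)
```

With a million draws the Monte Carlo standard error is of order 3e-4, so the two values should agree to two or three decimal places.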


The Volatility Smile by Emanuel Derman,Michael B.Miller

Albert Einstein, Asian financial crisis, Benoit Mandelbrot, Brownian motion, capital asset pricing model, collateralized debt obligation, continuous integration, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, diversified portfolio, dividend-yielding stocks, Emanuel Derman, Eugene Fama: efficient market hypothesis, fixed income, implied volatility, incomplete markets, law of one price, London Whale, mandelbrot fractal, market bubble, market friction, Myron Scholes, prediction markets, quantitative trading / quantitative finance, risk tolerance, riskless arbitrage, Sharpe ratio, statistical arbitrage, stochastic process, stochastic volatility, transaction costs, volatility arbitrage, volatility smile, Wiener process, yield curve, zero-coupon bond

The Effect of Discrete Hedging on P&L

…(6.2), where the option C is valued and hedged at the realized volatility. If the option were to be hedged continuously, the value of the hedged portfolio would grow at the riskless rate. The hedging error accumulated over a discrete time dt, owing to the mismatch between a continuous hedge ratio and a discrete time step, is given by
$$
\begin{aligned}
HE_{dt} &= \pi + d\pi - \pi e^{r\,dt} \\
&\approx d\pi - r\pi\, dt \\
&\approx \frac{\partial C}{\partial t}\, dt + \frac{1}{2}\frac{\partial^2 C}{\partial S^2}\sigma^2 S^2 Z^2\, dt + \frac{\partial C}{\partial S}\, dS - \frac{\partial C}{\partial S}\, dS - r\left(C - S\frac{\partial C}{\partial S}\right) dt \\
&\approx \left[\frac{1}{2}\frac{\partial^2 C}{\partial S^2}\sigma^2 S^2 Z^2 - \left(r\left(C - S\frac{\partial C}{\partial S}\right) - \frac{\partial C}{\partial t}\right)\right] dt \tag{6.3}
\end{aligned}
$$
Now from the BSM equation (Chapter 5, Equation 5.12), the last term in the square brackets is given by
$$
r\left(C - S\frac{\partial C}{\partial S}\right) - \frac{\partial C}{\partial t} = \frac{1}{2}\frac{\partial^2 C}{\partial S^2}\sigma^2 S^2 \tag{6.4}
$$
Substituting into Equation 6.3, we obtain
$$
HE_{dt} \approx \frac{1}{2}\frac{\partial^2 C}{\partial S^2}\sigma^2 S^2 \left(Z^2 - 1\right) dt \tag{6.5}
$$
Because Z is a standard normal variable, we know that $E[Z^2] = 1$.
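A quick numerical illustration (ours, not the book's) of why Equation 6.5 makes the hedging error vanish on average: sampling Z ~ N(0, 1) confirms that the factor Z² − 1 has mean zero, while its variance of 2 is what leaves each individual step's error noisy.

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.standard_normal(1_000_000)

# E[Z^2] = 1, so the (Z^2 - 1) factor in Equation 6.5 has mean zero:
# the per-step hedging error is zero on average, but Var(Z^2 - 1) = 2
# means any single step's error is far from zero.
mean_err = np.mean(Z ** 2 - 1)
var_err = np.var(Z ** 2 - 1)
```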

At expiration, the payoff profile of this scaled portfolio will equal
$$
\pi\left(S_T, S_0, T, T\right) = \frac{2}{T}\left[\frac{S_T - S_0}{S_0} - \ln\left(\frac{S_T}{S_0}\right)\right] \tag{4.24}
$$
PROOF THAT THE FAIR VALUE OF A LOG CONTRACT WITH $S^* = S_0$ IS THE REALIZED FUTURE VARIANCE. How does continuously hedging a log contract actually produce a security whose value is the variance $\sigma^2$ of the stock? In this section we show in discrete time, step by step, how hedging a log contract replicates the variance of a stock. Consider a log contract with unknown value that pays out $\ln(S_T/S_0)$ at expiration T. Let its value today be denoted by $L_0$. For pedagogical simplicity, we will assume that the riskless rate and the dividend yield are zero.

Understanding the Hedging Error Analytically. We have demonstrated that when the hedging volatility is equal to the realized volatility, an increase in the hedging frequency results in more accurate replication of the option: four times as much hedging led to half the replication error. We now justify this relation analytically.¹ Assume that implied and realized volatility are identical, and suppose that over a discrete time step dt the price of a stock evolves according to
$$
\frac{dS}{S} = \mu\, dt + \sigma Z \sqrt{dt} \tag{6.1}
$$
where $Z \sim N(0, 1)$ is normally distributed with mean zero and standard deviation 1. The value of the instantaneously delta-hedged option portfolio is given by
$$
\pi = C - S\frac{\partial C}{\partial S}
$$
¹ This section benefited from unpublished work of Michael Kamal.


pages: 345 words: 86,394

Frequently Asked Questions in Quantitative Finance by Paul Wilmott

Albert Einstein, asset allocation, beat the dealer, Black-Scholes formula, Brownian motion, butterfly effect, buy and hold, capital asset pricing model, collateralized debt obligation, Credit Default Swap, credit default swaps / collateralized debt obligations, delta neutral, discrete time, diversified portfolio, Edward Thorp, Emanuel Derman, Eugene Fama: efficient market hypothesis, fixed income, fudge factor, implied volatility, incomplete markets, interest rate derivative, interest rate swap, iterative process, lateral thinking, London Interbank Offered Rate, Long Term Capital Management, Louis Bachelier, mandelbrot fractal, margin call, market bubble, martingale, Myron Scholes, Norbert Wiener, Paul Samuelson, quantitative trading / quantitative finance, random walk, regulatory arbitrage, risk free rate, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, transaction costs, urban planning, value at risk, volatility arbitrage, volatility smile, Wiener process, yield curve, zero-coupon bond

Volatility is a required input for all classical option-pricing models; it is also an input for many asset-allocation problems and risk estimation, such as Value at Risk. Therefore it is very important to have a method for forecasting future volatility. There is one slight problem with these econometric models, however. The econometrician develops his volatility models in discrete time, whereas the option-pricing quant would ideally like a continuous-time stochastic differential equation model. Fortunately, in many cases the discrete-time model can be reinterpreted as a continuous-time model (there is weak convergence as the time step gets smaller), and so both the econometrician and the quant are happy. Still, of course, the econometric models, being based on real stock price data, result in a model for the real and not the risk-neutral volatility process.

Figure 1-3: The branching structure of the binomial model.

1979–81 Harrison, Kreps, Pliska. Until these three came onto the scene, quantitative finance was the domain of either economists or applied mathematicians. Mike Harrison and David Kreps, in 1979, showed the relationship between option prices and advanced probability theory, originally in discrete time. Harrison and Stan Pliska in 1981 used the same ideas, but in continuous time. From that moment until the mid 1990s, applied mathematicians hardly got a look in. Theorem, proof, everywhere you looked. See Harrison and Kreps (1979) and Harrison and Pliska (1981).

1986 Ho and Lee. One of the problems with the Vasicek framework for interest rate derivative products was that it didn’t give very good prices for bonds, the simplest of fixed income products.

To get around this problem it is possible to independently model volatility, etc., as variables themselves. In such a way it is possible to build up a consistent theory.

Static hedging. There are quite a few problems with delta hedging, on both the practical and the theoretical side. In practice, hedging must be done at discrete times and is costly. Sometimes one has to buy or sell a prohibitively large number of the underlying in order to follow the theory. This is a problem with barrier options and options with discontinuous payoff. On the theoretical side, the model for the underlying is not perfect; at the very least, we do not know parameter values accurately.


Analysis of Financial Time Series by Ruey S. Tsay

Asian financial crisis, asset allocation, backpropagation, Bayesian statistics, Black-Scholes formula, Brownian motion, business cycle, capital asset pricing model, compound rate of return, correlation coefficient, data acquisition, discrete time, frictionless, frictionless market, implied volatility, index arbitrage, Long Term Capital Management, market microstructure, martingale, p-value, pattern recognition, random walk, risk free rate, risk tolerance, short selling, statistical model, stochastic process, stochastic volatility, telemarketer, transaction costs, value at risk, volatility smile, Wiener process, yield curve

The theory of stochastic processes is the basis on which the observed prices are analyzed and statistical inference is made. There are two types of stochastic process for modeling the price of an asset. The first type is called the discrete-time stochastic process, in which the price changes at discrete time points. All the processes discussed in the previous chapters belong to this category. For example, the daily closing price of IBM stock on the New York Stock Exchange forms a discrete-time stochastic process. Here the price changes only at the closing of a trading day. Price movements within a trading day are not necessarily relevant to the observed daily price.

The second type of stochastic process is the continuous-time process, in which the price changes continuously, even though the price is only observed at discrete time points. One can think of the price as the “true value” of the stock that always exists and is time varying. For both types of process, the price can be continuous or discrete. A continuous price can assume any positive real number, whereas a discrete price can only assume a countable number of possible values. Assume that the price of an asset is a continuous-time stochastic process. If the price is a continuous random variable, then we have a continuous-time continuous process.

If the price itself is discrete, then we have a continuous-time discrete process. Similar classifications apply to discrete-time processes. The series of price changes in Chapter 5 is an example of a discrete-time discrete process. In this chapter, we treat the price of an asset as a continuous-time continuous stochastic process. Our goal is to introduce the statistical theory and tools needed to model financial assets and to price options. We begin the chapter with some terminologies of stock options used in the chapter. In Section 6.2, we provide a brief introduction of Brownian motion, which is also known as a Wiener process.


pages: 206 words: 70,924

The Rise of the Quants: Marschak, Sharpe, Black, Scholes and Merton by Colin Read

"Robert Solow", Albert Einstein, Bayesian statistics, Bear Stearns, Black-Scholes formula, Bretton Woods, Brownian motion, business cycle, capital asset pricing model, collateralized debt obligation, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, David Ricardo: comparative advantage, discovery of penicillin, discrete time, Emanuel Derman, en.wikipedia.org, Eugene Fama: efficient market hypothesis, financial innovation, fixed income, floating exchange rates, full employment, Henri Poincaré, implied volatility, index fund, Isaac Newton, John Meriwether, John von Neumann, Joseph Schumpeter, Kenneth Arrow, Long Term Capital Management, Louis Bachelier, margin call, market clearing, martingale, means of production, moral hazard, Myron Scholes, Paul Samuelson, price stability, principal–agent problem, quantitative trading / quantitative finance, RAND corporation, random walk, risk free rate, risk tolerance, risk/return, Ronald Reagan, shareholder value, Sharpe ratio, short selling, stochastic process, Thales and the olive presses, Thales of Miletus, The Chicago School, the scientific method, too big to fail, transaction costs, tulip mania, Works Progress Administration, yield curve

He produced both a large quantity and a high average quality of papers within the MIT tradition of elegant and simple continuous-time stochastic models that often employed the representative agent approach. These models differentiated themselves from the less sophisticated discrete-time models in vogue elsewhere. Such a continuous-time approach depended crucially on solutions to differential equations, while the discrete-time analog took a far less elegant approach using algebra, some calculus, and seemingly myriad special cases. While more mathematically difficult, the continuous-time approach of the MIT School was much more elegant and powerful.

He wrote a prominent textbook, called Investments, now in its sixth edition, with Gordon Alexander and Jeffrey Bailey, and Fundamentals of Investments, also with Gordon Alexander and Jeffrey Bailey, and now in its third edition. Sharpe also began to study pensions in the post-CAPM portion of his career. In his research, he continued to look into ways in which theoretical concepts can be reduced to methodologies that can be applied by practitioners. For instance, he produced a discrete-time binomial option pricing procedure that offered a readily applicable method for Black-Scholes securities pricing, which will be covered in the next part of this book. He also developed the Sharpe ratio, a measure of the risk of a mutual or index fund versus its reward. Sharpe continued to work to make financial concepts more democratic and more accessible.

In his Nobel speech, he placed his work as resting somewhere between the simple, elegant, and insightful two period models that had been the hallmark of MIT and Cambridge research on the East Coast for almost two decades and the more abstract general equilibrium models, produced on the other coast, that were the hallmark of Berkeley at the University of California. Merton’s innovation, and his research signature, is the powerful tool of continuous-time modeling. As opposed to all the special cases and indeterminate solutions that discrete-time models provide once even two or three time intervals are connected, the continuous-time models often provided surprisingly simple and analytic solutions, despite the more sophisticated tools necessary to solve them. Because of the simplicity of the solutions and the relatively small numbers of variables he chose to include, his models were also more amenable to econometric verification.


pages: 209 words: 13,138

Empirical Market Microstructure: The Institutions, Economics and Econometrics of Securities Trading by Joel Hasbrouck

Alvin Roth, barriers to entry, business cycle, conceptual framework, correlation coefficient, discrete time, disintermediation, distributed generation, experimental economics, financial intermediation, index arbitrage, information asymmetry, interest rate swap, inventory management, market clearing, market design, market friction, market microstructure, martingale, price discovery process, price discrimination, quantitative trading / quantitative finance, random walk, Richard Thaler, second-price auction, selection bias, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, two-sided market, ultimatum game, zero-sum game

Most microstructure series consist of discrete events randomly arranged in continuous time. Within the time-series taxonomy, they are formally classified as point processes. Point process characterizations are becoming increasingly important, but for many purposes it suffices to treat observations as continuous variables realized at regular discrete times. Microstructure data are often well ordered. The sequence of observations in the data set closely corresponds to the sequence in which the economic events actually happened. In contrast, most macroeconomic data are time-aggregated. This gives rise to simultaneity and uncertainty about the directions of causal effects.

Although this model could be estimated by maximum likelihood, actual applications are based on a modified version, described in the next section. 6.2 Event Uncertainty and Poisson Arrivals This model is a variation of the sequential trade model with event uncertainty (section 5.4.4). The principal difference is that agents are not sequentially drawn in discrete time but arrive randomly in continuous time. These events are modeled as a Poisson arrival process. Specifically, suppose that the traders arrive randomly in time such that the probability of an arrival in a time interval of length t is λt where λ is the arrival intensity per unit time, and the probability of two traders arriving in the same interval goes to zero in the limit as t → 0.
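A small simulation sketch of such Poisson arrivals (illustrative; the function name is ours): generating exponential inter-arrival times with rate λ, the number of arrivals in [0, T] should have mean, and also variance, close to λT.

```python
import numpy as np

def count_arrivals(lam, T, rng):
    """Count arrivals in [0, T] when inter-arrival times are
    exponential with rate lam (a Poisson arrival process)."""
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)
        if t > T:
            return n
        n += 1

rng = np.random.default_rng(4)
lam, T = 2.0, 10.0
counts = np.array([count_arrivals(lam, T, rng) for _ in range(5000)])
# counts should be Poisson distributed with mean (and variance) lam * T = 20
```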

Corrections to deal with this bias have been implemented by Madhavan and Cheng (1997) and Bessembinder (2004). 15 Prospective Trading Costs and Execution Strategies In this chapter we discuss minimization of expected implementation cost in two stylized dynamic trading problems. Both analyses are set in discrete time, and in each instance a trader must achieve a purchase by a deadline. The first problem concerns long-term order splitting and timing. A large quantity is to be purchased over a horizon that spans multiple days. Strategic choice involves quantities to be sent to the market at each time, but order choice is not modeled.


High-Frequency Trading by David Easley, Marcos López de Prado, Maureen O'Hara

algorithmic trading, asset allocation, backtesting, Bear Stearns, Brownian motion, capital asset pricing model, computer vision, continuous double auction, dark matter, discrete time, finite state, fixed income, Flash crash, High speed trading, index arbitrage, information asymmetry, interest rate swap, latency arbitrage, margin call, market design, market fragmentation, market fundamentalism, market microstructure, martingale, natural language processing, offshore financial centre, pattern recognition, price discovery process, price discrimination, price stability, quantitative trading / quantitative finance, random walk, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, Tobin tax, transaction costs, two-sided market, yield curve

is disturbed it is likely to re-establish itself, and that provides some amount of short-term price predictivity. The most flexible approach to understanding cointegration is based on the principal components construction of Shintani (2001) and Chigira (2008). In contrast to the traditional approach of Johansen (1991), it does not require estimation of a discrete-time vector autoregressive model; it is extremely flexible and robust for real-time continuous market data. The construction is illustrated in Figures 3.5–3.7. In Figure 3.5 we have extracted two of the price series shown in Figure 3.4, in order to display them against each other on the page. In reality, we would do the analysis on the full n-dimensional price series.

The agent-based model has been shown to be able to reproduce several empirical features of the high-frequency dynamics of the market microstructure: negative autocorrelation in returns, clustering of trading activity (volatility, traded volume and bid–ask spread), non-linear response of the price change to the traded volume, as well as the average shape of the order book and volume imbalances. We shall briefly present the model; for the details we refer the reader to Bartolozzi (2010). The market model evolves in discrete time steps, during which agents may undertake a certain action or just wait for a more profitable opportunity, ie, cancellation or active trading, the latter including both limit and market orders. All decision steps are based on dynamical probabilities, which are functions of private and public information.

We thank Alex Kulesza warmly for his collaboration on the research described in the section on “Predicting Price Movement from the Order Book State”, and Frank Corrao for his valuable help on many of our collaborative projects.

1. Various types of hidden, iceberg and other order types can limit the complete reconstruction, but do not alter the fundamental picture we describe here.
2. A fair estimate would be that over 90% of placed orders are cancelled.
3. For simplicity, we shall assume a discrete-time model in which time is divided into a finite number of equally spaced trading opportunities. It is straightforward conceptually to generalise to continuous-time models.
4. The case of selling is symmetric.
5. VWAP denotes volume-weighted average price, which refers both to the benchmark of trading shares at the market average per share over a specified time period, and to algorithms which attempt to achieve or approximate this benchmark.
6. In principle, we might compare our learning approach to state-of-the-art execution algorithms, such as the aforementioned VWAP algorithms used by major brokerages.


Risk Management in Trading by Davis Edwards

asset allocation, asset-backed security, backtesting, Bear Stearns, Black-Scholes formula, Brownian motion, business cycle, computerized trading, correlation coefficient, Credit Default Swap, discrete time, diversified portfolio, fixed income, implied volatility, intangible asset, interest rate swap, iterative process, John Meriwether, London Whale, Long Term Capital Management, margin call, Myron Scholes, Nick Leeson, p-value, paper trading, pattern recognition, random walk, risk free rate, risk tolerance, risk/return, selection bias, shareholder value, Sharpe ratio, short selling, statistical arbitrage, statistical model, stochastic process, systematic trading, time value of money, transaction costs, value at risk, Wiener process, zero-coupon bond

The constant drift term is due to risk-free inflation (described later in the chapter in the “time value of money” discussion). Continuous time versions of this process are called the Generalized Wiener Process or the Ito Process. (See Equation 3.8, A Stochastic Process.) A stochastic process with discrete time steps can be described as:
$$
\frac{\Delta S_t}{S_t} = \mu\,\Delta t + \sigma\,\Delta W_t \qquad\text{or}\qquad \Delta S_t = \mu S_t\,\Delta t + \sigma S_t\,\Delta W_t \tag{3.8}
$$
where
ΔS_t — Change in Price. The change in price that will occur
S_t — Price. The price of an asset at time t
μ — Drift. The drift term that pushes prices upwards. Commonly, this is a constant, but can be generalized to vary over time
Δt — Change in Time.

With continuous time steps, this is called the Ornstein–Uhlenbeck process. In mathematical terms, the stock price formula is modified by altering the drift term to incorporate a term that pulls prices back to the mean. (See Equation 3.9, A Mean-Reverting Process.) A mean-reverting process in discrete time steps can be described as:
$$
\Delta S_t = \lambda(\mu - S_t)\,\Delta t + \sigma S_t\,\Delta W_t \tag{3.9}
$$
where
ΔS_t — Change in Price. The change in the price of the asset
S_t — Price. The price of an asset at time t
λ — Reversion Speed. Higher values for λ will cause the series to revert to the mean more quickly
μ — Long Term Mean/Equilibrium Price.
(Δt, σ, and ΔW_t ∼ N(0, 1) are as in Equation 3.8.)
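Equation 3.9 can be simulated step by step. The sketch below is illustrative (parameter values are arbitrary); setting σ = 0 switches off the noise, so the path simply decays toward the long-term mean μ.

```python
import numpy as np

def mean_reverting_path(s0, lam, mu, sigma, dt, n_steps, rng):
    """Discrete-time mean-reverting process (Equation 3.9):
    dS = lam * (mu - S) * dt + sigma * S * dW."""
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        dw = rng.standard_normal() * np.sqrt(dt)
        s[i + 1] = s[i] + lam * (mu - s[i]) * dt + sigma * s[i] * dw
    return s

rng = np.random.default_rng(5)
# with sigma = 0 the path decays deterministically toward mu = 50
path = mean_reverting_path(s0=10.0, lam=0.5, mu=50.0, sigma=0.0,
                           dt=0.1, n_steps=500, rng=rng)
```

Each step shrinks the distance to μ by the factor (1 − λΔt), so after 500 steps the path has essentially reached the equilibrium price.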

If the curve (a dependent variable or function) is named y and the independent variable is named x, then:
■ dy/dx is the derivative of y with respect to x
■ dy is an infinitesimally small change in y
■ dx is an infinitesimally small change in x
To avoid confusion, the standard convention in mathematical finance is never to use the letter d as a variable name.

…with respect to the input. If the function is called y and the independent variable x, it would be called the “derivative of y with respect to x.” In a series with discrete time steps, the slope of the line would be represented by the formula slope = Δy/Δx. In other words, the slope at a specific point could be estimated by examining how much the dependent (y) value changes with a change in the independent (x) value. With a continuous line, the terminology changes a bit.
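As a tiny illustration of estimating a derivative from discrete changes (our example, not the book's): for y = x², the exact derivative is 2x, and the discrete slope Δy/Δx recovers it once Δx is small.

```python
def slope(f, x, dx=1e-6):
    """Estimate dy/dx at x from discrete changes: slope = dy / dx."""
    return (f(x + dx) - f(x)) / dx

# for y = x^2 the derivative is 2x, so the slope at x = 3 is about 6
approx = slope(lambda x: x * x, 3.0)
```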


pages: 524 words: 120,182

Complexity: A Guided Tour by Melanie Mitchell

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Albert Michelson, Alfred Russel Wallace, anti-communist, Arthur Eddington, Benoit Mandelbrot, bioinformatics, cellular automata, Claude Shannon: information theory, clockwork universe, complexity theory, computer age, conceptual framework, Conway's Game of Life, dark matter, discrete time, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, From Mathematics to the Technologies of Life and Death, Garrett Hardin, Geoffrey West, Santa Fe Institute, Gödel, Escher, Bach, Henri Poincaré, invisible hand, Isaac Newton, John Conway, John von Neumann, Long Term Capital Management, mandelbrot fractal, market bubble, Menlo Park, Murray Gell-Mann, Network effects, Norbert Wiener, Norman Macrae, Paul Erdős, peer-to-peer, phenotype, Pierre-Simon Laplace, Ray Kurzweil, reversible computing, scientific worldview, stem cell, The Wealth of Nations by Adam Smith, Thomas Malthus, Tragedy of the Commons, Turing machine

With his postdoctoral mentor Robert May (whom I mentioned in chapter 2 in the context of the logistic map), Nowak performed computer simulations in which the players were placed in a two-dimensional array, and each player played only with its nearest neighbors. This is illustrated in figure 14.4, which shows a five by five grid with one player at each site (Nowak and May’s arrays were considerably larger). Each player has the simplest of strategies—it has no memory of previous turns; it either always cooperates or always defects. The model runs in discrete time steps. At each time step, each player plays a single Prisoner’s Dilemma game against each of its eight nearest neighbors (like a cellular automaton, the grid wraps around at the edges) and its eight resulting scores are summed. This is followed by a selection step in which each player is replaced by the highest scoring player in its neighborhood (possibly itself); no mutation is done.
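The update rule just described can be sketched as follows. This is an illustrative implementation: the Prisoner's Dilemma payoffs R = 1, T = b > 1, S = P = 0 are the standard Nowak–May choice but are an assumption here, since the excerpt does not state them. Starting from a lone defector in a sea of cooperators, one selection step grows the defector into a 3×3 block.

```python
import numpy as np

# grid entries: 1 = always cooperate, 0 = always defect
OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def step(grid, b):
    """One Nowak-May time step on a wrapping grid: each player plays
    its 8 neighbors, then copies the highest-scoring neighbor
    (keeping its own strategy on ties)."""
    n = grid.shape[0]
    scores = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for di, dj in OFFS:
                other = grid[(i + di) % n, (j + dj) % n]
                if grid[i, j] == 1 and other == 1:
                    s += 1.0          # C meets C: reward R = 1
                elif grid[i, j] == 0 and other == 1:
                    s += b            # D exploits C: temptation T = b
                # C meets D (S = 0) and D meets D (P = 0) add nothing
            scores[i, j] = s
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            best_s, best = scores[i, j], grid[i, j]
            for di, dj in OFFS:
                ii, jj = (i + di) % n, (j + dj) % n
                if scores[ii, jj] > best_s:
                    best_s, best = scores[ii, jj], grid[ii, jj]
            new[i, j] = best
    return new

grid = np.ones((5, 5), dtype=int)
grid[2, 2] = 0                        # a single defector
nxt = step(grid, b=1.85)              # defection spreads to a 3x3 block
```

The lone defector earns 8b ≈ 14.8 against its eight cooperating neighbors, who earn only 7 each, so all eight copy it; cooperators two cells away see no score above their own 8 and stay cooperative.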

RANDOM BOOLEAN NETWORKS Kauffman was perhaps the first person to invent and study simplified computer models of genetic regulatory networks. His model was a structure called a Random Boolean Network (RBN), which is an extension of cellular automata. Like any network, an RBN consists of a set of nodes and links between the nodes. Like a cellular automaton, an RBN updates its nodes’ states in discrete time steps. At each time step each node can be in either state on or state off. FIGURE 18.2. (a) A random Boolean network with five nodes. The in-degree (K) of each node is equal to 2. At time step 0, each node is in a random initial state: on (black) or off (white). (b) Time step 1 shows the network after each node has updated its state.
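A minimal RBN sketch (ours, not Kauffman's code): five nodes of in-degree K = 2, each wired to random inputs and given a random Boolean truth table, updated synchronously. Because there are only 2⁵ = 32 possible global states, any trajectory must eventually revisit a state, i.e. fall onto an attractor.

```python
import random

random.seed(7)
N, K = 5, 2  # five nodes, in-degree K = 2, as in Figure 18.2

# each node reads two distinct input nodes and owns a random Boolean
# update rule: a truth table over the inputs' on/off states
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{(a, b): random.randint(0, 1)
           for a in (0, 1) for b in (0, 1)} for _ in range(N)]

def update(state):
    """Synchronously update all nodes in one discrete time step."""
    return tuple(tables[i][(state[inputs[i][0]], state[inputs[i][1]])]
                 for i in range(N))

# iterate from a random initial state until a state repeats: with only
# 2^5 = 32 possible states, the trajectory must enter an attractor
state = tuple(random.randint(0, 1) for _ in range(N))
seen = []
while state not in seen:
    seen.append(state)
    state = update(state)
```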

Kauffman himself admits that regarding RBNs as models of genetic regulatory networks requires many unrealistic assumptions: each node can be in only one of two states (whereas gene expression has different degrees of strength), each has an identical number of nodes that regulate it, and all nodes are updated in synchrony at discrete time steps. These simplifications may ignore important details of genetic activity. Most troublesome for his theory are the effects of “noise”—errors and other sources of nondeterministic behavior—that are inevitable in real-world complex systems, including genetic regulation. Biological genetic networks make errors all the time, yet they are resilient—most often our health is not affected by these errors.


pages: 205 words: 20,452

Data Mining in Time Series Databases by Mark Last, Abraham Kandel, Horst Bunke

backpropagation, call centre, computer vision, discrete time, G4S, information retrieval, iterative process, NP-complete, p-value, pattern recognition, random walk, sensor fusion, speech recognition, web application

Non-Stationary Time Series Analysis by Temporal Clustering. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 30(2), 339–343.
54. Pratt, K.B. and Fink, E. (2002). Search for Patterns in Compressed Time Series. International Journal of Image and Graphics, 2(1), 89–106.
55. Pratt, K.B. (2001). Locating patterns in discrete time series. Master’s thesis, Computer Science and Engineering, University of South Florida.
56. Qu, Y., Wang, C., and Wang, X.S. (1998). Supporting Fast Search in Time Series for Movement Patterns in Multiple Scales. Proceedings of the Seventh International Conference on Information and Knowledge Management, pp. 251–258.
57.

The LCSS model matches exact values; in our model, however, we want to allow more flexible matching between two sequences, where the values are within a certain range. Moreover, in certain applications, the stretching provided by the LCSS algorithm needs to be within a certain range, too. We assume that the measurements of the time series are taken at fixed, discrete time intervals. If this is not the case, then we can use interpolation [23,34]. Definition 3. Given an integer δ and a real positive number ε, we define LCSSδ,ε(A, B), for sequences A of length n and B of length m, as follows:

LCSSδ,ε(A, B) =
  0                                              if A or B is empty,
  1 + LCSSδ,ε(Head(A), Head(B))                  if |a_n − b_m| < ε and |n − m| ≤ δ,
  max(LCSSδ,ε(Head(A), B), LCSSδ,ε(A, Head(B)))  otherwise.
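The recursive definition translates directly into a memoized implementation. A sketch, with Head taken as the sequence minus its last element:

```python
from functools import lru_cache

def lcss(A, B, delta, eps):
    """LCSS_{delta,eps}(A, B): longest common subsequence in which matched
    values differ by less than eps and matched positions by at most delta."""
    @lru_cache(maxsize=None)
    def rec(n, m):
        # n, m are the lengths of the prefixes of A and B under consideration.
        if n == 0 or m == 0:
            return 0
        if abs(A[n - 1] - B[m - 1]) < eps and abs(n - m) <= delta:
            return 1 + rec(n - 1, m - 1)
        return max(rec(n - 1, m), rec(n, m - 1))
    return rec(len(A), len(B))
```

The memoization gives the usual O(n·m) dynamic-programming cost rather than the exponential cost of the naive recursion.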


pages: 416 words: 39,022

Asset and Risk Management: Risk Oriented Finance by Louis Esch, Robert Kieffer, Thierry Lopez

asset allocation, Brownian motion, business continuity plan, business process, capital asset pricing model, computer age, corporate governance, discrete time, diversified portfolio, fixed income, implied volatility, index fund, interest rate derivative, iterative process, P = NP, p-value, random walk, risk free rate, risk/return, shareholder value, statistical model, stochastic process, transaction costs, value at risk, Wiener process, yield curve, zero-coupon bond

., Term structure movement and pricing interest rate contingent claims, Journal of Finance, Vol. 41, No. 5, 1986, pp. 1011–29. 19 Heath D., Jarrow R. and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New Methodology, Cornell University, 1987. Heath D., Jarrow R. and Morton A., Bond pricing and the term structure of interest rates: discrete time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40.

• the absence of arbitrage opportunity;
• hypotheses relating to stochastic processes that govern the evolutions in the state variables v1, v2, etc.

The commonest of the models with just one state variable are the Merton model,20 the Vasicek model21 and the Cox, Ingersoll and Ross model;22 all of these use the instantaneous rate r(t) as the state variable.

The ARMA model is a classical model; it assumes that the volumes observed are produced by a stable random process, that is, one whose statistical properties do not change over the course of time. The moments of the process (that is, its mathematical expectation and variance) are independent of time, and the variables follow a Gaussian distribution. The variance must also be finite. Volumes are observed at equidistant moments (the case of a process in discrete time). We will take as an example the floating-demand savings accounts in LUF/BEF from 1996 to 1999, observed monthly (data on CD-ROM). The form given to the model is that of the recurrence system

Volum_t = a_0 + Σ_{i=1}^{p} a_i · Volum_{t−i} + ε_t

where a_0 + a_1 Volum_{t−1} + … + a_p Volum_{t−p} represents the autoregressive part that is ideally or perfectly adjusted to the chronological series, thus being devoid of uncertainty, and ε_t is a moving average process:

ε_t = Σ_{i=0}^{q} b_i u_{t−i}

The u_{t−i} values constitute ‘white noise’ (non-autocorrelated, centred normal random variables with mean 0 and standard deviation equal to 1). ε_t is therefore a centred random variable with constant variance.
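The recurrence can be simulated directly. A sketch, with illustrative coefficients, zero initial values, and a fixed seed (all assumptions of the sketch, not the book's data):

```python
import random

def simulate_arma(a, b, n, seed=0):
    """Simulate X_t = a[0] + sum_{i=1}^{p} a[i]*X_{t-i} + eps_t,
    with eps_t = sum_{j=0}^{q} b[j]*u_{t-j} and u_t i.i.d. N(0, 1) white noise.
    a has p+1 entries (constant plus AR coefficients), b has q+1 MA weights."""
    rng = random.Random(seed)
    p, q = len(a) - 1, len(b) - 1
    u = [rng.gauss(0.0, 1.0) for _ in range(n + q)]   # white-noise shocks
    x = [0.0] * p                                     # zero initial values
    for t in range(n):
        eps = sum(b[j] * u[t + q - j] for j in range(q + 1))
        x.append(a[0] + sum(a[i] * x[-i] for i in range(1, p + 1)) + eps)
    return x[p:]
```

With b = [0] the noise vanishes and the pure autoregressive recursion X_t = a_0 + a_1 X_{t−1} converges to a_0/(1 − a_1), which is a quick sanity check on the implementation.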

., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000. Bibliography 385 Heath D., Jarrow R., and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New Methodology, Cornell University, 1987. Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: discrete time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40. Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of Finance, Vol. 41, No. 5, 1986, pp. 1011–29. Macauley F., Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields and Stock Prices in the United States since 1856, New York, National Bureau of Economic Research, 1938, pp. 44–53.


pages: 721 words: 197,134

Data Mining: Concepts, Models, Methods, and Algorithms by Mehmed Kantardzić

Albert Einstein, algorithmic bias, backpropagation, bioinformatics, business cycle, business intelligence, business process, butter production in bangladesh, combinatorial explosion, computer vision, conceptual framework, correlation coefficient, correlation does not imply causation, data acquisition, discrete time, El Camino Real, fault tolerance, finite state, Gini coefficient, information retrieval, Internet Archive, inventory management, iterative process, knowledge worker, linked data, loose coupling, Menlo Park, natural language processing, Netflix Prize, NP-complete, PageRank, pattern recognition, peer-to-peer, phenotype, random walk, RFID, semantic web, speech recognition, statistical model, Telecommunications Act of 1996, telemarketer, text mining, traveling salesman, web application

Working with series of data in the time domain, frequency domain, or other domains, we may use an ANN as a filter to perform three basic information-processing tasks: 1. Filtering. This task refers to the extraction of information about a particular quantity at discrete time n by using data measured up to and including time n. 2. Smoothing. This task differs from filtering in that data need not be available only at time n; data measured later than time n can also be used to obtain the required information. This means that in smoothing there is a delay in producing the result at discrete time n. 3. Prediction. The task of prediction is to forecast data in the future. The aim is to derive information about what the quantity of interest will be like at some time n + n0 in the future, for n0 > 0, by using data measured up to and including time n.

Another factor to be considered in the learning process is the manner in which the ANN architecture (nodes and connections) is built. To illustrate one of the learning rules, consider the simple case of a neuron k, shown in Figure 7.1, constituting the only computational node of the network. Neuron k is driven by input vector X(n), where n denotes discrete time or, more precisely, the time step of the iterative process involved in adjusting the input weights wki. Every data sample for ANN training (learning) consists of the input vector X(n) = {xk1, xk2, …, xkm} and the corresponding desired output dk. Processing the input vector X(n), neuron k produces an output denoted by yk(n). It represents the only output of this simple network, and it is compared with the desired response or target output dk(n) given in the sample.
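The comparison of yk(n) with dk(n) drives a weight correction at each time step. A minimal sketch of one such error-correction rule (the delta rule for a single linear neuron); the learning rate, epoch count, and function name are illustrative assumptions:

```python
def train_neuron(samples, lr=0.1, epochs=100):
    """Error-correction (delta rule) learning for one linear neuron:
    y(n) = sum_i w_i * x_i(n); each weight moves in proportion to the
    error e(n) = d(n) - y(n) and to its own input.
    samples is a list of (input_vector, desired_output) pairs."""
    m = len(samples[0][0])
    w = [0.0] * m                       # start from zero weights
    for _ in range(epochs):
        for x, d in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            e = d - y                   # compare output with target
            w = [wi + lr * e * xi for wi, xi in zip(w, x)]
    return w
```

On orthogonal inputs each weight converges geometrically to the corresponding target, illustrating how repeated comparison of output and target drives the weights wki toward a solution.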

The original sequence is then represented as a concatenation of these symbols, which is known as a “word.” For example, the mapping from PAA (C′) to a word C″ is represented as C″ = (bcccccbaaaaabbccccbb). The main advantage of the SAX method is that 100 different discrete numerical values in an initial discrete time series C are first reduced to 20 different (average) values using PAA, and these are then transformed into only three different categorical values using SAX. The proposed approach is an intuitive and simple, yet powerful, methodology for the simplified representation of a large number of different values in a time series.
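The PAA-then-SAX pipeline can be sketched as follows. The breakpoints here are for a three-letter alphabet, chosen so that for standard-normal data each symbol is roughly equiprobable; the specific breakpoint values and function names are assumptions of the sketch:

```python
import statistics

def paa(series, segments):
    """Piecewise Aggregate Approximation: replace each equal-length
    segment of the series by its mean."""
    n = len(series)
    return [statistics.fmean(series[i * n // segments:(i + 1) * n // segments])
            for i in range(segments)]

def sax(series, segments, breakpoints=(-0.43, 0.43), alphabet="abc"):
    """Map PAA means of the z-normalized series to symbols: a mean below
    the first breakpoint becomes 'a', between them 'b', above 'c'."""
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series)
    norm = [(x - mu) / sd for x in series]      # z-normalize first
    word = ""
    for m in paa(norm, segments):
        word += alphabet[sum(m > bp for bp in breakpoints)]
    return word
```

For a steadily rising series the word starts in the low symbols and ends in the high ones, which is the intuition behind reading the word as a compressed shape of the series.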


Sam Friedman and Daniel Laurison by The Class Ceiling Why it Pays to be Privileged (2019, Policy Press)

affirmative action, Boris Johnson, discrete time, Donald Trump, Downton Abbey, equal pay for equal work, gender pay gap, gig economy, Gini coefficient, glass ceiling, Hyperloop, if you build it, they will come, income inequality, invisible hand, job satisfaction, knowledge economy, longitudinal study, meta-analysis, microaggression, nudge unit, old-boy network, performance metric, psychological pricing, school choice, Skype, starchitect, The Spirit Level, the strength of weak ties, unpaid internship, upwardly mobile

Occupational class remains, in our view, the best single proxy mobility analysis has at its disposal, as well as providing a measurement tool that can be easily and affordably adopted by organisations wishing to monitor the class origins of their staff, or by researchers – like us in this book – hoping to do so on their behalf. Nonetheless, if we want to take seriously the full range of ways that origins matter for destinations, we need an approach to class that goes beyond single-variable, discrete time-point analyses. We believe that the Bourdieusian-inflected approach we articulate in this chapter (and operationalise throughout this book) – which takes into account income as well as occupation, and uses qualitative methods to get at individual trajectory, 187 The Class Ceiling inherited and accumulated capital and occupationally specific notions of ‘merit’ – goes some way towards realising this.

To explain this we need to return to the issue of the standard mobility table. As we outlined, most quantitative mobility 191 The Class Ceiling research begins by inspecting such tables, comparing a person’s origin and destination class at two points in time. Yet to render class mobility in this way, as measured by one variable at each of two discrete time points, is clearly a simplification of much more complex trajectories.30 This problem can be understood in quite simple terms – imagine we are analysing two 45-year-old accountants, one who has remained in an entry-level role at a small firm and therefore earns £30,000 a year, and one who has reached partner at a large multinational firm, and so earns £250,000 a year.


pages: 247 words: 43,430

Think Complexity by Allen B. Downey

Benoit Mandelbrot, cellular automata, Conway's Game of Life, Craig Reynolds: boids flock, discrete time, en.wikipedia.org, Frank Gehry, Gini coefficient, Guggenheim Bilbao, Laplace demon, mandelbrot fractal, Occupy movement, Paul Erdős, peer-to-peer, Pierre-Simon Laplace, sorting algorithm, stochastic process, strong AI, Thomas Kuhn: the structure of scientific revolutions, Turing complete, Turing machine, Vilfredo Pareto, We are the 99%

The power spectral density is related to the Fourier transform by the following relation: PSD(f) = |H(f)|². Depending on the application, we may not care about the difference between f and −f. In that case, we would use the one-sided power spectral density. So far we have assumed that h is a continuous function, but often it is a series of values at discrete times. In that case, we can replace the continuous Fourier transform with the discrete Fourier transform (DFT). Suppose that we have N values hk with k in the range from 0 to N − 1. The DFT is written Hn, where n is an index related to frequency:

Hn = Σ_{k=0}^{N−1} hk e^{2πikn/N}    (9.1)

Each element of this sequence corresponds to a particular frequency.
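Equation 9.1 can be written out directly. The exponent's sign convention follows the formula as given; production code would of course use an FFT routine rather than this O(N²) sum:

```python
import cmath

def dft(h):
    """Discrete Fourier transform: H_n = sum_k h_k * exp(2*pi*i*k*n/N).
    Returns the N complex coefficients for an input of N samples."""
    N = len(h)
    return [sum(h[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N))
            for n in range(N)]
```

Two quick checks: a unit impulse has a flat spectrum (all Hn equal), and a constant signal concentrates everything in H0, the zero-frequency component.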


pages: 543 words: 153,550

Model Thinker: What You Need to Know to Make Data Work for You by Scott E. Page

"Robert Solow", Airbnb, Albert Einstein, Alfred Russel Wallace, algorithmic trading, Alvin Roth, assortative mating, Bernie Madoff, bitcoin, Black Swan, blockchain, business cycle, Capital in the Twenty-First Century by Thomas Piketty, Checklist Manifesto, computer age, corporate governance, correlation does not imply causation, cuban missile crisis, deliberate practice, discrete time, distributed ledger, en.wikipedia.org, Estimating the Reproducibility of Psychological Science, Everything should be made as simple as possible, experimental economics, first-price auction, Flash crash, Geoffrey West, Santa Fe Institute, germ theory of disease, Gini coefficient, High speed trading, impulse control, income inequality, Isaac Newton, John von Neumann, Kenneth Rogoff, knowledge economy, knowledge worker, Long Term Capital Management, loss aversion, low skilled workers, Mark Zuckerberg, market design, meta-analysis, money market fund, Nash equilibrium, natural language processing, Network effects, p-value, Pareto efficiency, pattern recognition, Paul Erdős, Paul Samuelson, phenotype, pre–internet, prisoner's dilemma, race to the bottom, random walk, randomized controlled trial, Richard Feynman, Richard Thaler, school choice, sealed-bid auction, second-price auction, selection bias, six sigma, social graph, spectrum auction, statistical model, Stephen Hawking, Supply of New York City Cabdrivers, The Bell Curve by Richard Herrnstein and Charles Murray, The Great Moderation, The Rise and Fall of American Growth, the rule of 72, the scientific method, The Spirit Level, the strength of weak ties, The Wisdom of Crowds, Thomas Malthus, Thorstein Veblen, Tragedy of the Commons, urban sprawl, value at risk, web application, winner-take-all economy, zero-sum game

First, if the transition function is not at an equilibrium, the value of the Lyapunov function falls by a fixed amount (more on that in a moment). And second, the Lyapunov function has a minimum value. If both assumptions hold, then the dynamical system must attain an equilibrium. Lyapunov Theorem: Given a discrete time dynamical system consisting of the transition rule xt+1 = G(xt), the real-valued function F(xt) is a Lyapunov function if F(xt) ≥ M for all xt and if there exists an A > 0 such that F(xt+1) ≤ F(xt) − A whenever xt+1 ≠ xt. If F is a Lyapunov function for G, then starting from any x0, there exists a t∗ such that G(xt∗) = xt∗, and the system attains an equilibrium in finite time.

We can then assign the following Shapley values for each player in each coalition: Coalition {1, 2}: Player 1: 5, Player 2: 5; Coalition {2, 3}: Player 2: 5, Player 3: 5; Coalition {1, 2, 3}: Player 1: −2, Player 2: −2, Player 3: −2. Summing these values produces Myerson values. Chapter 11: Broadcast, Diffusion, and Contagion 1 The models assume discrete time steps, like days or weeks, and use difference equations that describe the number of infected (or informed) people tomorrow as a function of the number infected (or informed) today. Continuous time models require differential equations and calculus. None of the results of our models would change qualitatively if we switched to continuous time. 2 Plugging the first equation into the second gives the following expression: 36,000 = 20,000 + 20,000 − Pbroad · 20,000, which reduces to 4,000 = Pbroad · 20,000, so Pbroad = 0.2 and NPOP = 100,000. 3 See Griliches 1988. 4 Initial sales, I, equal 100 for each app.


pages: 246 words: 16,997

Financial Modelling in Python by Shayne Fletcher, Christopher Gardner

Brownian motion, discrete time, functional programming, interest rate derivative, London Interbank Offered Rate, stochastic volatility, yield curve, zero day, zero-coupon bond

In particular, this means that we can re-use the fil component already discussed for the Monte-Carlo model. In the terminal measure there is only one state variable that needs to be evolved: namely, ∫₀ᵗ C(s) dW(s). In a typical application, the evolve step will be performed on a discrete set of contiguous times. In other words, suppose we have the discrete times {T1, T2, …, Ti}, with the time today denoted by T0; then the simulation of ∫₀^Ti C(s) dW(s) is carried out as the discrete sum shown below:

∫₀^Ti C(s) dW(s) = Σ_{k=0}^{i−1} sqrt( ∫_{Tk}^{Tk+1} C²(s) ds ) · Z_{k+1}    (8.2)

with Z1, Z2, … independent, identically distributed normal variates with distribution N(0, 1).
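The discrete sum in (8.2) is straightforward to simulate. In this sketch the variance integral over each interval is approximated with a midpoint rule, and the function names, the generic volatility function C, and the seeded generator are all assumptions of the sketch, not the book's actual components:

```python
import math
import random

def evolve(C, times, seed=0):
    """Simulate int_0^{T_i} C(s) dW(s) on a grid of discrete times:
    sum over k of sqrt(int_{T_k}^{T_{k+1}} C(s)^2 ds) * Z_{k+1},
    with Z_k i.i.d. N(0, 1). Returns the running path at each grid time."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for t0, t1 in zip(times, times[1:]):
        var = C((t0 + t1) / 2) ** 2 * (t1 - t0)   # midpoint rule for int C^2
        x += math.sqrt(var) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Because each increment draws its own independent normal variate scaled by the interval's standard deviation, the simulated increments have exactly the variances that the sum in (8.2) prescribes (up to the quadrature error in the variance integral).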


pages: 210 words: 62,771

Turing's Vision: The Birth of Computer Science by Chris Bernhardt

Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Andrew Wiles, British Empire, cellular automata, Claude Shannon: information theory, complexity theory, Conway's Game of Life, discrete time, Douglas Hofstadter, Georg Cantor, Gödel, Escher, Bach, Henri Poincaré, Internet Archive, Jacquard loom, John Conway, John von Neumann, Joseph-Marie Jacquard, Norbert Wiener, Paul Erdős, Turing complete, Turing machine, Turing test, Von Neumann architecture

Each cell can have one of a number of states. We will only look at cases where there are just two states, which we will denote by white and black. This means that the tape will be divided into an infinite number of cells, which we depict as squares, each of which is colored either black or white. The computation takes place at discrete time intervals. The initial tape is given to us at time 0. It first gets updated at time 1, then at time 2 and so on. Even though it is really one tape that is evolving at each time interval, it is easier to describe each instance as a separate tape. Given a tape, the subsequent tape, one unit of time later, is given by an updating rule that for each cell looks at the state of that cell and the states of some of its neighbors on the initial tape and gives the cell’s new state on the updated tape.
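An updating rule of the kind described, where each cell's new state depends on its own state and its two neighbors', can be sketched as an elementary cellular automaton. Two assumptions for the sketch: a finite wrapped tape stands in for the text's infinite tape, and the rule is encoded as an 8-bit Wolfram rule number:

```python
def ca_step(tape, rule):
    """One discrete time step of an elementary cellular automaton.
    tape is a list of 0s (white) and 1s (black); each cell's new state is
    the bit of `rule` indexed by the pattern (left, self, right)."""
    n = len(tape)
    return [(rule >> (tape[(i - 1) % n] * 4 + tape[i] * 2 + tape[(i + 1) % n])) & 1
            for i in range(n)]
```

For example, rule 254 turns a cell black whenever any cell in its neighborhood is black, so a single black cell grows by one cell on each side per time step, while rule 0 blanks the tape.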


pages: 223 words: 60,936

Remote Work Revolution: Succeeding From Anywhere by Tsedal Neeley

Airbnb, Boycotts of Israel, call centre, cloud computing, coronavirus, Covid-19, COVID-19, cryptocurrency, discrete time, Donald Trump, future of work, global pandemic, iterative process, job satisfaction, knowledge worker, Lean Startup, mass immigration, natural language processing, remote work: asynchronous communication, remote working, Silicon Valley

As Tyler pointed out, collocation was crucial to the teams’ freedom to improvise and innovate through organic exchanges. A casual conversation about a Netflix show might lead to an intense brainstorming session about a new project. But in the virtual mode, these natural rhythms were inaccessible. People’s interactions were now confined to a combination of text, audio, and video at discrete times of the day. Casual conversation and nonverbal cues such as hand gestures and facial expressions—key elements of face-to-face interaction in the workplace—were lost in translation. Hawkins noticed that the absence of these nonverbal cues made it more difficult for team members to know when to speak up and when to listen.


pages: 442 words: 39,064

Why Stock Markets Crash: Critical Events in Complex Financial Systems by Didier Sornette

Asian financial crisis, asset allocation, Berlin Wall, Bretton Woods, Brownian motion, business cycle, buy and hold, buy the rumour, sell the news, capital asset pricing model, capital controls, continuous double auction, currency peg, Deng Xiaoping, discrete time, diversified portfolio, Elliott wave, Erdős number, experimental economics, financial innovation, floating exchange rates, frictionless, frictionless market, full employment, global village, implied volatility, index fund, information asymmetry, intangible asset, invisible hand, John von Neumann, joint-stock company, law of one price, Louis Bachelier, mandelbrot fractal, margin call, market bubble, market clearing, market design, market fundamentalism, mental accounting, moral hazard, Network effects, new economy, oil shock, open economy, pattern recognition, Paul Erdős, Paul Samuelson, quantitative trading / quantitative finance, random walk, risk/return, Ronald Reagan, Schrödinger's Cat, selection bias, short selling, Silicon Valley, South Sea Bubble, statistical model, stochastic process, stocks for the long run, Tacoma Narrows Bridge, technological singularity, The Coming Technological Singularity, The Wealth of Nations by Adam Smith, Tobin tax, total factor productivity, transaction costs, tulip mania, VA Linux, Y2K, yield curve

The following reasoning allows us to understand intuitively the origin of the appearance of an infinite slope or an infinite value at a finite time tc, called a finite-time singularity. Suppose, for instance, that the growth rate of the hazard rate doubles when the hazard rate doubles. For simplicity, we consider discrete-time intervals as follows. Starting with a hazard rate of 1 per unit time, we assume it grows at a constant rate of 1% per day until it doubles. We estimate the doubling time as proportional to the inverse of the growth rate, that is, approximately 1/1% = 1/0.01 = 100 days. There is a multiplicative correction term equal to ln 2 ≈ 0.69, such that the doubling time is ln 2/1% ≈ 69 days.

The faster-than-exponential growths observed in Figures 10.1 and 10.2 correspond to nonconstant growth rates, which increase with population or with the size of economic factors. Suppose, for instance, that the growth rate of the population doubles when the population doubles. For simplicity, we consider discrete time intervals as follows. Starting with a population of 1,000, we assume it grows at a constant rate of 1% per year until it doubles. We estimate the doubling time as proportional to the inverse of the growth rate, that is, approximately 1/1% = 1/0.01 = 100 years.
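The finite-time singularity follows because the successive doubling times — ln 2/r0 ≈ 69, then 34.5, then 17.25, and so on — form a geometric series with a finite sum of 2 ln 2/r0. A small sketch checking that sum numerically (function name and truncation depth are ours):

```python
import math

def blowup_time(r0=0.01, doublings=60):
    """Sum the doubling times when the growth rate itself doubles at each
    doubling of the quantity: t_c = (ln 2 / r0) * (1 + 1/2 + 1/4 + ...),
    which converges to 2 * ln 2 / r0 -- a finite time despite endless growth."""
    return sum(math.log(2) / (r0 * 2 ** k) for k in range(doublings))
```

With r0 = 1% per unit time the first doubling takes about 69 units, matching the text, and the whole infinite cascade of doublings completes by about 139 units: the quantity diverges at that finite critical time.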


pages: 681 words: 64,159

Numpy Beginner's Guide - Third Edition by Ivan Idris

algorithmic trading, business intelligence, Conway's Game of Life, correlation coefficient, Debian, discrete time, en.wikipedia.org, functional programming, general-purpose programming language, Khan Academy, p-value, random walk, reversible computing, time value of money

We plotted the closing price for QQQ with a trend line (see trend.py):

from matplotlib.finance import quotes_historical_yahoo
from datetime import date
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
from matplotlib.dates import DayLocator
from matplotlib.dates import MonthLocator

today = date.today()
start = (today.year - 1, today.month, today.day)
quotes = quotes_historical_yahoo("QQQ", start, today)
quotes = np.array(quotes)
dates = quotes.T[0]
qqq = quotes.T[4]
y = signal.detrend(qqq)
alldays = DayLocator()
months = MonthLocator()
month_formatter = DateFormatter("%b %Y")
fig = plt.figure()
ax = fig.add_subplot(111)
plt.title('QQQ close price with trend')
plt.ylabel('Close price')
plt.plot(dates, qqq, 'o', dates, qqq - y, '-')
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(month_formatter)
fig.autofmt_xdate()
plt.grid()
plt.show()

Fourier analysis Signals in the real world often have a periodic nature. A commonly used tool to deal with these signals is the Discrete Fourier transform (see https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform). The Discrete Fourier transform is a transformation from the time domain into the frequency domain, that is, the linear decomposition of a periodic signal into sine and cosine functions with various frequencies. Functions for Fourier transforms can be found in the scipy.fftpack module (NumPy also has its own Fourier package, numpy.fft).


pages: 741 words: 179,454

Extreme Money: Masters of the Universe and the Cult of Risk by Satyajit Das

affirmative action, Albert Einstein, algorithmic trading, Andy Kessler, Asian financial crisis, asset allocation, asset-backed security, bank run, banking crisis, banks create money, Basel III, Bear Stearns, Benoit Mandelbrot, Berlin Wall, Bernie Madoff, Big bang: deregulation of the City of London, Black Swan, Bonfire of the Vanities, bonus culture, Bretton Woods, BRICs, British Empire, business cycle, buy the rumour, sell the news, capital asset pricing model, Carmen Reinhart, carried interest, Celtic Tiger, clean water, cognitive dissonance, collapse of Lehman Brothers, collateralized debt obligation, corporate governance, corporate raider, creative destruction, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, Daniel Kahneman / Amos Tversky, debt deflation, Deng Xiaoping, deskilling, discrete time, diversification, diversified portfolio, Doomsday Clock, Edward Thorp, Emanuel Derman, en.wikipedia.org, Eugene Fama: efficient market hypothesis, eurozone crisis, Everybody Ought to Be Rich, Fall of the Berlin Wall, financial independence, financial innovation, financial thriller, fixed income, foreign exchange controls, full employment, global reserve currency, Goldman Sachs: Vampire Squid, Gordon Gekko, greed is good, happiness index / gross national happiness, haute cuisine, high net worth, Hyman Minsky, index fund, information asymmetry, interest rate swap, invention of the wheel, invisible hand, Isaac Newton, James Carville said: "I would like to be reincarnated as the bond market. 
You can intimidate everybody.", job automation, Johann Wolfgang von Goethe, John Meriwether, joint-stock company, Jones Act, Joseph Schumpeter, Kenneth Arrow, Kenneth Rogoff, Kevin Kelly, laissez-faire capitalism, load shedding, locking in a profit, Long Term Capital Management, Louis Bachelier, margin call, market bubble, market fundamentalism, Marshall McLuhan, Martin Wolf, mega-rich, merger arbitrage, Mikhail Gorbachev, Milgram experiment, money market fund, Mont Pelerin Society, moral hazard, mortgage debt, mortgage tax deduction, mutually assured destruction, Myron Scholes, Naomi Klein, National Debt Clock, negative equity, NetJets, Network effects, new economy, Nick Leeson, Nixon shock, Northern Rock, nuclear winter, oil shock, Own Your Own Home, Paul Samuelson, pets.com, Philip Mirowski, Plutocrats, plutocrats, Ponzi scheme, price anchoring, price stability, profit maximization, quantitative easing, quantitative trading / quantitative finance, Ralph Nader, RAND corporation, random walk, Ray Kurzweil, regulatory arbitrage, rent control, rent-seeking, reserve currency, Richard Feynman, Richard Thaler, Right to Buy, risk free rate, risk-adjusted returns, risk/return, road to serfdom, Robert Shiller, Robert Shiller, Rod Stewart played at Stephen Schwarzman birthday party, rolodex, Ronald Reagan, Ronald Reagan: Tear down this wall, Satyajit Das, savings glut, shareholder value, Sharpe ratio, short selling, Silicon Valley, six sigma, Slavoj Žižek, South Sea Bubble, special economic zone, statistical model, Stephen Hawking, Steve Jobs, survivorship bias, tail risk, The Chicago School, The Great Moderation, the market place, the medium is the message, The Myth of the Rational Market, The Nature of the Firm, the new new thing, The Predators' Ball, The Theory of the Leisure Class by Thorstein Veblen, The Wealth of Nations by Adam Smith, Thorstein Veblen, too big to fail, trickle-down economics, Turing test, two and twenty, Upton Sinclair, value at risk, Yogi Berra, 
zero-coupon bond, zero-sum game

By adjusting the ratio of options to the shares, you can construct a portfolio where the changes in the value of the options and shares exactly offset, at least, for small movements in the stock price. Working as research assistant to Paul Samuelson, Robert Merton was also working on option pricing. Merton introduced an idea—continuous time mathematics. Black and Scholes assumed that the portfolio would be rebalanced to keep it free of risk by changing the number of shares held at discrete time intervals. Merton forced the time intervals into infinitely small fragments of time, effectively allowing continuous and instantaneous rebalancing. Although unrealistic and practically impossible, this allowed a mathematical solution using Ito’s (pronounced “Eto”) Lemma to solve the equation. Ito, an eccentric Japanese mathematician, later did not remember deriving the eponymous technique.



pages: 320 words: 24,110

Elements of Mathematics for Economics and Finance by Vassilis C. Mavron, Timothy N. Phillips

constrained optimization, discrete time, the market place

The divergence is uniform (see Fig. 12.5).

Figure 12.4 Graph of the solution of Example 12.3.4 showing oscillatory divergence.

12.5 The Cobweb Model The Cobweb model is an economic model for analysing periodic fluctuations in price, supply, and demand that oscillate towards equilibrium. It is assumed that the quantities involved change only at discrete time intervals and that there is a time lag in the response of suppliers to price changes. For instance, the supply this year of a particular agricultural product depends on the price obtained from the previous year’s harvest. The demand for the produce will depend of course on this year’s price. Another example is that of package holidays.
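The lagged supply response can be sketched with standard linear supply and demand schedules. The functional forms and parameter names here are the usual textbook linearization, stated as assumptions of the sketch:

```python
def cobweb(p0, a, b, c, d, steps):
    """Cobweb price dynamics: supply responds to last period's price,
    S_t = c + d*P_{t-1}, while demand responds to this period's,
    D_t = a - b*P_t. Market clearing D_t = S_t gives the recurrence
    P_t = (a - c - d*P_{t-1}) / b. Returns the price path from p0."""
    prices = [p0]
    for _ in range(steps):
        prices.append((a - c - d * prices[-1]) / b)
    return prices
```

Because the recurrence multiplies the deviation from equilibrium by −d/b each period, prices spiral (oscillate) toward the equilibrium (a − c)/(b + d) when d < b, and diverge when d > b — the two cobweb regimes.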


A Primer for the Mathematics of Financial Engineering by Dan Stefanica

asset allocation, Black-Scholes formula, capital asset pricing model, constrained optimization, delta neutral, discrete time, Emanuel Derman, implied volatility, law of one price, margin call, quantitative trading / quantitative finance, risk free rate, Sharpe ratio, short selling, time value of money, transaction costs, volatility smile, yield curve, zero-coupon bond

Bond Duration and Bond Convexity. The zero rate r(0, t) between time 0 and time t is the rate of return of a cash deposit made at time 0 and maturing at time t. If specified for all values of t, then r(0, t) is called the zero rate curve and is a continuous function of t. Interest can be compounded at discrete time intervals, e.g., annually, semiannually, monthly, etc., or can be compounded continuously. Unless otherwise specified, we assume that interest is compounded continuously. For continuously compounded interest, the value at time t of B(0) currency units (e.g., U.S. dollars) is B(t) = B(0) exp(t · r(0, t)), where exp(x) = e^x.


pages: 321

Finding Alphas: A Quantitative Approach to Building Trading Strategies by Igor Tulchinsky

algorithmic trading, asset allocation, automated trading system, backpropagation, backtesting, barriers to entry, business cycle, buy and hold, capital asset pricing model, constrained optimization, corporate governance, correlation coefficient, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, financial intermediation, Flash crash, implied volatility, index arbitrage, index fund, intangible asset, iterative process, Long Term Capital Management, loss aversion, market design, market microstructure, merger arbitrage, natural language processing, passive investing, pattern recognition, performance metric, Performance of Mutual Funds in the Period, popular capitalism, prediction markets, price discovery process, profit motive, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk free rate, risk tolerance, risk-adjusted returns, risk/return, selection bias, sentiment analysis, shareholder value, Sharpe ratio, short selling, Silicon Valley, speech recognition, statistical arbitrage, statistical model, stochastic process, survivorship bias, systematic trading, text mining, transaction costs, Vanguard fund, yield curve

In predictive modeling, knowing how to walk this fine line makes all the difference. That said, let’s take a look at some of the mathematical techniques and algorithms that should be part of your quant tool kit. We will go through the underlying intuition and the practical use cases for these algorithms. DIGITAL FILTERS Digital filters perform mathematical operations on discrete time signals to attenuate or amplify certain frequencies. These mathematical transforms are characterized by transfer functions that describe how they respond to various inputs. As such, digital filter design involves expressing performance specifications in the form of a suitable transfer function.
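For instance, the simplest FIR filter, a moving average, attenuates high frequencies while passing slow trends (a generic sketch, not taken from the book):

```python
def fir_filter(signal, coeffs):
    """Apply an FIR filter (convolution with coefficient taps) to a
    discrete-time signal; output starts once k samples are available."""
    n, k = len(signal), len(coeffs)
    return [sum(coeffs[j] * signal[i - j] for j in range(k))
            for i in range(k - 1, n)]

# 3-tap moving average: the fast +1/-1 oscillation is attenuated by 3x,
# while a constant (zero-frequency) input passes through unchanged
smooth = fir_filter([1, -1, 1, -1, 1, -1], [1/3, 1/3, 1/3])
```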


pages: 319 words: 90,965

The End of College: Creating the Future of Learning and the University of Everywhere by Kevin Carey

Albert Einstein, barriers to entry, Bayesian statistics, Berlin Wall, business cycle, business intelligence, carbon-based life, Claude Shannon: information theory, complexity theory, David Heinemeier Hansson, declining real wages, deliberate practice, discrete time, disruptive innovation, double helix, Douglas Engelbart, Downton Abbey, Drosophila, Firefox, Frank Gehry, Google X / Alphabet X, informal economy, invention of the printing press, inventory management, John Markoff, Khan Academy, Kickstarter, low skilled workers, Lyft, Marc Andreessen, Mark Zuckerberg, meta-analysis, natural language processing, Network effects, open borders, pattern recognition, Peter Thiel, pez dispenser, ride hailing / ride sharing, Ronald Reagan, Ruby on Rails, Sand Hill Road, self-driving car, Silicon Valley, Silicon Valley startup, social web, South of Market, San Francisco, speech recognition, Steve Jobs, technoutopianism, transcontinental railway, uber lyft, Vannevar Bush

By 2014, edX was offering hundreds of free online courses in subjects including the Poetry of Walt Whitman, the History of Early Christianity, Computational Neuroscience, Flight Vehicle Aerodynamics, Shakespeare, Dante’s Divine Comedy, Bioethics, Contemporary India, Historical Relic Treasures and Cultural China, Linear Algebra, Autonomous Mobile Robots, Electricity and Magnetism, Discrete Time Signals and Systems, Introduction to Global Sociology, Behavioral Economics, Fundamentals of Immunology, Computational Thinking and Data Science, and an astrophysics course titled Greatest Unsolved Mysteries of the Universe. Doing this seemed to contradict five hundred years of higher-education economics in which the wealthiest and most sought-after colleges enforced a rigid scarcity over their products and services.


The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences by Rob Kitchin

Bayesian statistics, business intelligence, business process, cellular automata, Celtic Tiger, cloud computing, collateralized debt obligation, conceptual framework, congestion charging, corporate governance, correlation does not imply causation, crowdsourcing, discrete time, disruptive innovation, George Gilder, Google Earth, Infrastructure as a Service, Internet Archive, Internet of things, invisible hand, knowledge economy, late capitalism, lifelogging, linked data, longitudinal study, Masdar, means of production, Nate Silver, natural language processing, openstreetmap, pattern recognition, platform as a service, recommendation engine, RFID, semantic web, sentiment analysis, slashdot, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, smart grid, smart meter, software as a service, statistical model, supply-chain management, the scientific method, The Signal and the Noise by Nate Silver, transaction costs

The interlinking of data in Obama’s campaign created what Crampton et al. (2012) term an ‘information amplifier effect’, wherein the sum of data is more than the parts. Velocity A fundamental difference between small and big data is the dynamic nature of data generation. Small data usually consist of studies that are freeze-framed at a particular time and space. Even in longitudinal studies, the data are captured at discrete times (e.g., every few months or years). For example, censuses are generally conducted every five or ten years. In contrast, big data are generated on a much more continuous basis, in many cases in real-time or near to real-time. Rather than a sporadic trickle of data, laboriously harvested or processed, data are flowing at speed.


The Fractalist by Benoit Mandelbrot

Albert Einstein, Benoit Mandelbrot, Brownian motion, business cycle, Claude Shannon: information theory, discrete time, double helix, Georg Cantor, Henri Poincaré, Honoré de Balzac, illegal immigration, Isaac Newton, iterative process, Johannes Kepler, John von Neumann, linear programming, Louis Bachelier, Louis Blériot, Louis Pasteur, mandelbrot fractal, New Journalism, Norbert Wiener, Olbers’ paradox, Paul Lévy, Richard Feynman, statistical model, urban renewal, Vilfredo Pareto

In mathematical lingo, this is a quadratic map, something close to an ancient curve called a parabola. But in the Mandelbrot set, z denotes a point in the plane, and the formula expresses how a point’s position at some instant in time defines its position at the next instant. Again, in mathematical lingo, this formula defines the very simplest form of dynamics in discrete time—a form called quadratic dynamics. Fine, the formula is indeed breathtakingly simple. So why bother? This formula is then iterated—that is, repeated with no end—defining with increasing refinement a shape that can be approximated using a very simple computer program. To wide surprise, this shape is both overwhelmingly rich in detail and minutely subtle, and it continues to provide a common and fertile ground for exploration: from Brahman mathematicians to students and those in the earthy lower castes, from artists to the merely curious.
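The escape-time test behind most pictures of the Mandelbrot set takes only a few lines (once |z| exceeds 2 the orbit is guaranteed to diverge):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate the quadratic map z -> z**2 + c from z = 0; the point c
    belongs to the Mandelbrot set if the orbit stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True
```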


pages: 287 words: 86,870

The Glass Hotel by Emily St. John Mandel

Bernie Madoff, big-box store, discrete time, East Village, high net worth, McMansion, Panamax, Pepto Bismol, Ponzi scheme, sovereign wealth fund, white picket fence, Y2K

The room was a single, because one of the very few things he and his mother had agreed on was that it would be disastrous if Paul had a roommate and the roommate was into opioids, so he was almost always alone. The room was so small that he was claustrophobic unless he sat directly in front of the window. His interactions with other people were few and superficial. There was a dark cloud of exams on the near horizon, but studying was hopeless. He kept trying to focus on probability theory and discrete-time martingales, but his thoughts kept sliding toward a piano composition that he knew he’d never finish, this very straightforward C-major situation except with little flights of destabilizing minor chords. In early December he walked out of the library at the same time as Tim, who was in two of his classes and also preferred the last row of the lecture hall.


pages: 550 words: 89,316

The Sum of Small Things: A Theory of the Aspirational Class by Elizabeth Currid-Halkett

assortative mating, back-to-the-land, barriers to entry, Bernie Sanders, BRICs, Capital in the Twenty-First Century by Thomas Piketty, clean water, cognitive dissonance, David Brooks, deindustrialization, Deng Xiaoping, discrete time, disruptive innovation, Downton Abbey, East Village, Edward Glaeser, en.wikipedia.org, Etonian, Geoffrey West, Santa Fe Institute, income inequality, iterative process, knowledge economy, longitudinal study, Mason jar, means of production, NetJets, new economy, New Urbanism, Plutocrats, plutocrats, post scarcity, post-industrial society, profit maximization, Richard Florida, selection bias, Silicon Valley, The Design of Experiments, the High Line, The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, the market place, The Theory of the Leisure Class by Thorstein Veblen, Thorstein Veblen, Tony Hsieh, Tyler Cowen: Great Stagnation, upwardly mobile, Veblen good, women in the workforce

Report on deadly factory collapse in Bangladesh finds widespread blame. New York Times. Retrieved from http://www.nytimes.com/2013/05/23/world/asia/report-on-bangladesh-building-collapse-finds-widespread-blame.html. Yoon, H., & Currid-Halkett, E. (2014). Industrial gentrification in West Chelsea, New York: Who survived and who did not? Empirical evidence from discrete-time survival analysis. Urban Studies 52(1), 20–49. doi:10.1177/0042098014536785. Young, M. (2014, March 9). SOMA: The stubborn uncoolness of San Francisco style. New York Magazine. Retrieved from http://nymag.com/news/features/san-francisco-style-2014-3/. Yueh, L. (2013, June 18). The rise of the global middle class.


pages: 554 words: 108,035

Scala in Depth by Tom Kleenex, Joshua Suereth

discrete time, domain-specific language, fault tolerance, functional programming, MVC pattern, sorting algorithm, type inference

Example: A timeline library We’d like to construct a timeline, or calendar, widget. This widget needs to display dates, times, and time ranges as well as associated events with each day. The fundamental concept in this library is going to be an InstantaneousTime. InstantaneousTime is a class that represents a particular discrete time within the time series. We could use the java.util.Date class, but we’d prefer something that’s immutable, as we’ve just learned how this can help simplify writing good equals and hashCode methods. In an effort to keep things simple, let’s have our underlying time storage be an integer count of seconds since midnight, January 1, 1970, Greenwich Mean Time on a Gregorian calendar.
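The book itself develops this in Scala, but the design can be sketched in Python: a frozen dataclass stands in for an immutable class whose equality and hashing derive from its single field, which is exactly why immutability makes equals/hashCode easy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen => immutable; __eq__ and __hash__ derived from fields
class InstantaneousTime:
    """A discrete point in time, stored as whole seconds since
    midnight, January 1, 1970 GMT, as in the text's design."""
    seconds: int

# equal values compare equal, hash alike, and collapse in sets/maps
t1 = InstantaneousTime(1_000_000)
t2 = InstantaneousTime(1_000_000)
```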


pages: 338 words: 104,684

The Deficit Myth: Modern Monetary Theory and the Birth of the People's Economy by Stephanie Kelton

2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, Affordable Care Act / Obamacare, American Society of Civil Engineers: Report Card, Asian financial crisis, bank run, Bernie Madoff, Bernie Sanders, blockchain, bond market vigilante , Bretton Woods, business cycle, capital controls, central bank independence, collective bargaining, Covid-19, COVID-19, currency manipulation / currency intervention, currency peg, David Graeber, David Ricardo: comparative advantage, decarbonisation, deindustrialization, discrete time, Donald Trump, eurozone crisis, fiat currency, floating exchange rates, Food sovereignty, full employment, Gini coefficient, global reserve currency, global supply chain, Hyman Minsky, income inequality, inflation targeting, Intergovernmental Panel on Climate Change (IPCC), investor state dispute settlement, Isaac Newton, Jeff Bezos, liquidity trap, Mahatma Gandhi, manufacturing employment, market bubble, Mason jar, Modern Monetary Theory, mortgage debt, Naomi Klein, National Debt Clock, new economy, New Urbanism, Nixon shock, Nixon triggered the end of the Bretton Woods system, obamacare, open economy, Paul Samuelson, Ponzi scheme, Post-Keynesian economics, price anchoring, price stability, pushing on a string, quantitative easing, race to the bottom, reserve currency, Richard Florida, Ronald Reagan, San Francisco homelessness, shareholder value, Silicon Valley, Tax Reform Act of 1986, trade liberalization, urban planning, working-age population, Works Progress Administration, yield curve, zero-sum game

In 1998, I published an early draft of the paper, and two years later a more polished version became my first peer-reviewed, academic publication.21 The answer to the question I had posed was no. It’s not easy to see how it all works. In fact, it’s impossible to disentangle the government’s monetary operations in discrete time. On any given day, there are, literally, millions of moving parts. Throughout the year, the Federal Reserve handles trillions of dollars in US Treasury payments. Each month, millions of households and businesses write checks to Uncle Sam, and those payments clear between commercial banks and the Federal Reserve.22 The Treasury, the Federal Reserve, and the primary dealers coordinate about when to auction Treasuries, what mix of maturities to offer, and how many total securities to offer at each auction.


The Global Money Markets by Frank J. Fabozzi, Steven V. Mann, Moorad Choudhry

asset allocation, asset-backed security, bank run, Bear Stearns, Bretton Woods, buy and hold, collateralized debt obligation, credit crunch, discounted cash flows, discrete time, disintermediation, fixed income, high net worth, intangible asset, interest rate derivative, interest rate swap, large denomination, locking in a profit, London Interbank Offered Rate, Long Term Capital Management, margin call, market fundamentalism, money market fund, moral hazard, mortgage debt, paper trading, Right to Buy, short selling, stocks for the long run, time value of money, value at risk, Y2K, yield curve, zero-coupon bond, zero-sum game

This funding is then a variable-rate liability and is the bank’s risk, unless the rate has been locked-in beforehand. The same assumption applies when the bank runs a cash surplus position, and the interest rate for any period in the future is unknown. The gap position at a given time bucket is sensitive to the interest rate that applies to that period. The gap is calculated for each discrete time bucket, so there is a net exposure for, say, 0–1 month, 1–3 months, and so on. Loans and deposits do not, except at the time of being undertaken, have precise maturities like that, so they are “mapped” to a time bucket in terms of their relative weighting. For example, a $100 million deposit that matures in 20 days’ time will have most of its balance mapped to the 3-week time bucket, but a smaller amount will also be allocated to the 2-week bucket.
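One possible mapping rule is linear interpolation on days to maturity; the text does not specify the exact weighting, so the sketch below is illustrative:

```python
def map_to_buckets(amount: float, days_to_maturity: int, bucket_days):
    """Split a balance across the two adjacent maturity buckets by
    linear interpolation on days (one simple weighting scheme)."""
    for lo, hi in zip(bucket_days, bucket_days[1:]):
        if lo <= days_to_maturity <= hi:
            w_hi = (days_to_maturity - lo) / (hi - lo)
            return {lo: amount * (1 - w_hi), hi: amount * w_hi}
    return {bucket_days[-1]: amount}   # beyond the last bucket boundary

# $100m maturing in 20 days, with buckets at 7, 14, 21, and 30 days:
# most of the balance lands in the 3-week (21-day) bucket
split = map_to_buckets(100.0, 20, [7, 14, 21, 30])
```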


pages: 492 words: 118,882

The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory by Kariappa Bheemaiah

accounting loophole / creative accounting, Ada Lovelace, Airbnb, algorithmic trading, asset allocation, autonomous vehicles, balance sheet recession, bank run, banks create money, Basel III, basic income, Ben Bernanke: helicopter money, bitcoin, blockchain, Bretton Woods, business cycle, business process, call centre, capital controls, Capital in the Twenty-First Century by Thomas Piketty, cashless society, cellular automata, central bank independence, Claude Shannon: information theory, cloud computing, cognitive dissonance, collateralized debt obligation, commoditize, complexity theory, constrained optimization, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, crowdsourcing, cryptocurrency, David Graeber, deskilling, Diane Coyle, discrete time, disruptive innovation, distributed ledger, diversification, double entry bookkeeping, Ethereum, ethereum blockchain, fiat currency, financial innovation, financial intermediation, Flash crash, floating exchange rates, Fractional reserve banking, full employment, George Akerlof, illegal immigration, income inequality, income per capita, inflation targeting, information asymmetry, interest rate derivative, inventory management, invisible hand, John Maynard Keynes: technological unemployment, John von Neumann, joint-stock company, Joseph Schumpeter, Kenneth Arrow, Kenneth Rogoff, Kevin Kelly, knowledge economy, large denomination, liquidity trap, London Whale, low skilled workers, M-Pesa, Marc Andreessen, market bubble, market fundamentalism, Mexican peso crisis / tequila crisis, MITM: man-in-the-middle, Money creation, money market fund, money: store of value / unit of account / medium of exchange, mortgage debt, natural language processing, Network effects, new economy, Nikolai Kondratiev, offshore financial centre, packet switching, Pareto efficiency, pattern recognition, peer-to-peer lending, Ponzi scheme, precariat, pre–internet, price mechanism, price 
stability, private sector deleveraging, profit maximization, QR code, quantitative easing, quantitative trading / quantitative finance, Ray Kurzweil, Real Time Gross Settlement, rent control, rent-seeking, Satoshi Nakamoto, Satyajit Das, Savings and loan crisis, savings glut, seigniorage, Silicon Valley, Skype, smart contracts, software as a service, software is eating the world, speech recognition, statistical model, Stephen Hawking, supply-chain management, technology bubble, The Chicago School, The Future of Employment, The Great Moderation, the market place, The Nature of the Firm, the payments system, the scientific method, The Wealth of Nations by Adam Smith, Thomas Kuhn: the structure of scientific revolutions, too big to fail, trade liberalization, transaction costs, Turing machine, Turing test, universal basic income, Von Neumann architecture, Washington Consensus

Consider our previous stock market example: it is easier to think about the behavioural traits of an individual stockbroking agent than about how news of that agent winning the lottery will affect the throngs of agents who are related to this agent. In fact, the modeller does not even have to go into excruciating detail regarding the individual agents’ traits. Starting with some initial hypotheses, the modeller can generate a model that represents these hypotheses. As the dynamics of the system evolve over discrete time steps, the results can be tested for validity, and if they are representative of real-world phenomena, a proof of concept is formed. The advantage of this approach is that it can be employed to study general properties of a system which are not sensitive to the initial conditions, or to study the dynamics of a specific system with fairly well-known initial conditions, e.g., the impact of the baby boomers’ retirement on the US stock market (Bandini et al., 2012).
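As an illustration, here is a toy agent-based model evolved over discrete time steps (a hypothetical wealth-exchange rule invented for this sketch, not a model from the text):

```python
import random

def step(wealths, trade_fraction=0.1, rng=random.Random(42)):
    """One discrete time step: two randomly chosen agents trade a
    fraction of the poorer agent's wealth. Total wealth is conserved."""
    i, j = rng.sample(range(len(wealths)), 2)
    amount = trade_fraction * min(wealths[i], wealths[j])
    wealths[i] += amount
    wealths[j] -= amount
    return wealths

w = [100.0] * 10
for _ in range(1000):   # evolve the system over discrete time steps
    step(w)
# total wealth is conserved, but its distribution across agents is not
```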


pages: 396 words: 112,748

Chaos: Making a New Science by James Gleick

Benoit Mandelbrot, business cycle, butterfly effect, cellular automata, Claude Shannon: information theory, discrete time, Edward Lorenz: Chaos theory, experimental subject, Georg Cantor, Henri Poincaré, Herbert Marcuse, Isaac Newton, iterative process, John von Neumann, Louis Pasteur, mandelbrot fractal, Murray Gell-Mann, Norbert Wiener, pattern recognition, Richard Feynman, Stephen Hawking, stochastic process, trade route

Population biology learned quite a bit about the history of life, how predators interact with their prey, how a change in a country’s population density affects the spread of disease. If a certain mathematical model surged ahead, or reached equilibrium, or died out, ecologists could guess something about the circumstances in which a real population or epidemic would do the same. One helpful simplification was to model the world in terms of discrete time intervals, like a watch hand that jerks forward second by second instead of gliding continuously. Differential equations describe processes that change smoothly over time, but differential equations are hard to compute. Simpler equations—“difference equations”—can be used for processes that jump from state to state.
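The logistic map is the canonical difference equation of this kind; iterating it is trivial compared with solving the corresponding differential equation:

```python
def logistic_steps(x0: float, r: float, n: int):
    """Iterate the logistic difference equation x_{t+1} = r * x_t * (1 - x_t),
    a discrete-time stand-in for smooth differential growth models."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# at r = 2.5 the population settles to the equilibrium 1 - 1/r = 0.6;
# at larger r the same equation oscillates and eventually turns chaotic
stable = logistic_steps(0.2, 2.5, 100)
```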


pages: 404 words: 43,442

The Art of R Programming by Norman Matloff

Debian, discrete time, Donald Knuth, functional programming, general-purpose programming language, linked data, sorting algorithm, statistical model

Table 15-1: Common GDB Commands

    l    List code lines
    b    Set breakpoint
    r    Run/rerun
    n    Step to next statement
    s    Step into function call
    p    Print variable or expression
    c    Continue
    h    Help
    q    Quit

15.1.5 Extended Example: Prediction of Discrete-Valued Time Series

Recall our example in Section 2.5.2 where we observed 0- and 1-valued data, one per time period, and attempted to predict the value in any period from the previous k values, using majority rule. We developed two competing functions for the job, preda() and predb(), as follows:

    # prediction in discrete time series; 0s and 1s; use k consecutive
    # observations to predict the next, using majority rule; calculate the
    # error rate
    preda <- function(x,k) {
       n <- length(x)
       k2 <- k/2
       # the vector pred will contain our predicted values
       pred <- vector(length=n-k)
       for (i in 1:(n-k)) {
          if (sum(x[i:(i+(k-1))]) >= k2) pred[i] <- 1 else pred[i] <- 0
       }
       return(mean(abs(pred-x[(k+1):n])))
    }

    predb <- function(x,k) {
       n <- length(x)
       k2 <- k/2
       pred <- vector(length=n-k)
       sm <- sum(x[1:k])
       if (sm >= k2) pred[1] <- 1 else pred[1] <- 0
       if (n-k >= 2) {
          for (i in 2:(n-k)) {
             sm <- sm + x[i+k-1] - x[i-1]
             if (sm >= k2) pred[i] <- 1 else pred[i] <- 0
          }
       }
       return(mean(abs(pred-x[(k+1):n])))
    }

Since the latter avoids duplicate computation, we speculated it would be faster.


pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, backpropagation, basic income, bitcoin, blockchain, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, discrete time, Douglas Engelbart, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, Flash crash, friendly AI, functional fixedness, global pandemic, Google Glasses, hive mind, Ian Bogost, income inequality, information trail, Internet of things, invention of writing, iterative process, Jaron Lanier, job automation, Johannes Kepler, John Markoff, John von Neumann, Kevin Kelly, knowledge worker, loose coupling, microbiome, Moneyball by Michael Lewis explains big data, natural language processing, Network effects, Norbert Wiener, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Satyajit Das, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, social intelligence, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

KAUFFMAN Pioneer of biocomplexity research; affiliate, Institute for Systems Biology, Seattle; author, Reinventing the Sacred: A New View of Science, Reason, and Religion The advent of quantum biology, light-harvesting molecules, bird navigation, perhaps smell, suggests that sticking to classical physics in biology may turn out to be simply stubborn. Now Turing Machines are discrete state (0,1), discrete time (T, T+1) subsets of classical physics. We all know they, like Shannon information, are merely syntactic. Wonderful mathematical results such as Gregory Chaitin’s omega—the probability that a program will halt, which is totally non-computable and nonalgorithmic—tell us that the human mind, as Roger Penrose also argued, cannot be merely algorithmic.


Commodity Trading Advisors: Risk, Performance Analysis, and Selection by Greg N. Gregoriou, Vassilios Karavas, François-Serge Lhabitant, Fabrice Douglas Rouah

Asian financial crisis, asset allocation, backtesting, buy and hold, capital asset pricing model, collateralized debt obligation, commodity trading advisor, compound rate of return, constrained optimization, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, discrete time, distributed generation, diversification, diversified portfolio, dividend-yielding stocks, fixed income, high net worth, implied volatility, index arbitrage, index fund, interest rate swap, iterative process, linear programming, London Interbank Offered Rate, Long Term Capital Management, market fundamentalism, merger arbitrage, Mexican peso crisis / tequila crisis, p-value, Pareto efficiency, Performance of Mutual Funds in the Period, Ponzi scheme, quantitative trading / quantitative finance, random walk, risk free rate, risk-adjusted returns, risk/return, selection bias, Sharpe ratio, short selling, stochastic process, survivorship bias, systematic trading, tail risk, technology bubble, transaction costs, value at risk, zero-sum game

The return data were converted to log changes,[1] so they can be interpreted as percentage changes in continuous time. The mean returns presented in Table 3.1 show CTA returns are higher than those of public or private returns. This result is consistent with those in previous literature. The conventional wisdom as to why CTAs have higher returns is that they incur lower costs. However, CTA returns may be higher because of selectivity or reporting biases.

[1] The formula used was r_it = ln(1 + d_it/100) × 100, where d_it is the discrete time return. The adjustment factor of 100 is used since the data are measured as percentages.
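The conversion in the footnote can be checked directly:

```python
import math

def log_return(discrete_pct: float) -> float:
    """Convert a discrete-time percentage return d to the continuously
    compounded (log) return r = ln(1 + d/100) * 100, as in the text."""
    return math.log(1 + discrete_pct / 100) * 100

r = log_return(5.0)   # a 5% discrete return is about 4.88% in log terms
```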


pages: 759 words: 166,687

Between Human and Machine: Feedback, Control, and Computing Before Cybernetics by David A. Mindell

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Charles Lindbergh, Claude Shannon: information theory, Computer Numeric Control, discrete time, Frederick Winslow Taylor, From Mathematics to the Technologies of Life and Death, James Watt: steam engine, John von Neumann, Menlo Park, Norbert Wiener, Paul Samuelson, Ronald Reagan, Silicon Valley, Spread Networks laid a new fibre optics cable between New York and Chicago, telerobotics, Turing machine

George Stibitz noted that “people with mechanical experience think all electronic devices full of troubles, and correspondingly reverse opinions [were held] by the others.” 56 In a follow-up memo on the meeting George Stibitz clarified his own thinking on the subject. For him, the important distinction was not between mechanical and electronic but between analog and numerical, as well as between continuous and discrete time. The key characteristic of numerical machines, Stibitz added, was that analog machines shared the same dynamics as the problems they represented, whereas digital computers did not. Indeed one advantage of numerical techniques was that they decoupled the structure of the computer from that of the calculation.


Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps by Valliappa Lakshmanan, Sara Robinson, Michael Munn

A Pattern Language, Airbnb, algorithmic trading, automated trading system, business intelligence, business process, combinatorial explosion, computer vision, continuous integration, Covid-19, COVID-19, DevOps, discrete time, en.wikipedia.org, iterative process, Kubernetes, microservices, mobile money, natural language processing, Netflix Prize, optical character recognition, pattern recognition, performance metric, recommendation engine, ride hailing / ride sharing, selection bias, self-driving car, sentiment analysis, speech recognition, statistical model, the payments system, web application

The updated model is deployed as a replacement only if it outperforms the previous model with respect to a test set of current data. Figure 5-6. Continuous evaluation provides model evaluation each day as new data is collected. Periodic retraining and model comparison provides evaluation at discrete time points. So how often should you schedule retraining? The timeline for retraining will depend on the business use case, prevalence of new data, and the cost (in time and money) of executing the retraining pipeline. Sometimes, the time horizon of the model naturally determines when to schedule retraining jobs.


Principles of Protocol Design by Robin Sharp

accounting loophole / creative accounting, business process, discrete time, fault tolerance, finite state, functional programming, Gödel, Escher, Bach, information retrieval, loose coupling, MITM: man-in-the-middle, packet switching, RFC: Request For Comment, stochastic process

After a number of attempts, the message will presumably get through, unless of course the system is overloaded or one of the senders has a defect and never stops transmitting. Contention protocols offer a form of what is known as statistical multiplexing, where the capacity of the multiplexed service is divided out among the senders in a non-deterministic manner. Their analysis (see, for example, Chapter 4 in [11]) is usually based on the theory of discrete-time Markov chains, which we shall not consider in detail here. In the case of unrestricted contention protocols, this analysis yields the intuitively obvious result that unless the generated traffic (number of new messages generated per unit time) is very moderate, then unrestricted contention is not a very effective method, leading to many collisions and long delays before a given message in fact gets through.
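The qualitative result can be reproduced with a small discrete-time simulation (a simplified slotted-contention model written for this sketch, not the chapter's Markov-chain analysis):

```python
import random

def slotted_contention(n_senders: int, p_send: float, slots: int) -> float:
    """Simulate an unrestricted contention channel in discrete time slots:
    a slot succeeds only when exactly one sender transmits. Returns the
    observed success rate (throughput per slot)."""
    rng = random.Random(0)   # fixed seed for a reproducible run
    successes = 0
    for _ in range(slots):
        transmitting = sum(rng.random() < p_send for _ in range(n_senders))
        if transmitting == 1:
            successes += 1
    return successes / slots

light = slotted_contention(20, 0.01, 20_000)  # moderate traffic: usable throughput
heavy = slotted_contention(20, 0.30, 20_000)  # heavy traffic: mostly collisions
```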


pages: 543 words: 147,357

Them And Us: Politics, Greed And Inequality - Why We Need A Fair Society by Will Hutton

Andrei Shleifer, asset-backed security, bank run, banking crisis, Bear Stearns, Benoit Mandelbrot, Berlin Wall, Bernie Madoff, Big bang: deregulation of the City of London, Blythe Masters, Boris Johnson, Bretton Woods, business cycle, capital controls, carbon footprint, Carmen Reinhart, Cass Sunstein, centre right, choice architecture, cloud computing, collective bargaining, conceptual framework, Corn Laws, corporate governance, creative destruction, credit crunch, Credit Default Swap, debt deflation, decarbonisation, Deng Xiaoping, discovery of DNA, discovery of the americas, discrete time, disinformation, diversification, double helix, Edward Glaeser, financial deregulation, financial innovation, financial intermediation, first-past-the-post, floating exchange rates, Francis Fukuyama: the end of history, Frank Levy and Richard Murnane: The New Division of Labor, full employment, George Akerlof, Gini coefficient, global supply chain, Growth in a Time of Debt, Hyman Minsky, I think there is a world market for maybe five computers, income inequality, inflation targeting, interest rate swap, invisible hand, Isaac Newton, James Dyson, James Watt: steam engine, joint-stock company, Joseph Schumpeter, Kenneth Rogoff, knowledge economy, knowledge worker, labour market flexibility, liberal capitalism, light touch regulation, Long Term Capital Management, long term incentive plan, Louis Pasteur, low cost airline, low-wage service sector, mandelbrot fractal, margin call, market fundamentalism, Martin Wolf, mass immigration, means of production, Mikhail Gorbachev, millennium bug, Money creation, money market fund, moral hazard, moral panic, mortgage debt, Myron Scholes, Neil Kinnock, new economy, Northern Rock, offshore financial centre, open economy, Plutocrats, plutocrats, price discrimination, private sector deleveraging, purchasing power parity, quantitative easing, race to the bottom, railway mania, random walk, rent-seeking, reserve currency, Richard Thaler, Right to 
Buy, rising living standards, Robert Shiller, Ronald Reagan, Rory Sutherland, Satyajit Das, Savings and loan crisis, shareholder value, short selling, Silicon Valley, Skype, South Sea Bubble, Steve Jobs, tail risk, The Market for Lemons, the market place, The Myth of the Rational Market, the payments system, the scientific method, The Wealth of Nations by Adam Smith, too big to fail, unpaid internship, value at risk, Vilfredo Pareto, Washington Consensus, wealth creators, working poor, zero-sum game, éminence grise

There were numerous similar stories in both New York and London. Finance is also extraordinarily short term. Executives expect top pay for that year’s outcome, even though it may take years to see whether the deals they struck have truly worked or the profits are anything more than transient. Next year is another discrete time period; profits and thus bonuses are decided year by year. This approach has been cemented by the ‘mark-to-market’ accounting convention in which asset values and profits have to be recognised each year, tracking market movements. As economist John Kay puts it, the trader or dealer ‘not only eats what he kills but also takes credit for the expected cull as soon as the hunters’ guns are primed’.6 Year One’s profits may turn to dust in Year Two, but by then who cares?


pages: 651 words: 180,162

Antifragile: Things That Gain From Disorder by Nassim Nicholas Taleb

Air France Flight 447, Andrei Shleifer, banking crisis, Benoit Mandelbrot, Berlin Wall, Black Swan, business cycle, caloric restriction, Chuck Templeton: OpenTable, commoditize, creative destruction, credit crunch, Daniel Kahneman / Amos Tversky, David Ricardo: comparative advantage, discrete time, double entry bookkeeping, Emanuel Derman, epigenetics, financial independence, Flash crash, Gary Taubes, George Santayana, Gini coefficient, Henri Poincaré, high net worth, hygiene hypothesis, Ignaz Semmelweis: hand washing, informal economy, invention of the wheel, invisible hand, Isaac Newton, James Hargreaves, Jane Jacobs, joint-stock company, joint-stock limited liability company, Joseph Schumpeter, Kenneth Arrow, knowledge economy, Lao Tzu, Long Term Capital Management, loss aversion, Louis Pasteur, mandelbrot fractal, Marc Andreessen, meta-analysis, microbiome, money market fund, moral hazard, mouse model, Myron Scholes, Norbert Wiener, pattern recognition, Paul Samuelson, placebo effect, Ponzi scheme, Post-Keynesian economics, principal–agent problem, purchasing power parity, quantitative trading / quantitative finance, Ralph Nader, random walk, Ray Kurzweil, rent control, Republic of Letters, Ronald Reagan, Rory Sutherland, selection bias, Silicon Valley, six sigma, spinning jenny, statistical model, Steve Jobs, Steven Pinker, Stewart Brand, stochastic process, stochastic volatility, tail risk, Thales and the olive presses, Thales of Miletus, The Great Moderation, the new new thing, The Wealth of Nations by Adam Smith, Thomas Bayes, Thomas Malthus, too big to fail, transaction costs, urban planning, Vilfredo Pareto, Yogi Berra, Zipf's Law

Example of detection and mapping of convexity bias (ωA), from the author's doctoral thesis: The method is to find what needs dynamic hedging and dynamic revisions. Among the members of the class of instruments considered that are not options stricto sensu but require dynamic hedging, a broad class of convex instruments can be rapidly mentioned: (1) Low-coupon long-dated bonds. Assume a discrete time framework. Take B(r, T, C), the bond maturing at period T and paying a coupon C, where r_t = ∫ r_s ds. We have the convexity ∂²B/∂r² increasing with T and decreasing with C. (2) Contracts where the financing is extremely correlated with the price of the future. (3) Baskets with a geometric feature in their computation. (4) A largely neglected class of assets is the “quanto-defined” contracts (in which the payoff is not in the native currency of the contract), such as the Japanese Nikkei future, where the payoff is in U.S. currency.
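The convexity claim in (1) can be checked numerically. Below is a minimal sketch (not from the thesis): a unit-face bond priced at a flat continuously compounded rate, with price-relative convexity estimated by central finite differences; all function names and parameter values are illustrative assumptions.

```python
import math

def bond_price(r, T, C):
    """Price of a bond paying coupon C each period and face value 1 at
    maturity T, discounted at a flat continuously compounded rate r."""
    return sum(C * math.exp(-r * t) for t in range(1, T + 1)) + math.exp(-r * T)

def relative_convexity(r, T, C, h=1e-4):
    """(1/B) * d^2B/dr^2, estimated by a central finite difference."""
    b0 = bond_price(r, T, C)
    bp = bond_price(r + h, T, C)
    bm = bond_price(r - h, T, C)
    return (bp - 2 * b0 + bm) / (h * h) / b0

r = 0.05
# Convexity grows with maturity T ...
assert relative_convexity(r, 30, 0.04) > relative_convexity(r, 5, 0.04)
# ... and shrinks as the coupon C rises, since a higher coupon shifts
# weight toward earlier cash flows.
assert relative_convexity(r, 30, 0.02) > relative_convexity(r, 30, 0.08)
```

The "decreasing with C" statement is read here as applying to price-relative convexity, which for a low coupon is dominated by the T² term of the long-dated principal.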


The Concepts and Practice of Mathematical Finance by Mark S. Joshi

Black-Scholes formula, Brownian motion, correlation coefficient, Credit Default Swap, delta neutral, discrete time, Emanuel Derman, fixed income, implied volatility, incomplete markets, interest rate derivative, interest rate swap, London Interbank Offered Rate, martingale, millennium bug, quantitative trading / quantitative finance, risk free rate, short selling, stochastic process, stochastic volatility, the market place, time value of money, transaction costs, value at risk, volatility smile, yield curve, zero-coupon bond

We then show how the Black-Scholes equation can be reduced to the heat equation. This yields a derivation of the Black-Scholes formula. In Chapter 6, we step up another mathematical gear; this is the most mathematically demanding chapter. We introduce the concept of a martingale in both continuous and discrete time, and use martingales to examine the concept of risk-neutral pricing. We commence by showing that option prices determine synthetic probabilities in the context of a single time horizon model. We then move on to study discrete pricing in martingale terms. Having motivated the definitions using the discrete case, we move on to the continuous case, and show how martingales can be used to develop arbitrage-free prices in the continuous framework.
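The single-horizon idea of option prices determining synthetic probabilities can be sketched in the simplest one-period, two-state model. This is an illustrative sketch with made-up numbers, not the book's code: the stock moves from S0 to either Su or Sd, and the unique "risk-neutral" probability q makes the discounted stock price a martingale.

```python
# One-period binomial model; S0, Su, Sd, r are illustrative values.
S0, Su, Sd, r = 100.0, 120.0, 90.0, 0.05

# Synthetic (risk-neutral) up-probability: the unique q in (0, 1) with
# (q*Su + (1-q)*Sd) / (1+r) == S0, i.e. the discounted stock is a martingale.
q = ((1 + r) * S0 - Sd) / (Su - Sd)

assert abs((q * Su + (1 - q) * Sd) / (1 + r) - S0) < 1e-12

def call_price(K):
    # Arbitrage-free price = discounted risk-neutral expectation of payoff.
    return (q * max(Su - K, 0.0) + (1 - q) * max(Sd - K, 0.0)) / (1 + r)
```

With these numbers q works out to 0.5, so an at-the-money call (K = 100) prices to 10/1.05; no real-world probability of the up-move enters the calculation.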


pages: 637 words: 199,158

The Tragedy of Great Power Politics by John J. Mearsheimer

active measures, Berlin Wall, Bretton Woods, British Empire, colonial rule, continuation of politics by other means, deindustrialization, discrete time, disinformation, Dissolution of the Soviet Union, Francis Fukuyama: the end of history, illegal immigration, long peace, Mikhail Gorbachev, Monroe Doctrine, mutually assured destruction, oil shock, Pareto efficiency, RAND corporation, Ronald Reagan, Simon Kuznets, South China Sea, The Wealth of Nations by Adam Smith, Thomas L Friedman, Yom Kippur War

First, I examine the foreign policy behavior of the five dominant great powers of the past 150 years: Japan from the time of the Meiji Restoration in 1868 until the country’s defeat in World War II; Germany from the coming to power of Otto von Bismarck in 1862 until Adolf Hitler’s final defeat in 1945; the Soviet Union from its inception in 1917 until its collapse in 1991; Great Britain/the United Kingdom from 1792 until 1945; and the United States from 1800 to 1990.2 I choose to examine wide swaths of each state’s history rather than more discrete time periods because doing so helps show that particular acts of aggression were not instances of aberrant behavior caused by domestic politics, but, as offensive realism would predict, part of a broader pattern of aggressive behavior. Japan, Germany, and the Soviet Union are straightforward cases that provide strong support for my theory.


The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling by Ralph Kimball, Margy Ross

active measures, Albert Einstein, business intelligence, business process, call centre, cloud computing, data acquisition, discrete time, inventory management, iterative process, job automation, knowledge worker, performance metric, platform as a service, side project, zero-sum game

Because the date dimension is likely the most frequently constrained dimension in a schema, it should be kept as small and manageable as possible. If you want to filter or roll up time periods based on summarized day part groupings, such as activity during 15-minute intervals, hours, shifts, lunch hour, or prime time, time-of-day should be treated as a full-fledged dimension table with one row per discrete time period, such as one row per minute within a 24-hour period, resulting in a dimension with 1,440 rows. If there's no need to roll up or filter on time-of-day groupings, time-of-day should be handled as a simple date/time fact in the fact table. By the way, business users are often more interested in time lags, such as the transaction's duration, than in discrete start and stop times.
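The 1,440-row time-of-day dimension described above can be generated mechanically. A minimal Python sketch; the day-part boundaries here are hypothetical placeholders for whatever groupings the business actually defines.

```python
from datetime import time

def day_part(hour):
    # Illustrative groupings only; real shifts and day parts are
    # business-defined attributes, not fixed rules.
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 13:
        return "lunch hour"
    if 13 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 22:
        return "prime time"
    return "overnight"

time_of_day_dim = [
    {
        "time_key": h * 60 + m,              # surrogate key, one per minute
        "time_of_day": time(h, m),
        "hour": h,
        "quarter_hour": (h * 60 + m) // 15,  # 15-minute interval bucket
        "day_part": day_part(h),
    }
    for h in range(24)
    for m in range(60)
]

assert len(time_of_day_dim) == 1440  # one row per minute of a 24-hour day
```

Each descriptive column (hour, quarter-hour, day part) is precomputed in the dimension so queries can filter or roll up without date arithmetic in the fact table.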


Data Mining: Concepts and Techniques: Concepts and Techniques by Jiawei Han, Micheline Kamber, Jian Pei

backpropagation, bioinformatics, business intelligence, business process, Claude Shannon: information theory, cloud computing, computer vision, correlation coefficient, cyber-physical system, database schema, discrete time, disinformation, distributed generation, finite state, information retrieval, iterative process, knowledge worker, linked data, natural language processing, Netflix Prize, Occam's razor, pattern recognition, performance metric, phenotype, random walk, recommendation engine, RFID, semantic web, sentiment analysis, speech recognition, statistical model, stochastic process, supply-chain management, text mining, thinkpad, Thomas Bayes, web application

Pattern analysis is useful in the analysis of spatiotemporal data, time-series data, image data, video data, and multimedia data. An area of spatiotemporal data analysis is the discovery of colocation patterns. These, for example, can help determine if a certain disease is geographically colocated with certain objects like a well, a hospital, or a river. In time-series data analysis, researchers have discretized time-series values into multiple intervals (or levels) so that tiny fluctuations and value differences can be ignored. The data can then be summarized into sequential patterns, which can be indexed to facilitate similarity search or comparative analysis. In image analysis and pattern recognition, researchers have also identified frequently occurring visual fragments as “visual words,” which can be used for effective clustering, classification, and comparative analysis.
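The time-series discretization described above can be sketched as simple equal-width binning: each value maps to one of a few levels, so tiny fluctuations within a level disappear and the series becomes a symbol sequence ready for sequential-pattern mining. The function and sample values below are illustrative, not the book's code.

```python
def discretize(series, n_levels):
    """Map each value to one of n_levels equal-width intervals, so small
    fluctuations within an interval are ignored."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_levels or 1.0  # guard against a constant series
    return [min(int((x - lo) / width), n_levels - 1) for x in series]

prices = [10.0, 10.2, 10.1, 14.8, 15.0, 14.9, 19.7, 20.0]
levels = discretize(prices, 3)
# Nearby values collapse to the same symbol: [0, 0, 0, 1, 1, 1, 2, 2]
```

Other schemes (equal-frequency bins, SAX-style symbolic approximation) differ only in how the interval boundaries are chosen.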


The Art of Computer Programming: Fundamental Algorithms by Donald E. Knuth

discrete time, distributed generation, Donald Knuth, fear of failure, Fermat's Last Theorem, G4S, Gerard Salton, Isaac Newton, Jacquard loom, Johannes Kepler, John von Neumann, linear programming, linked data, Menlo Park, probability theory / Blaise Pascal / Pierre de Fermat, sorting algorithm, stochastic process, Turing machine

By contrast, a "continuous simulation" would be simulation of activities that are under continuous changes, such as traffic moving on a highway, spaceships traveling to other planets, etc. Continuous simulation can often be satisfactorily approximated by discrete simulation with very small time intervals between steps; however, in such a case we usually have "synchronous" discrete simulation, in which many parts of the system are slightly altered at each discrete time interval, and such an application generally calls for a somewhat different type of program organization than the kind considered here. The program developed below simulates the elevator system in the Mathematics building of the California Institute of Technology. The results of such a simulation will perhaps be of use only to people who make reasonably frequent visits to Caltech; and even for them, it may be simpler just to try using the elevator several times instead of writing a computer program.
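The synchronous style mentioned above, where every part of the system is updated at each discrete time interval from a snapshot of the previous state, can be sketched in a few lines of Python. This is an illustration of the program organization, not Knuth's elevator simulator; all names are hypothetical.

```python
def synchronous_simulate(state, update_fns, ticks):
    """At each discrete tick, every component computes its next value from
    the *current* snapshot; all updates are then applied at once, so no
    component sees another component's same-tick change."""
    for _ in range(ticks):
        snapshot = dict(state)
        for key, fn in update_fns.items():
            state[key] = fn(snapshot)
    return state

# Toy system: two counters, each coupled to the other's previous value.
result = synchronous_simulate(
    {"a": 0, "b": 1},
    {"a": lambda s: s["b"] + 1, "b": lambda s: s["a"] + 1},
    ticks=3,
)
```

The asynchronous (discrete-event) organization discussed in the surrounding text instead keeps a queue of pending events and jumps the clock from one event time to the next.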


pages: 931 words: 79,142

Concepts, Techniques, and Models of Computer Programming by Peter Van-Roy, Seif Haridi

computer age, Debian, discrete time, Donald Knuth, Eratosthenes, fault tolerance, functional programming, G4S, general-purpose programming language, George Santayana, John von Neumann, Lao Tzu, Menlo Park, natural language processing, NP-complete, Paul Graham, premature optimization, sorting algorithm, Therac-25, Turing complete, Turing machine, type inference

At each step, each logic gate reads its input wires, calculates the result, and puts it on the output wires. The steps are cadenced by a circuit called a clock. Most current digital electronic technology is synchronous. Our simulator will be synchronous as well. How do we model signals on a wire and circuits that read these signals? In a synchronous circuit, a signal varies only in discrete time steps. So we can model a signal as a stream of 0s and 1s. A logic gate is then simply a stream object: a recursive procedure, running in its own thread, that reads input streams and calculates output streams. A clock is a recursive procedure that produces an initial stream at a fixed rate. 4.3.5.1 Combinational logic Let us first see how to build simple logic gates.
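The book builds this simulator in Oz; the same stream-object idea can be sketched with Python generators, where each gate consumes one bit from each input stream per discrete time step and yields one output bit. The gate and clock names below are illustrative.

```python
def not_gate(xs):
    # One output bit per input bit, per discrete time step.
    for x in xs:
        yield 1 - x

def and_gate(xs, ys):
    # Each tick, read one bit from each input stream and emit their AND.
    for x, y in zip(xs, ys):
        yield x & y

def clock(pattern):
    # Infinite stream repeating a fixed bit pattern at each tick.
    while True:
        yield from pattern

a = [0, 0, 1, 1]
b = [0, 1, 0, 1]
list(and_gate(a, b))  # → [0, 0, 0, 1]
```

Where the book runs each gate in its own thread reading dataflow streams, the generators here are pulled lazily by whoever consumes the output stream; the signal-as-stream-of-0s-and-1s model is the same.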