performance metric

64 results


The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling by Ralph Kimball, Margy Ross

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, business intelligence, business process, call centre, cloud computing, data acquisition, discrete time, inventory management, iterative process, job automation, knowledge worker, performance metric, platform as a service, side project, supply-chain management

Multiple study groups can be defined and derivative study groups can be created with intersections, unions, and set differences. (Chapter 8 Customer Relationship Management, p 249)

Aggregated Facts as Dimension Attributes: Business users are often interested in constraining the customer dimension based on aggregated performance metrics, such as filtering on all customers who spent over a certain dollar amount during last year or perhaps over the customer’s lifetime. Selected aggregated facts can be placed in a dimension as targets for constraining and as row labels for reporting. The metrics are often presented as banded ranges in the dimension table. Dimension attributes representing aggregated performance metrics add burden to the ETL processing, but ease the analytic burden in the BI layer. (Chapter 8 Customer Relationship Management, p 239)

Dynamic Value Bands: A dynamic value banding report is organized as a series of report row headers that define a progressive set of varying-sized ranges of a target numeric fact.
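The banded-range idea is easy to prototype outside the warehouse. Below is a minimal sketch using pandas, assuming a hypothetical lifetime_spend fact and illustrative band edges (not Kimball's):

```python
# A minimal sketch of value banding; the column names and band edges
# are illustrative assumptions, not from the book.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "lifetime_spend": [45, 180, 950, 2400, 7100, 30500],
})

# Progressive, varying-sized ranges of the target numeric fact.
edges = [0, 100, 500, 1000, 5000, 10000, float("inf")]
labels = ["<100", "100-499", "500-999", "1000-4999", "5000-9999", "10000+"]
customers["spend_band"] = pd.cut(
    customers["lifetime_spend"], bins=edges, labels=labels, right=False
)

# Each band becomes a report row header; count customers per band.
print(customers.groupby("spend_band", observed=False).size())
```

In a real dimension table, the resulting band label would be stored as an attribute, ready for constraining and row labeling in the BI layer.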

These en masse changes are prime candidates because business users often want the ability to analyze performance metrics using either the pre- or post-hierarchy reorganization for a period of time. With type 3 changes, the prior column is labeled to distinctly represent the prechanged grouping, such as 2012 department or premerger department. These column names provide clarity, but there may be unwanted ripples in the BI layer. Finally, if the type 3 attribute represents a hierarchical rollup level within the dimension, then as discussed with type 1, the type 3 update and additional column would likely cause OLAP cubes to be reprocessed. (Chapter 5, p 156)

Multiple Type 3 Attributes: If a dimension attribute changes with a predictable rhythm, sometimes the business wants to summarize performance metrics based on any of the historic attribute values.

The bulk of the document centers on the business processes; for each process, describe why business users want to analyze the process’s performance metrics, what capabilities they want, their current limitations, and potential benefits or impact. Commentary about the feasibility of tackling each process is also important. As described in Chapter 4 and illustrated in Figure 4-11, the processes are sometimes unveiled in an opportunity/stakeholder matrix to convey the impact across the organization. In this case, the rows of the opportunity matrix identify business processes, just like a bus matrix. However, in the opportunity matrix, the columns identify the organizational groups or functions. Surprisingly, this matrix is usually quite dense because many groups want access to the same core performance metrics. Prioritizing Requirements The consolidated findings document serves as the basis for presentations back to senior management and other requirements participants.

 

pages: 263 words: 75,455

Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors by Wesley R. Gray, Tobias E. Carlisle

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Andrei Shleifer, asset allocation, Atul Gawande, backtesting, Black Swan, capital asset pricing model, Checklist Manifesto, cognitive bias, compound rate of return, corporate governance, correlation coefficient, credit crunch, Daniel Kahneman / Amos Tversky, discounted cash flows, Eugene Fama: efficient market hypothesis, forensic accounting, hindsight bias, Louis Bachelier, p-value, passive investing, performance metric, quantitative hedge fund, random walk, Richard Thaler, risk-adjusted returns, Robert Shiller, shareholder value, Sharpe ratio, short selling, statistical model, systematic trading, The Myth of the Rational Market, time value of money, transaction costs

When we examine the price ratios on a factor-adjusted basis using CAPM alpha, we again find that the EBIT enterprise multiple is a top-performing metric, showing statistically and economically significant alpha of 5.23 percent for the top decile stocks. Here, the alternative EBITDA enterprise yield, earnings yield, and gross profits yield also perform well. BM and the free cash flow yield show smaller alphas than the other metrics. The EBIT enterprise multiple shines on a risk-adjusted basis using the Sharpe and Sortino ratios. The EBIT enterprise multiple shows a Sharpe ratio of 0.58; the Sharpe ratio measures risk-to-reward by comparing excess return against volatility. When we examine the metric's risk/reward using the Sortino ratio, which ignores upside volatility and measures only excess return against downside volatility, we again find the augmented enterprise multiple to be the best-performing metric, with a Sortino ratio of 0.89.
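For readers who want to reproduce these ratios, here is a minimal sketch; the return series is invented and the downside-deviation formula is one common variant, so the numbers will not match the book's 0.58 and 0.89:

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    # Mean excess return divided by the volatility of excess returns.
    excess = np.asarray(returns) - risk_free
    return excess.mean() / excess.std(ddof=1)

def sortino_ratio(returns, risk_free=0.0):
    # Mean excess return divided by downside deviation; upside moves
    # are ignored (one common definition of downside deviation).
    excess = np.asarray(returns) - risk_free
    downside_dev = np.sqrt(np.mean(np.minimum(excess, 0.0) ** 2))
    return excess.mean() / downside_dev

# Illustrative monthly returns (made up, not the book's data).
r = np.array([0.02, -0.01, 0.03, 0.015, -0.02, 0.04])
print(sharpe_ratio(r), sortino_ratio(r))
```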

Figure 1.1 sets out a brief graphical overview of the performance of the cheapest stocks according to common fundamental price ratios, such as the price-to-earnings (P/E) ratio, the price-to-book (P/B) ratio, and the EBITDA enterprise multiple (total enterprise value divided by earnings before interest, taxes, depreciation, and amortization, or TEV/EBITDA).

FIGURE 1.1 Cumulative Returns to Common Price Ratios

As Figure 1.1 illustrates, value investing according to simple fundamental price ratios has cumulatively beaten the S&P 500 over almost 50 years. Table 1.1 shows some additional performance metrics for the price ratios. The numbers illustrate that value strategies have been very successful (Chapter 7 has a detailed discussion of our investment simulation procedures).

TABLE 1.1 Long-Term Performance of Common Price Ratios (1964 to 2011)

The counterargument to the empirical outperformance of value stocks is that these stocks are inherently more risky. In this instance, risk is defined as the additional volatility of the value stocks.

To this end, we focus our quantitative metrics on long-term averages for a set of simple measures. We have chosen eight years as our “long term” for two reasons: First, eight years likely captures a boom-and-bust cycle for the typical stock, and, second, there are sufficient stocks with eight years of historical data that we can identify a sufficiently large universe of stocks.9 We analyze three long-term, high-return operating performance metrics and rank these variables against the entire universe of stocks: long-term free cash flow on assets, long-term geometric return on assets, and long-term geometric return on capital, discussed next. The first measure is long-term free cash flow on assets (CFOA), defined as the sum of eight years of free cash flow divided by total assets. The measure can be expressed more formally as follows: CFOA = Sum (Eight Years Free Cash Flow) / Total Assets We define free cash flow as net income + depreciation and amortization − changes in working capital − capital expenditures.
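The CFOA arithmetic defined above translates directly into code; a hedged sketch with invented annual figures:

```python
# Sketch of the CFOA calculation as defined in the text; the eight
# annual figures and total assets are invented for illustration.
def free_cash_flow(net_income, dep_amort, delta_wc, capex):
    # FCF = net income + D&A - changes in working capital - capex
    return net_income + dep_amort - delta_wc - capex

years = [  # (net_income, dep_amort, delta_wc, capex), one tuple per year
    (100, 20, 5, 30), (110, 22, -3, 35), (95, 21, 8, 28), (120, 25, 4, 40),
    (130, 26, 6, 38), (125, 27, -2, 42), (140, 28, 9, 45), (150, 30, 3, 50),
]
total_assets = 1500  # illustrative

# CFOA = Sum (Eight Years Free Cash Flow) / Total Assets
cfoa = sum(free_cash_flow(*y) for y in years) / total_assets
print(f"CFOA = {cfoa:.2%}")
```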

 

pages: 372 words: 67,140

Jenkins Continuous Integration Cookbook by Alan Berg

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

anti-pattern, continuous integration, Debian, en.wikipedia.org, Firefox, job automation, performance metric, revision control, web application, x509 certificate

Consider storing User-Agents and other browser headers in a text file, and then picking the values up for HTTP requests through the CSV Data Set Config element. This is useful if resources returned to your web browser, such as JavaScript or images, depend on the User-Agents. JMeter can then loop through the User-Agents, asserting that the resources exist. See also: Reporting JMeter performance metrics; Functional testing using JMeter assertions.

Reporting JMeter performance metrics
In this recipe, you will be shown how to configure Jenkins to run a JMeter test plan, and then collect and report the results. The passing of variables from an Ant script to JMeter will also be explained. Getting ready: It is assumed that you have run through the last recipe, Creating JMeter test plans. You will also need to install the Jenkins performance plugin (https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin).
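The recipe itself relies on the Jenkins performance plugin for reporting, but a quick standalone summary of a JMeter results file can be scripted. The sketch below is a hedged example, assuming JMeter's default CSV output columns (label, elapsed in ms, success) and a hypothetical results.jtl path:

```python
# Not the book's recipe; just a sanity-check script that summarizes a
# JMeter CSV results file. Column names assume JMeter's default CSV
# output with a header row.
import csv
from collections import defaultdict

def summarize(jtl_path):
    timings = defaultdict(list)
    failures = defaultdict(int)
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            timings[row["label"]].append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                failures[row["label"]] += 1
    for label, samples in timings.items():
        avg = sum(samples) / len(samples)
        print(f"{label}: {len(samples)} samples, avg {avg:.0f} ms, "
              f"{failures[label]} failures")

summarize("results.jtl")  # hypothetical path to the JMeter result log
```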

See also: Looking for "smelly" code through code coverage; Activating more PMD rulesets; Interpreting JavaNCSS.

Chapter 6. Testing Remotely
In this chapter, we will cover the following recipes: Deploying a WAR file from Jenkins to Tomcat; Creating multiple Jenkins nodes; Testing with Fitnesse; Activating Fitnesse HtmlUnit Fixtures; Running Selenium IDE tests; Triggering failsafe integration tests with Selenium Webdriver; Creating JMeter test plans; Reporting JMeter performance metrics; Functional testing using JMeter assertions; Enabling Sakai web services; Writing test plans with SoapUI; Reporting SoapUI test results.

Introduction
By the end of this chapter, you will have run performance and functional tests against web applications and web services. Two typical setup recipes are included. The first is the deployment of a WAR file through Jenkins to an application server.

This allows JMeter to fail Jenkins builds based on a range of JMeter tests. This approach is especially important when starting from an HTML mockup of a web application, whose underlying code is changing rapidly. The test plan logs in and out of your local instance of Jenkins, checking size, duration, and text found in the login response. Getting ready: We assume that you have already performed the Creating JMeter test plans and Reporting JMeter performance metrics recipes. The recipe requires the creation of a user tester1 in Jenkins. Feel free to change the username and password. Remember to delete the test user once it is no longer needed. How to do it... Create a user in Jenkins named tester1 with password testtest. Run JMeter. In the Test Plan element, change Name to LoginLogoutPlan, and add the following details for User Defined Variables: Name: USER, Value: tester1; Name: PASS, Value: testtest. Right-click on Test Plan, then select Add | Config Element | HTTP Cookie Manager.

 

pages: 597 words: 119,204

Website Optimization by Andrew B. King

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

AltaVista, bounce rate, don't be evil, en.wikipedia.org, Firefox, In Cold Blood by Truman Capote, information retrieval, iterative process, medical malpractice, Network effects, performance metric, search engine result page, second-price auction, second-price sealed-bid, semantic web, Silicon Valley, slashdot, social graph, Steve Jobs, web application

You can then retrieve revenue information about your conversions by running a report in the AdWords interface that opts to include value information for conversion columns. Tracking and Metrics You should track the success of all PPC elements through website analytics and conversion tracking. Google offers a free analytics program called Google Analytics. With it you can track multiple campaigns and get separate data for organic and paid listings. Whatever tracking program you use, you have to be careful to keep track of performance metrics correctly. The first step in optimizing a PPC campaign is to use appropriate metrics. Profitable campaigns with equally valued conversions might be optimized to: Reduce the CPC given the same (or greater) click volume and conversion rates. Increase the CTR given the same (or a greater) number of impressions and the same (or better) conversion rates. Increase conversion rates given the same (or a greater) number of clicks.
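These ratios are simple to compute; a minimal sketch with invented campaign numbers (CTR, CPC, conversion rate, and two common derived metrics):

```python
# All inputs are made-up example numbers, not from the book.
impressions = 50_000
clicks = 1_250
conversions = 50
cost = 625.00            # total spend in dollars
revenue_per_conv = 40.00

ctr = clicks / impressions                     # click-through rate
cpc = cost / clicks                            # cost per click
conv_rate = conversions / clicks               # conversion rate
cpa = cost / conversions                       # cost per acquisition
roas = conversions * revenue_per_conv / cost   # return on ad spend

print(f"CTR {ctr:.2%}, CPC ${cpc:.2f}, conv rate {conv_rate:.2%}, "
      f"CPA ${cpa:.2f}, ROAS {roas:.2f}x")
```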

ComScore, http://www.comscore.com/request/cookie_deletion_white_paper.pdf (accessed February 5, 2008). According to the study, "Approximately 31 percent of U.S. computer users clear their first-party cookies in a month." Under these conditions, a server-centric measurement would overestimate unique visitors by 150%. [166] PathLoss is a metric developed by Paul Holstein of CableOrganizer.com.

Web Performance Metrics
At first glance, measuring the speed of a web page seems straightforward. Start a timer. Load up the page. Click Stop when the web page is "ready." Write down the time. For users, however, "ready" varies across different browsers on different connection speeds (dial-up, DSL, cable, LAN) at different locations (Washington, DC, versus Mountain View, California, versus Bangalore, India) at different times of the day (peak versus off-peak times) and from different browse paths (fresh from search results or accessed from a home page).

Tip: If you have a machine dedicated to performance analysis, use about:blank as your home page.

IBM Page Detailer
IBM Page Detailer is a Windows tool that sits quietly in the background as you browse. It captures snapshots of how objects are loading on the page behind the scenes. Download it from http://www.alphaworks.ibm.com/tech/pagedetailer/download. IBM Page Detailer captures three basic performance metrics: load time, bytes, and items. These correlate to the Document Complete, kilobytes received, and number of requests metrics we are tracking. We recommend capturing three to five page loads and averaging the metrics to ensure that no anomalies in the data, such as a larger ad, skewed performance. It is important, however, to note the occurrence of such anomalies and work to mitigate them. Table 10-2 shows our averaged results.

 

pages: 502 words: 107,510

Natural Language Annotation for Machine Learning by James Pustejovsky, Amber Stubbs

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Amazon Mechanical Turk, bioinformatics, cloud computing, computer vision, crowdsourcing, easy for humans, difficult for computers, finite state, game design, information retrieval, iterative process, natural language processing, pattern recognition, performance metric, sentiment analysis, social web, speech recognition, statistical model, text mining

Again, using some of the features that are identified in Natural Language Processing with Python, we have:[2] F1: last_letter = “a”; F2: last_letter = “k”; F3: last_letter = “f”; F4: last_letter = “r”; F5: last_letter = “y”; F6: last_2_letters = “yn”. Choose a learning algorithm to infer the target function from the experience you provide it with. We will start with the decision tree method. Evaluate the results according to the performance metric you have chosen. We will use accuracy over the resultant classifications as a performance metric. But, now, where do we start? That is, which feature do we use to start building our tree? When using a decision tree to partition your data, this is one of the most difficult questions to answer. Fortunately, there is a very nice way to assess the impact of choosing one feature over another. It is called information gain and is based on the notion of entropy from information theory.
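A small sketch of entropy and information gain, with a toy label set and a hypothetical partition (not the book's data):

```python
# Entropy and information gain for choosing a decision-tree split.
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(labels, groups):
    # groups: the label lists produced by splitting on one feature
    total = len(labels)
    remainder = sum(len(g) / total * entropy(g) for g in groups)
    return entropy(labels) - remainder

# Toy gender-classification labels, split on a hypothetical feature
# such as F5 (last_letter == "y"); this split happens to be perfect.
labels = ["f", "f", "m", "m", "m", "f"]
split = [["m", "m", "m"], ["f", "f", "f"]]
print(information_gain(labels, split))  # 1.0 bit for a perfect split
```

The feature whose split yields the highest information gain is the one used at the root of the tree.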

Choose how to represent the target function. We will assume that the target function is represented as the MAP of the Bayesian classifier over the features. Choose a learning algorithm to infer the target function from the experience you provide it with. This is tied to the way we chose to represent the function, namely: c* = argmax_c P(c) ∏i P(fi | c). Evaluate the results according to the performance metric you have chosen. We will use accuracy over the resultant classifications as a performance metric. Sentiment classification: Now let’s look at some classification tasks where different feature sets resulting from richer annotation have proved to be helpful for improving results. We begin with sentiment or opinion classification of texts. This is really two classification tasks: first, distinguishing fact from opinion in language; and second, if a text is an opinion, determining the sentiment conveyed by the opinion holder, and what object it is directed toward.

We will learn when to use each of these classes, as well as which algorithms are most appropriate for each feature type. In particular, we will answer the following question: when does annotation actually help in a learning algorithm? Defining Our Learning Task To develop an algorithm, we need to have a precise representation of what we are trying to learn. We’ll start with Tom Mitchell’s [1] definition of a learning task: Learning involves improving on a task, T, with respect to a performance metric, P, based on experience, E. Given this statement of the problem (inspired by Simon’s concise phrasing shown earlier), Mitchell then discusses the five steps involved in the design of a learning system. Consider what the role of a specification and the associated annotated data will be for each of the following steps for designing a learning system: Choose the “training experience.” For our purposes, this is the corpus that you just built.

 

pages: 354 words: 26,550

High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems by Irene Aldridge

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

algorithmic trading, asset allocation, asset-backed security, automated trading system, backtesting, Black Swan, Brownian motion, business process, capital asset pricing model, centralized clearinghouse, collapse of Lehman Brothers, collateralized debt obligation, collective bargaining, diversification, equity premium, fault tolerance, financial intermediation, fixed income, high net worth, implied volatility, index arbitrage, interest rate swap, inventory management, law of one price, Long Term Capital Management, Louis Bachelier, margin call, market friction, market microstructure, martingale, New Journalism, p-value, paper trading, performance metric, profit motive, purchasing power parity, quantitative trading / quantitative finance, random walk, Renaissance Technologies, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, short selling, Small Order Execution System, statistical arbitrage, statistical model, stochastic process, stochastic volatility, systematic trading, trade route, transaction costs, value at risk, yield curve

Kurtosis indicates whether the tails of the distribution are normal; high kurtosis signifies “fat tails,” a higher than normal probability of extreme positive or negative events.

COMPARATIVE RATIOS
While average return, standard deviation, and maximum drawdown present a picture of the performance of a particular trading strategy, these measures do not lend themselves to an easy point comparison among two or more strategies. Several comparative performance metrics have been developed in an attempt to summarize mean, variance, and tail risk in a single number that can be used to compare different trading strategies. Table 5.1 summarizes the most popular point measures. The first generation of point performance measures were developed in the 1960s and include the Sharpe ratio, Jensen’s alpha, and the Treynor ratio. The Sharpe ratio is probably the most widely used measure in comparative performance evaluation; it incorporates three desirable metrics—average return, standard deviation, and the cost of capital.

A companion measure to VaR, the conditional VaR (CVaR), also known as expected loss (EL), measures the average value of return within the cut-off tail. Of course, the original VaR assumes normal distributions of returns, whereas returns are known to be fat-tailed. To address this issue, a modified VaR (MVaR) measure was proposed by Gregoriou and Gueyie (2003), which takes into account deviations from normality. Gregoriou and Gueyie (2003) also suggest using MVaR in place of standard deviation in Sharpe ratio calculations. How do these performance metrics stack up against each other? It turns out that all metrics deliver comparable rankings of trading strategies. Eling and Schuhmacher (2007) compare hedge fund rankings across the 13 measures listed and conclude that the Sharpe ratio is an adequate measure of hedge fund performance.

PERFORMANCE ATTRIBUTION
Performance attribution analysis, often referred to as “benchmarking,” goes back to the arbitrage pricing theory of Ross (1977) and has been applied to trading strategy performance by Sharpe (1992) and Fung and Hsieh (1997), among others.
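A hedged sketch of historical VaR and CVaR on simulated returns (the MVaR normality adjustment is omitted):

```python
# Historical VaR/CVaR; the return series is simulated, not the book's.
import numpy as np

def var_cvar(returns, confidence=0.95):
    # Historical VaR: loss threshold at the chosen confidence level.
    # CVaR: the average return within the cut-off tail beyond VaR,
    # reported here as a positive loss number.
    returns = np.sort(np.asarray(returns))
    cutoff = int((1 - confidence) * len(returns))
    var = -returns[cutoff]
    cvar = -returns[:cutoff].mean() if cutoff > 0 else var
    return var, cvar

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, size=1000)  # simulated daily returns
var, cvar = var_cvar(r, 0.95)
print(f"95% VaR {var:.4f}, 95% CVaR {cvar:.4f}")
```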

Methods for forecast comparisons include:
• Mean squared error (MSE)
• Mean absolute deviation (MAD)
• Mean absolute percentage error (MAPE)
• Distributional performance
• Cumulative accuracy profiling

If the value of a financial security is forecasted to be x_F,t at some future time t and the realized value of the same security at time t is x_R,t, the forecast error for the given forecast, ε_F,t, is computed as follows:

ε_F,t = x_F,t − x_R,t (15.2)

The mean squared error (MSE) is then computed as the average of squared forecast errors over T estimation periods, analogously to volatility computation:

MSE = (1/T) Σ_{τ=1..T} (ε_F,τ)² (15.3)

The mean absolute deviation (MAD) and the mean absolute percentage error (MAPE) also summarize properties of forecast errors:

MAD = (1/T) Σ_{τ=1..T} |ε_F,τ| (15.4)

MAPE = (1/T) Σ_{τ=1..T} |ε_F,τ / x_R,τ| (15.5)

Naturally, the lower each of the three metrics (MSE, MAD, and MAPE), the better the forecasting performance of the trading system. The distributional evaluation of forecast performance also examines forecast errors ε_F,t normalized by the realized value, x_R,t. Unlike the MSE, MAD, and MAPE metrics, however, the distributional performance metric seeks to establish whether the forecast errors are random. If the errors are indeed random, there exists no consistent bias in either direction of price movement, and the distribution of normalized errors ε_F,t / x_R,t should fall on the uniform [0, 1] distribution. If the errors are nonrandom, the forecast can be improved. One test that can be used to determine whether the errors are random is a comparison of errors with the uniform distribution using the Kolmogorov-Smirnov statistic.
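The three error metrics and the uniformity check translate directly into code. A sketch with invented forecasts, assuming SciPy's kstest for the Kolmogorov-Smirnov comparison:

```python
# Forecast-error metrics (15.2)-(15.5) plus the distributional check;
# the forecast/realized values are invented for illustration.
import numpy as np
from scipy.stats import kstest

forecast = np.array([101.0, 102.5, 99.8, 100.7, 103.2])
realized = np.array([100.5, 102.0, 100.2, 101.0, 102.8])

errors = forecast - realized                 # eps_F,t = x_F,t - x_R,t
mse = np.mean(errors ** 2)                   # (15.3)
mad = np.mean(np.abs(errors))                # (15.4)
mape = np.mean(np.abs(errors / realized))    # (15.5)
print(mse, mad, mape)

# Distributional check, following the text's prescription: compare the
# normalized errors against the uniform [0, 1] distribution.
normalized = errors / realized
stat, pvalue = kstest(normalized, "uniform")
print(stat, pvalue)
```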

 

pages: 351 words: 123,876

Beautiful Testing: Leading Professionals Reveal How They Improve Software (Theory in Practice) by Adam Goucher, Tim Riley

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, barriers to entry, Black Swan, call centre, continuous integration, Debian, en.wikipedia.org, Firefox, Grace Hopper, index card, Isaac Newton, natural language processing, p-value, performance metric, revision control, six sigma, software as a service, software patent, the scientific method, Therac-25, Valgrind, web application

The performance test cases, however, were renamed “Performance Testing Checkpoints” and included the following (abbreviated here):
• Collect baseline system performance metrics and verify that each functional task included in the system usage model achieves performance requirements under a user load of 1 for each performance testing build in which the functional task has been implemented. — [Functional tasks listed, one per line]
• Collect system performance metrics and verify that each functional task included in the system usage model achieves performance requirements under a user load of 10 for each performance testing build in which the functional task has been implemented. — [Functional tasks listed, one per line]
• Collect system performance metrics and verify that the system usage model achieves performance requirements under the following loads to the degree that the usage model has been implemented in each performance testing build. — [Increasing loads from 100 users to 3,000 users, listed one per line]
• Collect system performance metrics and verify that the system usage model achieves performance requirements for the duration of a 9-hour, 1,000-user stress test on performance testing builds that the lead developer, performance tester, and project manager deem appropriate.

Clearly frustrated, but calm, Harold told me that he’d been asked to establish the performance requirements that were going to appear in our contract to the client. Now understanding the intent, I suggested that Harold schedule a conference room for a few hours for us to discuss his task further. He agreed. As it turned out, it took more than one meeting for Harold to explain to me the client’s expectations, the story behind his task, and for me to explain to Harold why we didn’t want to be contractually obligated to performance metrics that were inherently ambiguous, what those ambiguities were, and what we could realistically measure that would be valuable. Finally, Harold and I took what were now several sheets of paper with the following bullets to Sandra, our project manager, to review: “System Performance Testing Requirements:
• Performance testing will be conducted under a variety of loads and usage models, to be determined when system features and workflows are established

. — [Functional tasks listed, one per line]
• Collect system performance metrics and verify that the system usage model achieves performance requirements under the following loads to the degree that the usage model has been implemented in each performance testing build. — [Increasing loads from 100 users to 3,000 users, listed one per line]
• Collect system performance metrics and verify that the system usage model achieves performance requirements for the duration of a 9-hour, 1,000-user stress test on performance testing builds that the lead developer, performance tester, and project manager deem appropriate.

The beauty here was that what we created was clear, easy to build a strategy around, and mapped directly to information that the client eventually requested in the final report. An added bonus was that from that point forward in the project, whenever someone challenged our approach to performance testing, one or more of the folks who were involved in the creation of the checkpoints always came to my defense—frequently before I even found out about the challenge!

 

pages: 304 words: 80,965

What They Do With Your Money: How the Financial System Fails Us, and How to Fix It by Stephen Davis, Jon Lukomnik, David Pitt-Watson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Admiral Zheng, banking crisis, Basel III, Bernie Madoff, Black Swan, centralized clearinghouse, clean water, corporate governance, correlation does not imply causation, credit crunch, Credit Default Swap, crowdsourcing, David Brooks, Dissolution of the Soviet Union, diversification, diversified portfolio, en.wikipedia.org, financial innovation, financial intermediation, Flash crash, income inequality, index fund, invisible hand, London Whale, Long Term Capital Management, moral hazard, Northern Rock, passive investing, performance metric, Ponzi scheme, principal–agent problem, rent-seeking, Ronald Coase, shareholder value, Silicon Valley, South Sea Bubble, sovereign wealth fund, statistical model, Steve Jobs, the market place, The Wealth of Nations by Adam Smith, transaction costs, Upton Sinclair, value at risk, WikiLeaks

Elson commented, “Even the best corporate boards will fail to address executive compensation concerns unless they tackle the structural bias created by external peer group benchmarking metrics. … Boards should measure performance and determine compensation by focusing on internal metrics. For example, if customer satisfaction is deemed important to the company, then results of customer surveys should play into the compensation equation. Other internal performance metrics can include revenue growth, cash flow, and other measures of return.”58 In other words, boards should focus, as owners do, on what makes the business flourish. USE THE RIGHT METRICS As discussed earlier, 90 percent of large American companies measure the performance of their executive teams over a three-year period or less. About a quarter don’t have any long-term performance–based awards at all.59 Fewer than 25 percent incorporate the cost of capital into their executive compensation formulas, and only 13 percent consider innovation—such as new products, markets, or services, research and development, or intellectual property development—in determining compensation.60 You couldn’t design an incentive scheme better suited to keeping a CEO focused strictly on the short term if you tried.

., 254n2 BrightScope, 122 Brokers, fiduciary duty and, 256n23 Brooks, David, 167 Buffett, Warren, 45, 63, 64, 80, 150, 221 Business judgment rule, 78–79 Business school curriculum, 190–92 Buy and Hold Is Dead (Again) (Solow), 65 Buy and Hold Is Dead (Kee), 65 Buycott, 118 Cadbury, Adrian, 227 Call option, 93 CalPERS, 91, 110, 111–12, 208, 221, 241n37 CalSTRS, 208 Canada, pension funds in, 59, 111, 209 Capital Aberto (magazine), 117 Capital gains, taxation of, 92 Capital Institute, 59, 87 Capital losses, 92 Capitalism: agency, 33, 74–80 defined, 243n2 Eastern European countries’ transition to, 167 financial system and, 9 injecting ownership back into, 83–93 private ownership and, 62 reforming, 11–12 Carbon Disclosure Project, 89 Career paths, new economic thinking and, 189–90 CDC. See Collective pension plans CDFIs. See Community Development Financial Institutions (CDFIs) CDSs. See Credit default swaps (CDSs) CEM Benchmarking, 54 Central banks, 20, 213 Centre for Policy Studies, 105 CEOs: performance metrics, 68, 86–87 short-term mindset among, 67–68. See also Executive compensation Ceres, 120 CFA Institute, 121 Chabris, Christopher, 174 Charles Schwab, 29, 31 Cheating, regulations and, 144–45 Chinese Academy of Social Sciences, 167 Citadel, 29 Citicorp, 76 Citizen investors/savers, 19 charter for, 227–31 communication between funds and, 110–11 dependence on others to manage money, 5–6, 19, 20 goals of, 48, 49 government regulation to safeguard, 107–9 lack of accountability to, 5–7, 96, 99–106 technology and, 90–92 trading platforms that protect, 88–89 City Bank of Glasgow, 257n34 Civil society organizations (CSOs), 153 corporate accountability and, 119–23 scrutiny of funds by, 224 “Civil Stewardship Leagues,” 122 Clark, Gordon L., 101, 106 Classical economics, 159–61 Clegg, Nick, 9 Clinton, Bill, 68–69 Clinton, Hillary Rodham, 119 Coase, Ronald, 169–70, 243n2, 261n31 Cohen, Lauren, 102 Coles Myer, 82 Collective Defined Contribution (CDC), 266n28 Collective pension plans, 263n1, 266n28 duration of liabilities, 264n3 in Netherlands, 197, 199, 209, 264n6.

See also Retirement savings Pension Trustee Code of Conduct, 121 Pension trustees, 105–6, 108–9, 137–38, 140, 205, 207, 224–25, 229 People’s Pension, 202–11 cost of, 217 enrollment into, 208–9 feedback mechanisms, 207 fees, 204 governance and, 202–3, 205–6 investment interests of beneficiaries, 206–7 models for, 266n28 reform of financial institutions and, 226 transparency and, 203–4, 207–8 Performance: asset managers and, 48–50 defined, 149 encouraging through collective action, 57–58 executive compensation and, 68, 148–49 fees, 239n16 governance and, 100–104 institutional investors and incentives for, 112–13 investment management, 35–38 Performance metrics for executives, 68, 86–87 Perry Capital, 81 PFZW. See Stichting Pensioenfonds Zorg en Welzijn (PFZW) PGGM, 77, 111 Philippon, Thomas, 26–28, 220 Philosophy, Politics and Economics (PPE), 190 Pitman, Brian, 213 Pitt, William, 158 Pitt-Watson, David, 263n1, 264n4, 264–65n11, 266n28 Plender, John, 259n5 Political economy, 142, 152 Political institutions, 183–84 Portfolio management: ownership and, 246n36 pension fund, 208–9 PPE (Philosophy, Politics and Economics), 190 Premium, 22 Price of goods, 160 Principles for Responsible Investment.

 

pages: 353 words: 88,376

The Investopedia Guide to Wall Speak: The Terms You Need to Know to Talk Like Cramer, Think Like Soros, and Buy Like Buffett by Jack (edited By) Guinan

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, asset allocation, asset-backed security, Brownian motion, business process, capital asset pricing model, clean water, collateralized debt obligation, correlation coefficient, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, discounted cash flows, diversification, diversified portfolio, dividend-yielding stocks, equity premium, fixed income, implied volatility, index fund, interest rate swap, inventory management, London Interbank Offered Rate, margin call, market fundamentalism, mortgage debt, passive investing, performance metric, risk tolerance, risk-adjusted returns, risk/return, shareholder value, Sharpe ratio, short selling, statistical model, time value of money, transaction costs, yield curve, zero-coupon bond

Related Terms: • American Depositary Receipt—ADR • Correlation • Exchange-Traded Fund—ETF • Global Depositary Receipt—GDR • Index

Multiple
What Does Multiple Mean? A term that measures a particular aspect of a company’s financial well-being, determined by dividing one metric by another metric. The metric in the numerator is typically larger than the one in the denominator, because the top metric usually is supposed to be many times larger than the bottom metric. It is calculated as follows:

Multiple = Performance Metric “A” / Performance Metric “B”

Investopedia explains Multiple: As an example, the term “multiple” can be used to show how much investors are willing to pay per dollar of earnings, as computed by the P/E ratio. Suppose one is analyzing a stock with $2 of earnings per share (EPS) that is trading at $20; this stock has a P/E of 10. This means that investors are willing to pay a multiple of 10 times earnings for the stock.

For example, a European investor purchasing shares of an American company on a foreign exchange (using American dollars to do so) would be exposed to exchange-rate risk while holding that stock. To hedge that risk, the investor could purchase currency futures to lock in a specified exchange rate for the future stock sale and conversion back into the foreign currency. Related Terms: • Credit Derivative • Hedge • Stock Option • Forward Contract • Option

Diluted Earnings per Share (Diluted EPS)
What Does Diluted Earnings per Share (Diluted EPS) Mean? A performance metric used to gauge the quality of a company’s earnings per share (EPS) if all convertible securities were exercised. Convertible securities refer to all outstanding convertible preferred shares, convertible debentures, stock options (primarily employee-based), and warrants. Unless the company has no additional potential shares outstanding (a relatively rare circumstance), the diluted EPS will always be lower than the simple EPS.

 

pages: 297 words: 91,141

Market Sense and Nonsense by Jack D. Schwager

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset allocation, Bernie Madoff, Brownian motion, collateralized debt obligation, commodity trading advisor, conceptual framework, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, Daniel Kahneman / Amos Tversky, diversification, diversified portfolio, fixed income, high net worth, implied volatility, index arbitrage, index fund, London Interbank Offered Rate, Long Term Capital Management, margin call, market bubble, market fundamentalism, merger arbitrage, pattern recognition, performance metric, pets.com, Ponzi scheme, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Robert Shiller, Sharpe ratio, short selling, statistical arbitrage, statistical model, transaction costs, two-sided market, value at risk, yield curve

But the story does not end there. Figure 3.11 NAV Comparison: Three-Period Prior Best S&P Sector versus Prior Worst and Average Data source: S&P Dow Jones Indices. So far, the analysis has only considered returns and has shown that choosing the best past sector would have yielded slightly lower returns than an equal-allocation approach (that is, the average). Return, however, is an incomplete performance metric. Any meaningful performance comparison must also consider risk (a concept we will elaborate on in Chapter 4). We use two measures of risk here: 1. Standard deviation. The standard deviation is a volatility measure that indicates how spread out the data is—in this case, how broadly the returns vary. Roughly speaking, we would expect approximately 95 percent of the data points to fall within two standard deviations of the mean.

Based on performance, it would be difficult to justify choosing Manager E over Manager F, even for the most risk-tolerant investor. Figure 8.12 2DUC: Manager E versus Manager F Investment Misconceptions Investment Misconception 23: The average annual return is probably the single most important performance statistic. Reality: Return alone is a meaningless statistic because return can always be increased by increasing risk. The return/risk ratio should be the primary performance metric. Investment Misconception 24: For a risk-seeking investor considering two investment alternatives, an investment with expected lower return/risk but higher return may often be preferable to an equivalent-quality investment with the reverse characteristics. Reality: The higher return/risk alternative would still be preferable, even for risk-seeking investors, because by using leverage it can be translated into an equivalent return with lower risk (or higher return with equal risk).
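A numeric illustration of the leverage argument, with invented figures and financing costs ignored:

```python
# Strategy A: 10% return at 5% volatility (return/risk = 2.0).
# Strategy B: 20% return at 20% volatility (return/risk = 1.0).
def levered(ret, vol, leverage, financing_rate=0.0):
    # Leverage scales excess return and volatility proportionally;
    # financing cost is set to zero here for simplicity.
    return (leverage * (ret - financing_rate) + financing_rate,
            leverage * vol)

ret_a, vol_a = levered(0.10, 0.05, leverage=2.0)
print(ret_a, vol_a)  # 20% return at only 10% volatility: dominates B
```

Levering the higher return/risk strategy matches the riskier strategy's return while taking half its volatility, which is the point of Misconception 24 above.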

However, pro forma results that only adjust for differences between current and past fees and commissions can be more representative than actual results. It is critical to differentiate between these two radically different applications of the same term: pro forma. 16. Return alone is a meaningless statistic because return can be increased by increasing risk. The return/risk ratio should be the primary performance metric. 17. Although the Sharpe ratio is by far the most widely used return/risk measure, return/risk measures based on downside risk come much closer to reflecting risk as it is perceived by most investors. 18. Conventional arithmetic-scale net asset value (NAV) charts provide a distorted picture, especially for longer-term track records that traverse a wide range of NAV levels. A log scale should be used for long-term NAV charts. 19.

 

pages: 49 words: 12,968

Industrial Internet by Jon Bruner

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

autonomous vehicles, barriers to entry, computer vision, data acquisition, demand response, en.wikipedia.org, factory automation, Google X / Alphabet X, industrial robot, Internet of things, job automation, loose coupling, natural language processing, performance metric, Silicon Valley, slashdot, smart grid, smart meter, statistical model, web application

Newer wind turbines use software that acts in real-time to squeeze a little more current out of each revolution, pitching the blades slightly as they rotate to compensate for the fact that gravity shortens them as they approach the top of their spin and lengthens them as they reach the bottom. Power producers use higher-level data analysis to inform longer-range capital strategies. The 150-foot-long blades on a wind turbine, for instance, chop at the air as they move through it, sending turbulence to the next row of turbines and reducing efficiency. By analyzing performance metrics from existing wind installations, planners can recommend new layouts that take into account common wind patterns and minimize interference. Automotive Google captured the public imagination when, in 2010, it announced that its autonomous cars had already driven 140,000 miles of winding California roads without incident. The idea of a car that drives itself was finally realized in a practical way by software that has strong links to the physical world around it: inbound, through computer vision software that takes in images and rangefinder data and builds an accurate model of the environment around the car; and outbound, through a full linkage to the car’s controls.

 

pages: 291 words: 77,596

Total Recall: How the E-Memory Revolution Will Change Everything by C. Gordon Bell, Jim Gemmell

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

airport security, Albert Einstein, book scanning, cloud computing, conceptual framework, full text search, information retrieval, invention of writing, inventory management, Isaac Newton, Menlo Park, optical character recognition, pattern recognition, performance metric, RAND corporation, RFID, semantic web, Silicon Valley, Skype, social web, statistical model, Stephen Hawking, Steve Ballmer, Ted Nelson, telepresence, Turing test, Vannevar Bush, web application

“Recognizing soldier activities in the field.” Proceedings of International IEEE Workshop on Wearable and Implantable Body Sensor Networks (BSN), Aachen, Germany, March 2007. Schlenoff, Craig, et al. “Overview of the First Advanced Technology Evaluations for ASSIST.” Proceedings of Performance Metrics for Intelligent Systems (PerMIS) 2006, IEEE Press, Gaithersburg, Maryland, August 2006. Stevers, Michelle Potts. “Utility Assessments of Soldier-Worn Sensor Systems for ASSIST.” Proceedings of the Performance Metrics for Intelligent Systems Workshop, 2006. Starner, Thad. “The Virtual Patrol: Capturing and Accessing Information for the Soldier in the Field.” Proceedings of the 3rd ACM Workshop on Continuous Archival and Retrieval of Personal Experiences, Santa Barbara, California, 2006. Glass Box: Cowley, Paula, Jereme Haack, Rik Littlefield, and Ernest Hampson.

 

pages: 231 words: 71,248

Shipping Greatness by Chris Vander Mey

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

don't be evil, en.wikipedia.org, fudge factor, Google Chrome, Google Hangouts, Gordon Gekko, Jeff Bezos, Kickstarter, Lean Startup, minimum viable product, performance metric, recommendation engine, Skype, slashdot, sorting algorithm, Steve Jobs, Superbowl ad, web application

Some rocket surgeon a while back came up with the notion that goals should be specific, measurable, attainable, reasonable, and time-based. This is a good, but not sufficiently specific, framework. I prefer the Great Delta Convention (described in Chapter 10). If you apply the Great Delta Convention to your goals, nobody will question them—they will almost be S.M.A.R.T. by definition (lacking only the “reasonable” part). Business Performance Business performance metrics tell you where your problems are and how you can improve your user’s experience. These metrics are frequently measured as ratios, such as conversion from when a user clicks the Buy button to when the checkout process is complete. Like goal metrics, it’s critical to measure the right aspects of your business. For example, if you want to build a great social product, you don’t need to measure friends—different segments of users have different numbers of friends.

Google Analytics provides A/B comparison tools that are incredibly powerful, but they’re just one kind of many tools you can use. Most major websites have testing frameworks that they use to roll out features incrementally and ensure that a new feature or experience has the intended effect. If it’s even remotely possible, try to build an experimentation framework in from the beginning (see Chapter 7’s discussion of launching for other benefits of experiments). Systems Performance Systems performance metrics measure the health of your product in real time. Metrics like these include 99.9% mean latency, total requests per second, simultaneous users, orders per second, and other time-based metrics. When these metrics go down substantially, something has gone wrong. A pager should go off. If you’re a very fancy person, you’ll want to look at your metrics through the lens of statistical process control (SPC).
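A minimal SPC-style sketch for a latency metric, flagging samples outside mean ± 3 sigma control limits (data and thresholds invented; the alerting action is just a print here):

```python
# Statistical-process-control check for a systems performance metric.
import statistics

def control_limits(history):
    mean = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return mean - 3 * sigma, mean + 3 * sigma

history = [120, 118, 125, 122, 119, 121, 124, 117, 123, 120]  # latency, ms
low, high = control_limits(history)

for sample in [121, 126, 188]:  # incoming latency samples (ms)
    if not (low <= sample <= high):
        print(f"ALERT: latency {sample} ms outside [{low:.1f}, {high:.1f}]")
```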

 

pages: 98 words: 25,753

Ethics of Big Data: Balancing Risk and Innovation by Kord Davis, Doug Patterson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

4chan, business process, corporate social responsibility, crowdsourcing, en.wikipedia.org, Mahatma Gandhi, Mark Zuckerberg, Netflix Prize, Occupy movement, performance metric, side project, smart grid, urban planning

The volume at which new data is being generated is staggering. We live in an age when the amount of data we expect to be generated in the world is measured in exabytes and zettabytes. By 2025, the forecast is that the Internet will exceed the brain capacity of everyone living on the entire planet. Additionally, the variety of sources and data types being generated expands as fast as new technology can be created. Performance metrics from in-car monitors, manufacturing floor yield measurements, all manner of healthcare devices, and the growing number of Smart Grid energy appliances all generate data. More importantly, they generate data at a rapid pace. The velocity of data generation, acquisition, processing, and output increases exponentially as the number of sources and increasingly wider variety of formats grows over time.

 

pages: 561 words: 114,843

Startup CEO: A Field Guide to Scaling Up Your Business, + Website by Matt Blumberg

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

airport security, Albert Einstein, bank run, Broken windows theory, crowdsourcing, deskilling, fear of failure, high batting average, high net worth, hiring and firing, Inbox Zero, James Hargreaves, Jeff Bezos, job satisfaction, Kickstarter, knowledge economy, knowledge worker, Lean Startup, Mark Zuckerberg, minimum viable product, pattern recognition, performance metric, pets.com, rolodex, shareholder value, Silicon Valley, Skype

This was less surprising about some aspects than others. For example, I wasn’t surprised that there was a high degree of convergence in the way people thought about the organization’s values since we had a strong values-driven culture that people were living every day, even if those values hadn’t been well articulated in the past. But it was a little surprising that we could effectively crowdsource a strategy statement and key performance metrics at a time when the business was at a fork in the road. Given this degree of alignment, our task as an executive team became less about picking concepts and more about picking words. We worked together to come up with a solid draft that took the best of what was submitted to us. We worked with a copywriter to make the statements flow well. Then we shared the results with the company and opened the floor for comments.

How and when is this investment going to pay itself back? What is the capital required to get there and what are your financing requirements from where your balance sheet sits today? The costs are easier to forecast, especially if you carefully articulated your resource requirements. As everybody in the startup world knows, ROI is trickier. You’re not leading an enterprise that has extremely detailed historical performance metrics to rely on in its forecasting. When Schick or Gillette introduces a new razor into the marketplace, they can very accurately forecast how much it’s going to cost them and what their return will be. If you’re creating a new product in a new marketplace, that isn’t the case. While monthly burn and revenue projections will inevitably change, capital expenditures can be more predictable, though you need to make sure you understand the cash flow mechanics of capital expenditure.

Second, those criteria have to be things that will remain in the control of the acquired company for the length of the earn-out; asking an entrepreneur to agree to an earn-out based on sales, for example, when your sales force will be doing all of the selling, doesn’t make sense. Finally, an earn-out can’t be too high a percentage of the deal. The preponderance will have to be cash and stock. Otherwise, the process of judging performance should be shared by both parties. In one of our largest deals at Return Path, each side appointed representatives who met quarterly to agree on performance metrics, adjustments, and so on. We also designated a third representative in advance who was available to adjudicate any disagreements. We never had to use him. Whatever mechanism you put in place, trust plays a huge role here. If it’s not there, this acquisition might not be a good idea.

THE FLIP SIDE OF M&A: DIVESTITURE
When Return Path turned six years old in 2005, we had gone from being a startup focused on our initial ECOA business to the world’s smallest conglomerate, with five lines of business: in addition to change of address, we were market leaders in email delivery assurance (a market we created), email-based market research (a tiny market when we started), and email list management and list rental (both huge markets when we founded the company).

 

pages: 556 words: 46,885

The World's First Railway System: Enterprise, Competition, and Regulation on the Railway Network in Victorian Britain by Mark Casson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

banking crisis, barriers to entry, Beeching cuts, British Empire, combinatorial explosion, Corn Laws, corporate social responsibility, David Ricardo: comparative advantage, intermodal, iterative process, joint-stock company, joint-stock limited liability company, knowledge economy, linear programming, Network effects, New Urbanism, performance metric, railway mania, rent-seeking, strikebreaker, the market place, transaction costs

To this end, the counterfactual has been constructed on very conservative assumptions, which are elaborated below. The engineering assumptions are very conservative relative to actual railway practice, while the use of detailed land surveys and large-scale maps means that major infringements of local parks and amenities have been avoided.

1.4. PERFORMANCE METRICS: DISTANCE AND TIME

Two main performance metrics are used in this study: journey distance and journey time. The most obvious metric by which to compare the actual and counterfactual systems is by the route mileages between pairs of towns. This metric is not quite so useful as it seems, however. For many types of traffic, including passengers, mail, troops, and perishable goods, it is the time taken by the journey that is important and not the distance per se.

In practice the counterfactual system, being smaller, would have been completed much earlier than the actual system, assuming that the pace of construction had been the same. Thus the average working life of the counterfactual system would have been longer—another advantage which is not formally included in the comparison.

3.4. CONSTRUCTION OF THE COUNTERFACTUAL: PERFORMANCE METRICS

To compare the performance of the actual and counterfactual systems a set of 250 representative journeys was examined. Ten different types of journey were distinguished, and sub-samples of 25 journeys of each type were generated. Performance was measured for each type of journey, and an overall measure of performance, based on an arithmetic average, was constructed.

R. 367 Clitheroe as secondary natural hub 83 Tab 3.4 Clyde River 199 Clyde Valley 156 coal industry 1, 50 exports 5 see also regional coalfields coal traffic 53, 182–3, 270 coalfield railways 127, 167 Coalville 187 Coatbridge 157 Cobden, Richard 37 Cockermouth 219 Colchester 69, 107, 108 Coldstream 158, 159 Colebrook 198 Colonial Office, British 48 Combe Down Tunnel 144 commerce, industry and railways 308 Index Commercial Railway Scheme, London 152, 154 Commission on the Merits of the Broad and Narrow Gauge 228 Tab 6.2 company law 42–3 competing local feeders 204–7 competition adverse effects of 221 adversarial 316–19 concept applied to railways 258–60 Duopolistic on networks 492–4 and duplication of routes 94 and excess capacity 477–97 excessive 16–19 and fare reduction 261–2 individual/multiple linkages 266, 267 inter-town 323–4 and invasions by competing companies 268–9, 273 and invasions in joint venture schemes (large) 166–73 and invasions in joint venture schemes (small) 173–8 network effects 262–4 principle of 221 and territorial strategy 286–7 wastage/inefficiency 162, 166 compulsory purchase of land 30, 223, 288 concavity principle 72, 82 connectivity and networks 2–3 Connel Ferry 161 construction costs 16–17 consultant engineers see civil engineers; mechanical engineers contour principle 72 contractors 301–2 Conway River 136 cooperation between companies 324–6 core and peripheral areas, UK 85 Fig 3.8 Corn Laws, Repeal (1846) 37, 110 Cornwall 152 Cornwall Railway 141 corporate social responsibility 311–13 corridor trains 311 Cosham 147, 190 Cotswold Hills 110, 111, 114, 149 counterfactual map of the railway network East Midlands 90 Fig 3.10 North of England 92 Fig 3.12 South East England 90 Fig 3.10 Wales 91 Fig 3.11 West of England 91 Fig 3.11 counterfactual railway network 4–29, 58–104 bypass principle 80–2, 89 and cities 306 concavity principle 82 continuous linear trunk network with coastal constraints 74 Fig 3.2 503 continuous linear trunk network with no coastal constraints 73 Fig 3.1 contour principle 87, 88 Fig 3.9 core and periphery principle 82–6, 84 Tab 3.5, 85 Fig 3.8 coverage of cities, town and villages 62–3 cross-country linkages on the symmetric network 100 Fig 3.19 cross-country routes 274 cut-off principle 80, 81 Fig 3.7, 89 cut-off principle with traffic weighting 81 Fig 3.7 Darlington core hub 89 Derby core hub 89 frequency of service 65–6 Gloucester as corner hub 82 heuristic principles of 10–12, 71–2 hubs 439–71, 440–9 Tab A5.1 hubs, configuration of 89, 94–103 hubs, size and distribution 95 Fig 3.13 Huddersfield core hub 89 influence of valleys and mountains 88 Fig 3.9 iterative process 64 Kirkby Lonsdale core hub 89 Leicester core hub 89 Lincolnshire region cross-country routes 119 London as corner hub 82 London terminals 155 loop principle 86–7 Melrose core hub 89, 158–9 mileage 437 Tab A4.4 Newcastle as corner hub 82 North-South linkages 148 North-South spine with ribs 75 Fig 3.3 objections to 12–14 optimality of the system 91–3 performance compared to actual system 64–5, 65 Tab 3.2 performance metrics 63–6 quality of network 392 Tab A4.1 and rational principles 322 Reading core hub 89 role of network 392, 393 Tab A4.2 route description 392–438, 393–436 Tab A4.3 and Severn Tunnel 112–14 Shoreham as corner hub 82 Southampton as corner hub 82 space-filling principle 87–9 Steiner solution 76 Fig 3.4 Steiner solution with traffic weighting 78 Fig 3.5 Stoke-on-Trent as corner hub 89 timetable 8, 89–90, 472–6, 474–6 Tab A6.1 timetable compared with actual 
315–16 traffic flows 66–71 traffic-weighting principle 77, 78 Fig 3.5 trial solution, first 89–91, 90 Fig 3.10, 91 Fig 3.11, 92 Fig 3.12 triangle principle 77–80, 79 Fig 3.6, 89, 96 triangle principle without traffic weighting 79 Fig 3.6 Trowbridge core hub 89 Warrington as corner hub 82 Wetherby core hub 122 country towns avoided by railway schemes 307–9 Coventry 68, 118, 135 Coventry Canal 117 Crafts, Nicholas F.

 

pages: 444 words: 86,565

Investment Banking: Valuation, Leveraged Buyouts, and Mergers and Acquisitions by Joshua Rosenbaum, Joshua Pearl, Joseph R. Perella

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset allocation, asset-backed security, bank run, barriers to entry, capital asset pricing model, collateralized debt obligation, corporate governance, credit crunch, discounted cash flows, diversification, fixed income, London Interbank Offered Rate, performance metric, shareholder value, sovereign wealth fund, technology bubble, time value of money, transaction costs, yield curve

First, we benchmark the key financial statistics and ratios for the target and its comparables in order to establish relative positioning, with a focus on identifying the closest or “best” comparables and noting potential outliers. Second, we analyze and compare the trading multiples for the peer group, placing particular emphasis on the best comparables. Benchmark the Financial Statistics and Ratios The first stage of the benchmarking analysis involves a comparison of the target and comparables universe on the basis of key financial performance metrics. These metrics, as captured in the financial profile framework outlined in Steps I and III, include measures of size, profitability, growth, returns, and credit strength. They are core value drivers and typically translate directly into relative valuation. The results of the benchmarking exercise are displayed on spreadsheet output pages that present the data for each company in an easy-to-compare format (see Exhibits 1.53 and 1.54).

EXHIBIT 3.38 ValueCo Projected Taxes

Capex Projections: We projected ValueCo’s capex as a percentage of sales in line with historical levels. As shown in Exhibit 3.39, this approach led us to hold capex constant throughout the projection period at 2% of sales. Based on this assumption, capex increases from $21.6 million in 2009E to $25.3 million in 2013E.

EXHIBIT 3.39 ValueCo Historical and Projected Capex

Change in Net Working Capital Projections: As with ValueCo’s other financial performance metrics, historical working capital levels normally serve as reliable indicators of future performance. The direct prior year’s ratios are typically the most indicative provided they are consistent with historical levels. This was the case for ValueCo’s 2007 working capital ratios, which we held constant throughout the projection period (see Exhibit 3.40).

EXHIBIT 3.40 ValueCo Historical and Projected Net Working Capital

For A/R, inventory, and A/P, respectively, these ratios are DSO of 60.2, DIH of 76.0, and DPO of 45.6.
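These ratios follow the standard day-count formulas. A sketch with invented inputs chosen to be consistent with the quoted values (not ValueCo's actual balances):

```python
# Standard working-capital ratio definitions; inputs are illustrative.
def dso(receivables, sales, days=365):   # days sales outstanding
    return receivables / sales * days

def dih(inventory, cogs, days=365):      # days inventory held
    return inventory / cogs * days

def dpo(payables, cogs, days=365):       # days payable outstanding
    return payables / cogs * days

sales, cogs = 1080.0, 660.0              # $ millions, invented
print(dso(178.1, sales))                 # ~60.2
print(dih(137.5, cogs))                  # ~76.0
print(dpo(82.5, cogs))                   # ~45.6
```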

 

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, additive manufacturing, Affordable Care Act / Obamacare, AI winter, algorithmic trading, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, banking crisis, Baxter: Rethink Robotics, Bernie Madoff, Bill Joy: nanobots, call centre, Capital in the Twenty-First Century by Thomas Piketty, Chris Urmson, Clayton Christensen, clean water, cloud computing, collateralized debt obligation, computer age, debt deflation, deskilling, diversified portfolio, Erik Brynjolfsson, factory automation, financial innovation, Flash crash, Fractional reserve banking, Freestyle chess, full employment, Goldman Sachs: Vampire Squid, High speed trading, income inequality, indoor plumbing, industrial robot, informal economy, iterative process, Jaron Lanier, job automation, John Maynard Keynes: technological unemployment, John von Neumann, Khan Academy, knowledge worker, labor-force participation, labour mobility, liquidity trap, low skilled workers, low-wage service sector, Lyft, manufacturing employment, McJob, moral hazard, Narrative Science, Network effects, new economy, Nicholas Carr, Norbert Wiener, obamacare, optical character recognition, passive income, performance metric, Peter Thiel, plutocrats, post scarcity, precision agriculture, price mechanism, Ray Kurzweil, rent control, rent-seeking, reshoring, RFID, Richard Feynman, Rodney Brooks, secular stagnation, self-driving car, Silicon Valley, Silicon Valley startup, single-payer health, software is eating the world, sovereign wealth fund, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Steven Pinker, strong AI, Stuxnet, technological singularity, telepresence, telepresence robot, The Bell Curve by Richard Herrnstein and Charles Murray, The Coming Technological Singularity, Thomas L Friedman, too big to fail, Tyler Cowen: Great Stagnation, union organizing, Vernor Vinge, very high income, Watson beat the top human players on Jeopardy!, women in the workforce

Police departments across the globe are turning to algorithmic analysis to predict the times and locations where crimes are most likely to occur and then deploying their forces accordingly. The City of Chicago’s data portal allows residents to see both historical trends and real-time data in a range of areas that capture the ebb and flow of life in a major city—including energy usage, crime, performance metrics for transportation, schools and health care, and even the number of potholes patched in a given period of time. Tools that provide new ways to visualize data collected from social media interactions as well as sensors built into doors, turnstiles, and escalators offer urban planners and city managers graphic representations of the way people move, work, and interact in urban environments, a development that may lead directly to more efficient and livable cities.

He received the green light from IBM management in 2007 and set out to build, in his words, “the most sophisticated intelligence architecture the world has ever seen.”18 To do this, he drew on resources from throughout the company and put together a team consisting of artificial intelligence experts from within IBM as well as at top universities, including MIT and Carnegie Mellon.19 Ferrucci’s team, which eventually grew to include about twenty researchers, began by building a massive collection of reference information that would form the basis for Watson’s responses. This amounted to about 200 million pages of information, including dictionaries and reference books, works of literature, newspaper archives, web pages, and nearly the entire content of Wikipedia. Next they collected historical data for the Jeopardy! quiz show. Over 180,000 clues from previously televised matches became fodder for Watson’s machine learning algorithms, while performance metrics from the best human competitors were used to refine the computer’s betting strategy.20 Watson’s development required thousands of separate algorithms, each geared toward a specific task—such as searching within text; comparing dates, times, and locations; analyzing the grammar in clues; and translating raw information into properly formatted candidate responses. Watson begins by pulling apart the clue, analyzing the words, and attempting to understand what exactly it should look for.

 

pages: 324 words: 92,805

The Impulse Society: America in the Age of Instant Gratification by Paul Roberts

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, 3D printing, accounting loophole / creative accounting, Affordable Care Act / Obamacare, American Society of Civil Engineers: Report Card, asset allocation, business process, Cass Sunstein, centre right, choice architecture, collateralized debt obligation, collective bargaining, corporate governance, corporate social responsibility, crony capitalism, David Brooks, delayed gratification, double helix, factory automation, financial deregulation, financial innovation, full employment, game design, greed is good, If something cannot go on forever, it will stop, impulse control, income inequality, inflation targeting, invisible hand, job automation, Joseph Schumpeter, knowledge worker, late fees, Long Term Capital Management, loss aversion, low skilled workers, new economy, Nicholas Carr, obamacare, Occupy movement, oil shale / tar sands, performance metric, postindustrial economy, profit maximization, Report Card for America’s Infrastructure, reshoring, Richard Thaler, rising living standards, Robert Shiller, Rodney Brooks, Ronald Reagan, shareholder value, Silicon Valley, speech recognition, Steve Jobs, technoutopianism, the built environment, The Predators' Ball, the scientific method, The Wealth of Nations by Adam Smith, Thorstein Veblen, too big to fail, total factor productivity, Tyler Cowen: Great Stagnation, Walter Mischel, winner-take-all economy

On the downside, Autor told me, those jobs will always be low-wage “because the skills they use are generic and almost anyone can be productive at them within a couple of days.”34 And, in fact, there will likely be far more downsides to these jobs than upsides. For example, because Big Data will allow companies to more easily and accurately measure worker productivity, workers will be under constant pressure to meet specific performance metrics and will be subject to constant ratings, just as restaurants and online products are today. Companies will assess every data point that might affect performance, so that every aspect of employment, from applying for a job to the actual performance of duties, will become much more closely scrutinized and assessed. “If you’re a worker, there’ll be, like, credit scores,” Cowen told NPR.35 “There already are, to some extent.

There will be no middle class in the way we now understand the term: median income will be much lower than it is, and many of the poor will lack access to even basic public services, in part because the wealthy will resist tax increases. “Rather than balancing our budget with higher taxes or lower benefits,” Cowen says, “we will allow the real wages of many workers to fall, and thus we will allow the creation of a new underclass.” Certain critics have found such dystopic visions far too grim. And yet, the signs of such a future are everywhere. Already, companies are using Big Data performance metrics to determine whom to cut—meaning that to be laid off is to be branded unemployable. In the ultimate corruption of innovation, a technology that might be used to help workers upgrade their skills and become more secure is instead being used to harass them. To be sure, Big Data will be put to more beneficial uses. Digital technologies will certainly remake the way we deliver education, for example.

 

pages: 323 words: 90,868

The Wealth of Humans: Work, Power, and Status in the Twenty-First Century by Ryan Avent

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, Airbnb, American energy revolution, autonomous vehicles, Bakken shale, barriers to entry, Bernie Sanders, BRICs, call centre, Capital in the Twenty-First Century by Thomas Piketty, Clayton Christensen, cloud computing, collective bargaining, computer age, dark matter, David Ricardo: comparative advantage, deindustrialization, dematerialisation, Deng Xiaoping, deskilling, Dissolution of the Soviet Union, Donald Trump, Downton Abbey, Edward Glaeser, Erik Brynjolfsson, eurozone crisis, everywhere but in the productivity statistics, falling living standards, first square of the chessboard, first square of the chessboard / second half of the chessboard, Ford paid five dollars a day, Francis Fukuyama: the end of history, future of work, gig economy, global supply chain, global value chain, hydraulic fracturing, income inequality, indoor plumbing, industrial robot, interchangeable parts, Internet of things, inventory management, invisible hand, Jacquard loom, James Watt: steam engine, Jeff Bezos, John Maynard Keynes: Economic Possibilities for our Grandchildren, Joseph-Marie Jacquard, knowledge economy, low skilled workers, lump of labour, Lyft, manufacturing employment, means of production, new economy, performance metric, pets.com, price mechanism, quantitative easing, Ray Kurzweil, rent-seeking, reshoring, rising living standards, Robert Gordon, Ronald Coase, savings glut, Second Machine Age, secular stagnation, self-driving car, sharing economy, Silicon Valley, single-payer health, software is eating the world, supply-chain management, supply-chain management software, TaskRabbit, The Nature of the Firm, The Spirit Level, The Wealth of Nations by Adam Smith, Thomas Malthus, trade liberalization, transaction costs, Tyler Cowen: Great Stagnation, Uber and Lyft, Uber for X, very high income, working-age population

That knowledge is absorbed by newer employees over time, through long exposure to the old habits. What our firm is, is not so much a business that produces a weekly magazine, but a way of doing things consisting of an enormous set of processes. You run that programme, and you get a weekly magazine at the end of it. Employees want job security, to advance, to receive pay rises. Those desires are linked to tangible performance metrics; within The Economist, it matters that a writer delivers the expected stories with the expected frequency and with the expected quality. Yet that is not all that matters. Advancement is also about the extent to which a worker thrives within a culture. What constitutes thriving depends on the culture. In some firms, it may mean buttering up the bosses and working long hours. In others, it may mean the practice of Machiavellian office politics.

The information-processing role of the firm can help us to understand the phenomenon of ‘disruption’, in which older businesses struggle to adapt to powerful new technologies or market opportunities. The notion of a ‘disruptive’ technology was first described in detail by Clayton Christensen, a scholar at Harvard Business School.4 Disruption is one of the most important ideas in business and management to emerge over the last generation. A disruptive innovation, in Christensen’s sense, is one that is initially not very good, in the sense that it does badly on the performance metrics that industry leaders care about, but which then catches on rapidly, wrong-footing older firms and upending the industry. Christensen explained his idea through the disk-drive industry, which was once dominated by large, 8-inch disks that could hold lots of information and access it very quickly. Both disk-drive makers and their customers initially thought that smaller drives were of little practical use.

 

pages: 132 words: 31,976

Getting Real by Jason Fried, David Heinemeier Hansson, Matthew Linderman, 37 Signals

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

call centre, collaborative editing, iterative process, John Gruber, knowledge worker, Merlin Mann, Metcalfe's law, performance metric, premature optimization, slashdot, Steve Jobs, web application

Complexity Does Not Scale Linearly With Size The most important rule of software engineering is also the least known: Complexity does not scale linearly with size...A 2000 line program requires more than twice as much development time as one half the size. —The Ganssle Group (from Keep It Small) Optimize for Happiness Choose tools that keep your team excited and motivated A happy programmer is a productive programmer. That's why we optimize for happiness and you should too. Don't just pick tools and practices based on industry standards or performance metrics. Look at the intangibles: Is there passion, pride, and craftsmanship here? Would you truly be happy working in this environment eight hours a day? This is especially important for choosing a programming language. Despite public perception to the contrary, they are not created equal. While just about any language can create just about any application, the right one makes the effort not merely possible or bearable, but pleasant and invigorating.

 

pages: 128 words: 38,187

The New Prophets of Capital by Nicole Aschoff

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, affirmative action, Affordable Care Act / Obamacare, Airbnb, Bretton Woods, clean water, collective bargaining, crony capitalism, feminist movement, follow your passion, Food sovereignty, glass ceiling, global supply chain, global value chain, helicopter parent, hiring and firing, income inequality, Khan Academy, late capitalism, Lyft, Mark Zuckerberg, means of production, performance metric, profit motive, rent-seeking, Ronald Reagan, Rosa Parks, school vouchers, shareholder value, sharing economy, Silicon Valley, Slavoj Žižek, structural adjustment programs, Thomas L Friedman, Tim Cook: Apple, urban renewal, women in the workforce, working poor

But they are not, and feminist ideals cannot be achieved if they are pursued Sandberg-style. Women who channel their energies toward reaching the top of corporate America undermine the struggles of women trying to realize institutional change by organizing unions and implementing laws that protect women (and men) in the workplace. An anecdote shared by Sandberg illustrates this point: In 2010 Mark Zuckerberg pledged $100 million to improve the performance metrics of the Newark Public Schools. The money would be distributed through a new foundation called Startup: Education. Sandberg recommended Jen Holleran, a woman she knew “with deep knowledge and experience in school reform” to run the foundation. The only problem was that Jen was raising fourteen-month-old twins at the time, working part time, and not getting much help from her husband. Jen hesitated to accept the offer, fearful of “upsetting the current order” at home.

 

pages: 892 words: 91,000

Valuation: Measuring and Managing the Value of Companies by Tim Koller, McKinsey, Company Inc., Marc Goedhart, David Wessels, Barbara Schwimmer, Franziska Manoury

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

air freight, barriers to entry, Basel III, BRICs, business climate, business process, capital asset pricing model, capital controls, cloud computing, compound rate of return, conceptual framework, corporate governance, corporate social responsibility, credit crunch, Credit Default Swap, discounted cash flows, distributed generation, diversified portfolio, energy security, equity premium, index fund, iterative process, Long Term Capital Management, market bubble, market friction, meta-analysis, new economy, p-value, performance metric, Ponzi scheme, price anchoring, purchasing power parity, quantitative easing, risk/return, Robert Shiller, shareholder value, six sigma, sovereign wealth fund, speech recognition, technology bubble, time value of money, too big to fail, transaction costs, transfer pricing, value at risk, yield curve, zero-coupon bond

Equal attention is paid to the long-term value-creating intent behind short-term profit targets, and people across the company are in constant communication about the adjustments needed to stay in line with long-term performance goals. We approach performance management from both an analytical and an organizational perspective. The analytical perspective focuses first on ensuring that companies use the right metrics at the right level in the organization. Companies should not just rely on performance metrics for divisions or business units, but disaggregate performance to the level of individual business segments. In addition to historical performance measures, companies need to use diagnostic metrics that help them understand and manage their ability to create value over the longer term. Second, we analyze how to set appropriate targets, giving examples of analytically sound performance measurement in action.

At some point, expansion of market share and sales will require additional production capacity. Once that point is reached, the associated investments and operating costs need to be factored in for target setting in individual business segments.6 (Footnote 6: For example, declining sales in one segment would imply increasing capital allocated to other segments even if their sales would be unchanged.) The Right Metrics in Action Choosing the right performance metrics can provide new insights into how a company might improve its performance in the future. For instance, Exhibit 26.8 illustrates the most important value drivers for a pharmaceutical company. The exhibit shows the key value drivers, the company’s current performance relative to best- and worst-in-class benchmarks, its aspirations for each driver, and the potential value impact from meeting its targets.

The greatest value creation would come from three areas: accelerating the rate of release of new products from 0.5 to 0.8 per year, reducing from six years to four the time it takes for a new drug to reach 80 percent of peak sales, and cutting the cost of goods sold from 26 percent to 23 percent of sales. Some of the value drivers (such as new-drug development) are long-term, whereas others (such as reducing cost of goods sold) have a shorter-term focus. Similarly, focusing on the right performance metrics can help reveal what may be driving underperformance. A consumer goods company we know illustrates the importance of having a tailored set of key value metrics. For several years, a business unit showed consistent double-digit growth in economic profit. Since the financial results were consistently strong—in fact, the strongest across all the business units—corporate managers were pleased and did not ask many questions of the business unit.

 

pages: 302 words: 82,233

Beautiful security by Andy Oram, John Viega

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, Amazon Web Services, business intelligence, business process, call centre, cloud computing, corporate governance, credit crunch, crowdsourcing, defense in depth, en.wikipedia.org, fault tolerance, Firefox, loose coupling, market design, Monroe Doctrine, new economy, Nicholas Carr, Nick Leeson, Norbert Wiener, optical character recognition, packet switching, performance metric, pirate software, Search for Extraterrestrial Intelligence, security theater, SETI@home, Silicon Valley, Skype, software as a service, statistical model, Steven Levy, The Wisdom of Crowds, Upton Sinclair, web application, web of trust, x509 certificate, zero day, Zimmermann PGP

[Diagram text: FIGURE 10-3, “Best practices dependencies: Performance and Capacity,” maps performance-engineering practices across the Explore, Execute, and Volume deploy phases: operational profile definition, performance budgets and targets, annotated use cases and user scenarios, prototyping, benchmarks, automated performance and load testing, code instrumentation, and project-management tracking of performance metrics. The excerpt then runs into FIGURE 10-4 (caption truncated), the corresponding reliability diagram: reliability budgets for failure and recovery rates, availability and reliability targets, fault/failure injection testing, fault detection, isolation, and repair, automated stability testing, and field measurement of failures and recovery.]

I initially dreaded this decision since it limited the leverage I had to encourage project leaders to identify and remediate security vulnerabilities. The results proved that this decision actually increased compliance with the security plan. With the requirement to pass the static analysis test still hanging over teams, they felt the need to remove defects earlier in the lifecycle so that they would avoid last-minute rejections. The second decision was the implementation of a detailed reporting framework in which key performance metrics (for instance, percentage of high-risk vulnerabilities per lines of code) were shared with team leaders, their managers, and the CIO on a monthly basis. The vulnerability information from the static code analyzer was summarized at the project, portfolio, and organization level and shared with all three sets of stakeholders. Over time, development leaders focused on the issues that were raising their risk score and essentially competed with each other to achieve better results.
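A rough sketch of the reporting metric described above: high-risk findings normalized per thousand lines of code and rolled up from project to portfolio level. The project names, counts, and rollup scheme here are invented, not taken from the chapter.

```python
# Monthly security-reporting sketch: high-risk static-analysis findings
# per KLOC, aggregated from project to portfolio level. All data invented.
projects = [
    # (portfolio, project, high_risk_findings, lines_of_code)
    ("Payments", "gateway",    14, 120_000),
    ("Payments", "ledger",      3,  45_000),
    ("Web",      "storefront",  9, 200_000),
]

def per_kloc(findings, loc):
    return findings / (loc / 1000.0)

portfolio_totals = {}
for portfolio, project, findings, loc in projects:
    print(f"{portfolio}/{project}: {per_kloc(findings, loc):.2f} high-risk per KLOC")
    f, l = portfolio_totals.get(portfolio, (0, 0))
    portfolio_totals[portfolio] = (f + findings, l + loc)

for portfolio, (findings, loc) in portfolio_totals.items():
    print(f"{portfolio} portfolio: {per_kloc(findings, loc):.2f} high-risk per KLOC")
```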

 

pages: 493 words: 139,845

Women Leaders at Work: Untold Tales of Women Achieving Their Ambitions by Elizabeth Ghaffari

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, AltaVista, business process, cloud computing, Columbine, corporate governance, corporate social responsibility, dark matter, family office, Fellow of the Royal Society, financial independence, follow your passion, glass ceiling, Grace Hopper, high net worth, knowledge worker, Long Term Capital Management, performance metric, pink-collar, profit maximization, profit motive, recommendation engine, Ronald Reagan, shareholder value, Silicon Valley, Silicon Valley startup, Steve Ballmer, Steve Jobs, thinkpad, trickle-down economics, urban planning, women in the workforce, young professional

Trying to do the best for them. My whole academic and personal upbringing was working with physicians. So I don’t view physicians as the enemy. It just doesn’t make good business sense. Ghaffari: How many departments did you end up having under you? Luttgens: I had a total of ten professional services departments. Most of them were physician-led or physician-supported. Ghaffari: What was your performance metric that you did for them? Luttgens: Back in those days, the early eighties, we didn’t have quality management or outcomes as we do today. You needed to control expenses, enhance revenue, increase patient volume, and get along. I was well-known around the medical center for getting substantial capital funding for items in my capital budgets each year. Most of my departments were very capital-intensive.

It was a big change to run a nonprofit where a major part of your job is fundraising. That taught me that I was both good at, and enjoyed, fundraising because I understood the customer and believed in the product. Ghaffari: Was your primary responsibility there in an executive director role? What were some of your key accomplishments? Roden: Yes. Regarding accomplishments, we tracked several metrics. First of all, sponsorship was an important performance metric. When I started, SVASE was bringing in about $10,000 a year in sponsorship. When I left, it was $300,000 a year. Another key metric was the mailing list. When I started, we had about two thousand people on our e-mail list. When I left, it was about twenty thousand people. When I started, we had about twenty volunteers. When I left, we had about two hundred and fifty volunteers, meaning people actively engaged in running parts of the organization.

 

pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance by Carol Alexander

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

asset allocation, backtesting, barriers to entry, Brownian motion, capital asset pricing model, constrained optimization, credit crunch, Credit Default Swap, discounted cash flows, discrete time, diversification, diversified portfolio, en.wikipedia.org, implied volatility, interest rate swap, market friction, market microstructure, p-value, performance metric, quantitative trading / quantitative finance, random walk, risk tolerance, risk-adjusted returns, risk/return, Sharpe ratio, statistical arbitrage, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process, yield curve

We describe some standard utility functions that display different risk aversion characteristics and show how an investor’s utility determines his optimal portfolio. Then we solve the portfolio allocation decision for a risk averse investor, following and then generalizing the classical problem of portfolio selection that was introduced by Markowitz (1959). This lays the foundation for our review of the theory of asset pricing, and our critique of the many risk adjusted performance metrics that are commonly used by asset managers. ABOUT THE CD-ROM My golden rule of teaching has always been to provide copious examples, and whenever possible to illustrate every formula by replicating it in an Excel spreadsheet. Virtually all the concepts in this book are illustrated using numerical and empirical examples, and the Excel workbooks for each chapter may be found on the accompanying CD-ROM.

Many risk adjusted performance measures that are commonly used today are either not linked to a utility function at all, or if they are associated with a utility function we assume the investor cares nothing at all about the gains he makes above a certain threshold. Kappa indices can be loosely tailored to the degree of risk aversion of the investor, but otherwise the rankings produced by the risk adjusted performance measure may not be ranking in the order of an investor’s preference! The only universal risk adjusted performance metric, i.e. one that can rank investments having any returns distributions for investors having any type of utility function, is the certain equivalent. The certain equivalent of an uncertain investment is the amount of money, received for certain, that gives the same utility to the investor as the uncertain investment. References Adjaouté, K. and Danthine, J.P. (2004) Equity returns and integration: Is Europe changing?
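The certain equivalent has a one-line definition: apply the inverse utility function to expected utility, CE = u^-1(E[u(W)]). Below is a small numeric illustration under an assumed exponential (CARA) utility and an invented 50/50 gamble; the certain equivalent falls below the expected payoff and drops further as risk aversion rises.

```python
import math

# Certain equivalent CE = u^{-1}(E[u(W)]) under an assumed exponential
# (CARA) utility u(w) = -exp(-a*w). The payoff distribution is invented:
# a 50/50 gamble on 60 or 160, so the expected payoff is 110.
def certain_equivalent(outcomes, probs, a):
    expected_utility = sum(p * -math.exp(-a * w) for w, p in zip(outcomes, probs))
    return -math.log(-expected_utility) / a   # apply u^{-1}

outcomes, probs = [60.0, 160.0], [0.5, 0.5]
for a in (0.01, 0.05):   # increasing absolute risk aversion
    ce = certain_equivalent(outcomes, probs, a)
    print(f"a = {a}: certain equivalent = {ce:.1f} (vs expected payoff 110)")
```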

 

How I Became a Quant: Insights From 25 of Wall Street's Elite by Richard R. Lindsey, Barry Schachter

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, algorithmic trading, Andrew Wiles, Antoine Gombaud: Chevalier de Méré, asset allocation, asset-backed security, backtesting, bank run, banking crisis, Black-Scholes formula, Bonfire of the Vanities, Bretton Woods, Brownian motion, business process, buy low sell high, capital asset pricing model, centre right, collateralized debt obligation, corporate governance, correlation coefficient, Credit Default Swap, credit default swaps / collateralized debt obligations, currency manipulation / currency intervention, discounted cash flows, disintermediation, diversification, Emanuel Derman, en.wikipedia.org, Eugene Fama: efficient market hypothesis, financial innovation, fixed income, full employment, George Akerlof, Gordon Gekko, hiring and firing, implied volatility, index fund, interest rate derivative, interest rate swap, John von Neumann, linear programming, Loma Prieta earthquake, Long Term Capital Management, margin call, market friction, market microstructure, martingale, merger arbitrage, Nick Leeson, P = NP, pattern recognition, pensions crisis, performance metric, prediction markets, profit maximization, purchasing power parity, quantitative trading / quantitative finance, QWERTY keyboard, RAND corporation, random walk, Ray Kurzweil, Richard Feynman, Richard Stallman, risk-adjusted returns, risk/return, shareholder value, Sharpe ratio, short selling, Silicon Valley, six sigma, sorting algorithm, statistical arbitrage, statistical model, stem cell, Steven Levy, stochastic process, systematic trading, technology bubble, The Great Moderation, the scientific method, too big to fail, trade route, transaction costs, transfer pricing, value at risk, volatility smile, Wiener process, yield curve, young professional

In the early 1990s, the entire banking industry was moving headlong toward Raroc as a pricing and performance measurement framework. However, as early as 1992, I recognized that the common Raroc measure based on own portfolio risk or VaR was at odds with equilibrium and arbitrage pricing theory (see Wilson (1992)). Using classical finance to make the point, I recast a simple CAPM model into a Raroc performance metric and showed that Raroc based on own portfolio risk without the recognition of funding was inherently biased. In the years since 1992, many other authors have followed a similar line of thought. What is the appropriate cost of capital, by line of business, if capital is allocated based on the standalone risk of each underlying business? And, what role does earnings volatility play in the valuation of a bank or insurance company?
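Wilson's specific recasting is not reproduced in this excerpt, so the sketch below uses the generic textbook form of the measure, RAROC = (revenue - costs - expected loss) / economic capital, compared against a hurdle rate. All business-line figures and the hurdle are invented.

```python
# Generic RAROC sketch (not Wilson's specific formulation): risk-adjusted
# return divided by economic capital allocated on standalone risk.
# All business-line figures are invented ($ in millions).
lines = {
    "trading": {"revenue": 120.0, "cost": 60.0, "expected_loss": 10.0, "econ_capital": 400.0},
    "lending": {"revenue": 90.0,  "cost": 40.0, "expected_loss": 15.0, "econ_capital": 250.0},
}

def raroc(line):
    return (line["revenue"] - line["cost"] - line["expected_loss"]) / line["econ_capital"]

hurdle = 0.12   # assumed cost of equity capital
for name, line in lines.items():
    r = raroc(line)
    verdict = "creates value" if r > hurdle else "destroys value"
    print(f"{name}: RAROC = {r:.1%} vs hurdle {hurdle:.0%} -> {verdict}")
```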

See Credit risk integrated tool set, application, 80 technology, usage, 134–135 Portfolio optimization, 281–283 “Portfolio Optimization with Factors, Scenarios, and Realistic Short Positions,” 281 Portfolio Theory (Levy/Sarnat), 228 Portfolio trading, mathematics, 128–130 Positive interest rates, ensuring, 161–162 Prepayment data, study, 183 Press, Bill, 36 Price/book controls, pure return, 272 Price data, study, 183 Price/earnings ratios, correlation, 269 Price limits, impact, 77 Primitive polynomial modulo two, 170 Prisoner’s dilemma, 160 Private equity returns, benchmarks (finding), 145 Private signals, quality (improvement), 159–160 Publicly traded contingent claims, combinations (determination), 249 Public pension funds, investment, 25 Pure mathematics, 119, 126 Quantitative active management, growth, 46–47 Quantitative approach, usage, 26–27 Quantitative finance, 237–238 purpose, 96–98 Quantitative Financial Research (Bloomberg), 137 Quantitative investing, limitation, 209 Quantitative label, implication, 25–26 Quantitative methods, role, 96–97 Quantitative Methods for Financial Analysis (Stephen/Kritzman), 253 Quantitative models, enthusiasm, 234 Quantitative portfolio management, 130–131 Quantitative strategies, usage, 240 Quantitative Strategies (SAC Capital Management, LLC), 107 Quantitative training, application process, 255–260 Quants business sense, discussion, 240–241 characteristics/description, 208–210 conjecture, 177–179 conversion, 327 data mining, 209–210 description, New York Times article, 32 due diligence, requirement, 169 future, 13–16, 261 innovations, 255–258 myths, dispelling, 258–260 perspective, change, 134–135 process, 92–93 research, 127–128 Quigg, Laura, 156–158, 160 Quotron, recorded data (usage), 22 Rahl, Leslie, 83–93 Ramaswamy, Krishna, 253 RAND Corporation, 13–17 Raroc models, usage/development, 102–103 Raroc performance metric, 103 Reagan, Ronald, 15 Real economic behavior, level (usefulness), 101 Real options (literature), study, 149 Real-time artificial intelligence, 16 Rebonato, Riccardo, 168, 169, 232 Reed, John, 89 Registered investment advisors, 79 Regression, time-varying, 239 Renaissance Medallion fund, 310 Representation Theory and Complex Geometry, 122–125 Resampling statistics, usage, 239–240 Research collaboration, type, 157–158 Research Objectivity Standards, 280–281 Retail markets, changes, 148–149 Return, examination, 71–72 Return-predictor relationships, 269 Returns separation, 34–35 variance, increasing, 72 “Revenue Recognition Certificates: A New Security” (LeClair/Schulman), 82 Rich, Don, 256 Riemann Hypothesis, solution, 108 Risk analytics, sale, 301 bank rating, 216 buckets, 71 cost, 129 examination, 70–71 forecast, BARRA bond model (usage), 39 importance, 34–35 manager, role, 302–303 reversal, 299 worries, 39 Risk-adjusted return, 102 Risk management, 233 consulting firm, 293 technology, usage, 134–135 world developments, 96 Risk Management (Clinton Group), 295 Risk Management & Quantitative Research (Permal Group), 227 RiskMetrics, 300–301 business, improvement, 301 computational device, 240 Technical Document, publication (1996), 66 Risk/return trade-off, 259 RJR Nabisco, LBO, 39 Roll, Richard, 140 Ronn, Ehud, 157, 160–162 Rosenberg, Barr, 34–42 models, development, 34–37 Rosenbluth, Jeff, 132 Ross, Stephen A., 141, 254, 336 arbitrage pricing model, development, 147–148 Rubinstein, Mark, 278, 336 Rudd, Andrew, 35, 307 historical performance analysis, 44 Rudy, Rob, 219 Russell 3000, constitution, 275 Salomon Brothers, Bloomberg (employ), 73 Samuelson, Paul, 256–257 time demonstration, 258 Sankar, L., 162 Sargent, Thomas, 188 Savine, Antoine, 167 Sayles, Loomis, 33 SBCC, 285 Scholes, Myron, 11, 88, 177, 336 input, 217 Schulman, Evan, 67–82 Schwartz, Robert J., 293, 320 Secret, classification, 16–18 Securities Act of 1933, 147 Securities Exchange Act of 1934, 147 Security replication, probability (usage), 122 SETS, 77 Settlement delays, 174 Seymour, Carl, 175–176 Shareholder value creation, questions, 98 Sharpe, William, 34, 254 algorithm, 257–258 modification, 258 Shaw, Julian, 227–242 Sherring, Mike, 232 Short selling, 275–276 Short selling, risk-reducing/return-enhancing benefits, 277 Short-term reversal strategy, 198–199 Shubik, Martin, 288–289, 291, 293 Siegel’s Paradox, 321–322 Sklar’s theorem, 240 Slawsky, Al, 40–41 Small-cap stocks, purchase, 268 Smoothing, 192–193 Sobol’ numbers, 173–173 Social Sciences Research Network (SSRN), 122 Social Security system, bankruptcy, 148 Society for Quantitative Analysis (SQA), 253 Spatt, Chester, 252 Spot volatility, parameter, 89–90 Standard & Poor’s futures, price (relationship), 75 Start-up company, excitement, 24–25 Statistical data analysis, 213–214 Statistical error, 228 Sterge, Andrew J., 317–327 Stevens, Ross, 201 Stochastic calculus, 239 Stock market crash (1987), 282 Stocks portfolio trading, path trace, 129 stories, analogy, 23–26 Strategic Business Development (RiskMetrics Group), 49 Sugimoto, E., 171 Summer experience, impact, 57 Sun Unix workstation, 22 Surplus insurance, usage, 255–256 Swaps rate, Black volatilities, 172 usage, 292–293 Sweeney, Richard, 190 Symbolics, 16, 18 Taleb, Nassim, 132 Tenenbein, Aaron, 252 Textbook learning, expansion, 144 Theoretical biases, 103 Theory, usage/improvement, 182–185 Thornton, Dan, 139 Time diversification, myths, 258 Top secret, classification, 16–18 Tracking error, focus, 80–81 Trading, 72–73 Transaction cost, 129 absence, 247 impact, 273–274 Transaction pricing, decision-making process, 248 Transistor experiment (TX), 11 Transistorized Experimental Computer Zero (tixo), usage, 86 Treynor, Jack, 34, 254 Trigger, usage, 117–118 Trimability, 281 TRS-80 (Radio Shack), usage, 50, 52, 113 Trust companies, individually managed accounts (growth), 79 Tucker, Alan, 334 Uncertainty examination, 149–150 resolution, 323–324 Unit initialization, 172 Universal Investment Reasoning, 19–20 Upstream Technologies, LLC, 67 U.S. individual stock data, research, 201–202 Value-at-Risk (VaR), 195, calculation possibility tails, changes, 100 design, 293 evolution, 235 measurement, 196 number, emergence, 235 van Eyseren, Olivier, 173–175 Vanilla interest swaptions, 172 VarianceCoVariance (VCV), 235 Variance reduction techniques, 174 Vector auto-regression (VAR), 188 Venture capital investments, call options (analogy), 145–146 Volatility, 100, 174, 193–194 Volcker, Paul, 32 von Neumann, John, 319 Waddill, Marcellus, 318 Wall Street business, arrival, 61–65 interest, 160–162 move, shift, 125–127 quant search, genesis, 32 roots, 83–85 Wanless, Derek, 173 Wavelets, 239 Weisman, Andrew B., 187–196 Wells Fargo Nikko Investment Advisors, Grinold (attachment), 44 Westlaw database, 146–148 “What Practitioners Need to Know” (Kritzman), 255 Wigner, Eugene, 54 Wiles, Andrew, 112 Wilson, Thomas C., 95–105 Windham Capital Management LLC, 251, 254 Wires, rat consumption (prevention), 20–23 Within-horizon risk, usage, 256 Worker longevity, increase, 148 Wyckoff, Richard D., 321 Wyle, Steven, 18 Yield, defining, 182 Yield curve, 89–90, 174 Zimmer, Bob, 131–132

 

pages: 559 words: 155,372

Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley by Antonio Garcia Martinez

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Airbnb, airport security, Amazon Web Services, Burning Man, Celtic Tiger, centralized clearinghouse, cognitive dissonance, collective bargaining, corporate governance, Credit Default Swap, crowdsourcing, death of newspapers, El Camino Real, Elon Musk, Emanuel Derman, financial independence, global supply chain, Goldman Sachs: Vampire Squid, hive mind, income inequality, interest rate swap, intermodal, Jeff Bezos, Malcom McLean invented shipping containers, Mark Zuckerberg, Maui Hawaii, means of production, Menlo Park, minimum viable product, move fast and break things, Network effects, Paul Graham, performance metric, Peter Thiel, Ponzi scheme, pre–internet, Ralph Waldo Emerson, random walk, Sand Hill Road, Scientific racism, second-price auction, self-driving car, Silicon Valley, Silicon Valley startup, Skype, Snapchat, social graph, social web, Socratic dialogue, Steve Jobs, telemarketer, urban renewal, Y Combinator, éminence grise

I even hung a real length of Spanish chorizo from my monitor, as a rallying symbol, and the targeting team got down to the serious business of monetizing every last user action on Facebook. Just as my first view of Facebook’s high-level revenue dashboard proved a dispiriting exercise, Chorizo’s final results, which took months to produce, were a similar tale of woe. No user data we had, if fed freely into the topics that Facebook’s savviest marketers used to target their ads, improved any performance metric we had access to. That meant that advertisers trying to find someone who, say, wanted to buy a car, benefited not at all from all the car chatter taking place on Facebook. It was as if we had fed a mile-long trainful of meat cows into a slaughterhouse, and had come out with one measly sausage to show for it. It was incomprehensible, and it tested my faith (which, believe it or not, I certainly had at that time) in Facebook’s claim to unique primacy in the realm of user data.

Immature advertising markets, the embryonic state of their e-commerce infrastructure, and their lower general wealth meant the impact of new optimization tricks or targeting data on those countries was minimal. And so the Ads team would slice off tranches of the FB user base in rich ads markets and dose them with different versions of the ads system to measure the effect of a new feature, as you would test subjects in a clinical drug trial.* The performance metrics of interest included clickthrough rates, which are a coarse measure of user interest. More convincing is the actual downstream monetization resulting from someone clicking through and buying something—assuming Facebook got the conversion data, which it often didn’t, given that Facebook didn’t have a conversion-tracking system. Also important, and not related to money at all, was overall usage.
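The tranche-and-dose experiment described here boils down to comparing clickthrough rates between a treated slice of users and a control slice. A minimal sketch using a two-proportion z-test follows; the impression and click counts are invented, and this is a generic statistical treatment rather than Facebook's actual experimentation pipeline.

```python
import math

# Comparing clickthrough rates between a control tranche and a tranche
# "dosed" with a new ads feature, via a two-proportion z-test.
# Impression and click counts are invented.
control_clicks, control_impr = 4_800, 1_000_000
test_clicks, test_impr = 5_150, 1_000_000

p1, p2 = control_clicks / control_impr, test_clicks / test_impr
p_pool = (control_clicks + test_clicks) / (control_impr + test_impr)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_impr + 1 / test_impr))
z = (p2 - p1) / se

print(f"control CTR {p1:.3%}, test CTR {p2:.3%}, lift {(p2 / p1 - 1):+.1%}")
print(f"z-statistic {z:.2f} (|z| > 1.96 is roughly significant at the 5% level)")
```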

 

pages: 133 words: 42,254

Big Data Analytics: Turning Big Data Into Big Money by Frank J. Ohlhorst

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

algorithmic trading, bioinformatics, business intelligence, business process, call centre, cloud computing, create, read, update, delete, data acquisition, DevOps, fault tolerance, linked data, natural language processing, Network effects, pattern recognition, performance metric, personalized medicine, RFID, sentiment analysis, six sigma, smart meter, statistical model, supply-chain management, Watson beat the top human players on Jeopardy!, web application

That is why it is important to build objectives, measurements, and milestones that demonstrate the benefits of a team focused on Big Data analytics. Developing performance measurements is an important part of designing a business plan. With Big Data, those metrics can be assigned to the specific goal in mind. For example, if an organization is looking to bring efficiency to a warehouse, a performance metric may be measuring the amount of empty shelf space and what the cost of that empty shelf space means to the company. Analytics can be used to identify product movement, sales predictions, and so forth to move product into that shelf space to better service the needs of customers. It is a simple comparison of the percentage of space used before the analytics process and the percentage of space used after the analytics team has tackled the issue.
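The empty-shelf-space example reduces to a before/after comparison of utilization and carrying cost, as in this small sketch; the slot counts and the per-slot cost are invented.

```python
# Before/after shelf-utilization comparison, as described above.
# Slot counts and carrying cost are invented.
TOTAL_SLOTS = 5_000
COST_PER_EMPTY_SLOT = 12.50        # assumed monthly cost of one empty slot

def utilization(filled):
    return filled / TOTAL_SLOTS

before, after = 4_100, 4_650       # filled slots before/after the analytics effort
for label, filled in (("before", before), ("after", after)):
    empty = TOTAL_SLOTS - filled
    print(f"{label}: {utilization(filled):.1%} utilized, "
          f"{empty} empty slots costing ${empty * COST_PER_EMPTY_SLOT:,.0f}/month")
```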

 

pages: 204 words: 58,565

Keeping Up With the Quants: Your Guide to Understanding and Using Analytics by Thomas H. Davenport, Jinho Kim

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Black-Scholes formula, business intelligence, business process, call centre, computer age, correlation coefficient, correlation does not imply causation, Credit Default Swap, en.wikipedia.org, feminist movement, Florence Nightingale: pie chart, forensic accounting, global supply chain, Hans Rosling, hypertext link, invention of the telescope, inventory management, Jeff Bezos, margin call, Moneyball by Michael Lewis explains big data, Netflix Prize, p-value, performance metric, publish or perish, quantitative hedge fund, random walk, Renaissance Technologies, Robert Shiller, self-driving car, sentiment analysis, six sigma, Skype, statistical model, supply-chain management, text mining, the scientific method

MODELING (VARIABLE SELECTION). The variables in deciding whether to acquire Battier from the Grizzlies would be the cost of acquiring him (outright or in trade for other players), the amount that he would be paid going forward, various individual performance measures, and ideally some measure of team performance while Battier was on the court versus when he was not. DATA COLLECTION (MEASUREMENT). The individual performance metrics and financials were easy to gather. And there is a way to measure an individual player’s impact on team performance. The “plus/minus” statistic, adapted by Roland Beech of 82games.com from a similar statistic used in hockey, compares how a team performs with a particular player in the game versus its performance when he is on the bench. DATA ANALYSIS. Morey and his statisticians decided to use plus/ minus analysis to evaluate Battier.
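The plus/minus idea, as defined in this excerpt, is the team's point differential with the player on the court versus on the bench. The sketch below computes it from invented stint data; real implementations such as 82games.com's normalize per minute or per possession, which is omitted here for brevity.

```python
# Plus/minus sketch: team point differential with a player on the court
# versus on the bench. Stint data are invented.
# Each stint: (player_on_court, team_points, opponent_points)
stints = [
    (True,  24, 18),
    (True,  15, 16),
    (False, 10, 14),
    (False, 20, 19),
]

def net_per_stint(rows):
    diffs = [team - opp for _, team, opp in rows]
    return sum(diffs) / len(rows)

on = [s for s in stints if s[0]]
off = [s for s in stints if not s[0]]
print(f"net points per stint, on court:  {net_per_stint(on):+.1f}")
print(f"net points per stint, off court: {net_per_stint(off):+.1f}")
print(f"plus/minus differential:         {net_per_stint(on) - net_per_stint(off):+.1f}")
```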

 

pages: 204 words: 54,395

Drive: The Surprising Truth About What Motivates Us by Daniel H. Pink

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

affirmative action, call centre, Daniel Kahneman / Amos Tversky, Dean Kamen, deliberate practice, Firefox, Frederick Winslow Taylor, game design, George Akerlof, Isaac Newton, Jean Tirole, job satisfaction, knowledge worker, performance metric, profit maximization, profit motive, Results Only Work Environment, side project, the built environment, Tony Hsieh, transaction costs

It's another way to allow people to focus on the work itself. Indeed, other economists have shown that providing an employee a high level of base pay does more to boost performance and organizational commitment than an attractive bonus structure. Of course, by the very nature of the exercise, paying above the average will work for only about half of you. So get going before your competitors do. 3. IF YOU USE PERFORMANCE METRICS, MAKE THEM WIDE-RANGING, RELEVANT, AND HARD TO GAME Imagine you're a product manager and your pay depends largely on reaching a particular sales goal for the next quarter. If you're smart, or if you've got a family to feed, you're going to try mightily to hit that number. You probably won't concern yourself much with the quarter after that or the health of the company or whether the firm is investing enough in research and development.
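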

 

pages: 261 words: 16,734

Peopleware: Productive Projects and Teams by Tom Demarco, Timothy Lister

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

A Pattern Language, cognitive dissonance, interchangeable parts, job satisfaction, knowledge worker, Parkinson's law, performance metric, skunkworks, supply-chain management, women in the workforce

Three rules of thumb seem to apply whenever you measure variations in performance over a sample of individuals. • Count on the best people outperforming the worst by about 10:1. • Count on the best performer being about 2.5 times better than the median performer. • Count on the half that are better-than-median performers outdoing the other half by more than 2:1. These rules apply for virtually any performance metric you define. So, for instance, the better half of a sample will do a given job in less than half the time the others take; the more defect-prone half will put in more than two thirds of the defects, and so on. Results of the Coding War Games were very much in line with this profile. Take as an example Figure 8–2, which shows the performance spread of time to achieve the first milestone (clean compile, ready for test) in one year’s games.
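These three rules of thumb are easy to check mechanically against any sample of completion times, as in this sketch over an invented ten-person sample (lower times are better):

```python
# Checking the three rules of thumb against an invented sample of task
# completion times in hours; lower is better.
times = sorted([2.0, 2.8, 3.4, 4.2, 4.8, 5.2, 6.5, 9.0, 12.0, 20.0])

best, worst = times[0], times[-1]
median = (times[len(times) // 2 - 1] + times[len(times) // 2]) / 2
better_half, worse_half = times[:len(times) // 2], times[len(times) // 2:]

print(f"worst/best ratio:         {worst / best:.1f}  (rule of thumb ~10)")
print(f"median/best ratio:        {median / best:.1f}   (rule of thumb ~2.5)")
print(f"worse-half / better-half: {sum(worse_half) / sum(better_half):.1f}   (rule of thumb >2)")
```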

 

Toast by Charles Stross

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

anthropic principle, Buckminster Fuller, cosmological principle, dark matter, double helix, Ernest Rutherford, Extropian, Francis Fukuyama: the end of history, glass ceiling, gravity well, Khyber Pass, Mars Rover, Mikhail Gorbachev, NP-complete, oil shale / tar sands, peak oil, performance metric, phenotype, plutocrats, Ronald Reagan, Silicon Valley, slashdot, speech recognition, strong AI, traveling salesman, Turing test, urban renewal, Vernor Vinge, Whole Earth Review, Y2K

It was a woman I’d met somewhere—some conference or other—lanky blonde hair, pallid skin, and far too evangelical about formal methods. “Feel free.” She pulled a chair out and sat down and the steward poured her a cup of coffee immediately. I noticed that even on a cruise ship she was dressed in a business suit, although it looked somewhat the worse for wear. “Coffee, please,” I called after the retreating steward. “We met in Darmstadt, ’97,” she said. “You’re Marcus Jackman? I critiqued your paper on performance metrics for IEEE maintenance transactions.” The penny dropped. “Karla . . . Carrol?” I asked. She smiled. “Yes, I remember your review.” I did indeed, and nearly burned my tongue on the coffee trying not to let slip precisely how I remembered it. I’m not fit to be rude until after at least the third cup of the morning. “Most interesting. What brings you here?” “The usual risk contingency planning.

 

pages: 304 words: 82,395

Big Data: A Revolution That Will Transform How We Live, Work, and Think by Viktor Mayer-Schonberger, Kenneth Cukier

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

23andMe, Affordable Care Act / Obamacare, airport security, AltaVista, barriers to entry, Berlin Wall, big data - Walmart - Pop Tarts, Black Swan, book scanning, business intelligence, business process, call centre, cloud computing, computer age, correlation does not imply causation, dark matter, double entry bookkeeping, Eratosthenes, Erik Brynjolfsson, game design, IBM and the Holocaust, index card, informal economy, Internet of things, invention of the printing press, Jeff Bezos, Louis Pasteur, Mark Zuckerberg, Menlo Park, Moneyball by Michael Lewis explains big data, Nate Silver, natural language processing, Netflix Prize, Network effects, obamacare, optical character recognition, PageRank, performance metric, Peter Thiel, post-materialism, random walk, recommendation engine, self-driving car, sentiment analysis, Silicon Valley, Silicon Valley startup, smart grid, smart meter, social graph, speech recognition, Steve Jobs, Steven Levy, the scientific method, The Signal and the Noise by Nate Silver, The Wealth of Nations by Adam Smith, Turing test, Watson beat the top human players on Jeopardy!

Grigsby, Pamela Ann Nesbitt, and Lisa Anne Seacat. “Securing premises using surfaced-based computing technology,” U.S. Patent number: 8138882. Issue date: March 20, 2012. The quantified-self movement—“Counting Every Moment,” The Economist, March 3, 2012. Apple earbuds for bio-measurements—Jesse Lee Dorogusker, Anthony Fadell, Donald J. Novotney, and Nicholas R Kalayjian, “Integrated Sensors for Tracking Performance Metrics,” U.S. Patent Application 20090287067. Assignee: Apple. Application Date: 2009-07-23. Publication Date: 2009-11-19. Derawi Biometrics, “Your Walk Is Your PIN-Code,” press release, February 21, 2011 (http://biometrics.derawi.com/?p=175). iTrem information—See the iTrem project page of the Landmarc Research Center at Georgia Tech (http://eosl.gtri.gatech.edu/Capabilities/LandmarcResearchCenter/LandmarcProjects/iTrem/tabid/798/Default.aspx) and email exchange.

 

pages: 294 words: 82,438

Simple Rules: How to Thrive in a Complex World by Donald Sull, Kathleen M. Eisenhardt

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Affordable Care Act / Obamacare, Airbnb, asset allocation, Atul Gawande, barriers to entry, Basel III, Berlin Wall, carbon footprint, Checklist Manifesto, complexity theory, Craig Reynolds: boids flock, Credit Default Swap, Daniel Kahneman / Amos Tversky, diversification, en.wikipedia.org, European colonialism, Exxon Valdez, facts on the ground, Fall of the Berlin Wall, haute cuisine, invention of the printing press, Isaac Newton, Kickstarter, late fees, Lean Startup, Louis Pasteur, Lyft, Moneyball by Michael Lewis explains big data, Nate Silver, Network effects, obamacare, Paul Graham, performance metric, price anchoring, RAND corporation, risk/return, Saturday Night Live, sharing economy, Silicon Valley, Startup school, statistical model, Steve Jobs, TaskRabbit, The Signal and the Noise by Nate Silver, transportation-network company, two-sided market, Wall-E, web application, Y Combinator, Zipcar

You can also limit your rules to two or three, as we have seen elsewhere in the book, to increase the odds that you will remember and follow them. After crafting your preliminary rules, it is helpful to measure how well they are working. Measuring impact allows you to pinpoint what is and isn’t working, and evidence of success also provides more motivation to stick with the rules. The best performance metrics are tightly linked to what will move the needles for you—pounds lost for a dieter, or dollars invested if you are trying to save for retirement. Apps have made collecting data and tracking progress easier than at any other time in history. Imagine what the legendary self-improver Benjamin Franklin could have accomplished if he’d had an iPhone. To measure the impact of your simple rules, it helps to collect some data before you start using your rules.

 

pages: 256 words: 15,765

The New Elite: Inside the Minds of the Truly Wealthy by Dr. Jim Taylor

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

British Empire, call centre, dark matter, Donald Trump, estate planning, full employment, glass ceiling, income inequality, Jeff Bezos, Louis Pasteur, Maui Hawaii, McMansion, means of production, passive income, performance metric, plutocrats, Plutonomy: Buying Luxury, Explaining Global Imbalances, Ronald Reagan, stealth mode startup, Steve Jobs, Thorstein Veblen, trickle-down economics, women in the workforce

For any respondent who wanted it, we provided a coded identification number that enabled the individual to examine the results and reports for personal reasons. In some cases, we even let them examine their own data in comparison to others in the financial elite. For a generation of business men and women who believe in measurement, and who grew up with IQ tests, SAT scores, and other performance metrics, this quantitative capability was an often irresistible source of pleasure. This was particularly true because the individuals had been on a special journey, one their upbringings had left them largely unprepared for, and so understanding the journeys of others was a means for understanding their own trips and themselves. But there is a deeper, more telling reason the wealthy volunteered hours of their time for us.

 

pages: 1,088 words: 228,743

Expected Returns: An Investor's Guide to Harvesting Market Rewards by Antti Ilmanen

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Andrei Shleifer, asset allocation, asset-backed security, availability heuristic, backtesting, balance sheet recession, bank run, banking crisis, barriers to entry, Bernie Madoff, Black Swan, Bretton Woods, buy low sell high, capital asset pricing model, capital controls, Carmen Reinhart, central bank independence, collateralized debt obligation, commodity trading advisor, corporate governance, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, debt deflation, deglobalization, delta neutral, demand response, discounted cash flows, disintermediation, diversification, diversified portfolio, dividend-yielding stocks, equity premium, Eugene Fama: efficient market hypothesis, fiat currency, financial deregulation, financial innovation, financial intermediation, fixed income, Flash crash, framing effect, frictionless, frictionless market, George Akerlof, global reserve currency, Google Earth, high net worth, hindsight bias, Hyman Minsky, implied volatility, income inequality, incomplete markets, index fund, inflation targeting, interest rate swap, invisible hand, Kenneth Rogoff, laissez-faire capitalism, law of one price, Long Term Capital Management, loss aversion, margin call, market bubble, market clearing, market friction, market fundamentalism, market microstructure, mental accounting, merger arbitrage, mittelstand, moral hazard, New Journalism, oil shock, p-value, passive investing, performance metric, Ponzi scheme, prediction markets, price anchoring, price stability, principal–agent problem, private sector deleveraging, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, random walk, reserve currency, Richard Thaler, risk tolerance, risk-adjusted returns, risk/return, riskless arbitrage, Robert Shiller, savings glut, Sharpe ratio, short selling, sovereign wealth fund, statistical arbitrage, statistical model, stochastic volatility, systematic trading, The Great Moderation, The Myth of the Rational Market, too big to fail, transaction costs, tulip mania, value at risk, volatility arbitrage, volatility smile, working-age population, Y2K, yield curve, zero-coupon bond

Most studies conclude that irrational mispricing contributes importantly to observed option market regularities. The rational camp responds that risk stories can explain a surprisingly large part of observed returns without resorting to irrationality—and that various market frictions can make exploiting any remaining opportunities difficult. Specifically, Broadie–Chernov–Johannes (2009) argue that options are often thought to be mispriced because the performance metrics that are used (Sharpe ratios and CAPM alphas) are ill-suited for option analysis, especially over short samples. After documenting the huge challenge for rational models—massively negative average returns for long index puts, losses of 30% per month, or worse, as noted earlier—they proceed to show that standard option-pricing models can largely explain these average returns. OTM puts are especially highly levered positions on the underlying index; during a period of high realized equity premium, OTM puts with large negative betas can be expected to have large negative returns.
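A toy illustration (not an option-pricing model) of why Sharpe ratios mislead for short-option strategies over short samples: a strategy that collects a small premium most months and takes a rare large loss looks spectacular in any window that happens to omit the crash. The premium, loss size, and crash frequency below are invented.

```python
import statistics

# Toy put-selling illustration: a small premium most months, a rare large
# loss. Assumed monthly values, not calibrated to any market.
PREMIUM, CRASH_LOSS = 0.02, -0.45

def annualized_sharpe(returns):
    return statistics.mean(returns) / statistics.pstdev(returns) * 12 ** 0.5

# A lucky 24-month sample containing no crash, with a touch of ordinary
# month-to-month noise so the volatility is not literally zero:
lucky = [PREMIUM + 0.002 * (-1) ** i for i in range(24)]
# A long sample in which crashes appear at roughly their true 3% frequency:
long_run = [CRASH_LOSS if i % 33 == 0 else PREMIUM for i in range(660)]

print(f"crash-free 24-month Sharpe: {annualized_sharpe(lucky):.1f}")
print(f"long-run Sharpe:            {annualized_sharpe(long_run):.2f}")
```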

Operational risks (errors and fraud) are a good example; the SR of Madoff’s track record was hard to beat but it came with huge operational risk. Conclusions The portfolio SR is a good starting point but it needs to be supplemented with other portfolio attributes. All of the desirable attributes discussed above may be worth some SR sacrifice. However, no single risk-adjusted return measure can capture them all, and many of these tradeoffs can only be assessed in a qualitative fashion. Multiple performance metrics are needed, given the multi-dimensional nature of the problem. 28.2.4 Smart risk taking and portfolio construction There now follow some intuitive rules of thumb for smart investing: a recipe for optimal diversification and the “fundamental law of active management”. First, here is a recipe for smart portfolio construction, which sums up mean variance optimization in a nutshell: allocate equal volatility to each asset class (or return source) in a portfolio, unless some assets’ exceptional SRs or diversification abilities justify deviating from equal volatility weightings.
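The equal-volatility recipe is easy to state in code. A minimal sketch, assuming hypothetical asset names and volatility estimates (correlations and any exceptional Sharpe ratios, which the full recipe would also weigh, are ignored here):

```python
# Weight each asset by the inverse of its volatility so each contributes
# the same ex-ante volatility budget. Asset names and volatility figures
# are illustrative placeholders, not recommendations or data.
vols = {"equities": 0.15, "bonds": 0.05, "commodities": 0.20, "trend": 0.10}

inverse = {asset: 1.0 / v for asset, v in vols.items()}
total = sum(inverse.values())
weights = {asset: x / total for asset, x in inverse.items()}

for asset, w in weights.items():
    # Every row shows the same volatility contribution by construction.
    print(f"{asset:12s} weight={w:6.2%}  vol contribution={w * vols[asset]:.2%}")
```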

 

pages: 903 words: 235,753

The Stack: On Software and Sovereignty by Benjamin H. Bratton

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

1960s counterculture, 3D printing, 4chan, Ada Lovelace, additive manufacturing, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, Berlin Wall, bioinformatics, bitcoin, blockchain, Buckminster Fuller, Burning Man, call centre, carbon footprint, carbon-based life, Cass Sunstein, Celebration, Florida, charter city, clean water, cloud computing, connected car, corporate governance, crowdsourcing, cryptocurrency, dark matter, David Graeber, deglobalization, dematerialisation, disintermediation, distributed generation, don't be evil, Douglas Engelbart, Edward Snowden, Elon Musk, en.wikipedia.org, Eratosthenes, ethereum blockchain, facts on the ground, Flash crash, Frank Gehry, Frederick Winslow Taylor, future of work, Georg Cantor, gig economy, global supply chain, Google Earth, Google Glasses, Guggenheim Bilbao, High speed trading, Hyperloop, illegal immigration, industrial robot, information retrieval, intermodal, Internet of things, invisible hand, Jacob Appelbaum, Jaron Lanier, Jony Ive, Julian Assange, Khan Academy, linked data, Mark Zuckerberg, market fundamentalism, Marshall McLuhan, Masdar, McMansion, means of production, megacity, megastructure, Menlo Park, Minecraft, Monroe Doctrine, Network effects, new economy, offshore financial centre, oil shale / tar sands, packet switching, PageRank, pattern recognition, peak oil, performance metric, personalized medicine, Peter Thiel, phenotype, place-making, planetary scale, RAND corporation, recommendation engine, reserve currency, RFID, Sand Hill Road, self-driving car, semantic web, sharing economy, Silicon Valley, Silicon Valley ideology, Slavoj Žižek, smart cities, smart grid, smart meter, social graph, software studies, South China Sea, sovereign wealth fund, special economic zone, spectrum auction, Startup school, statistical arbitrage, Steve Jobs, Steven Levy, Stewart Brand, Stuxnet, Superbowl ad, supply-chain management, supply-chain management software, TaskRabbit, the built environment, The Chicago School, the scientific method, Torches of Freedom, transaction costs, Turing complete, Turing machine, Turing test, universal basic income, urban planning, Vernor Vinge, Washington Consensus, web application, WikiLeaks, working poor, Y Combinator

We see this play out with the absolute User's slide into an abyssal dissolution of the self when confronted with the potential totality of virtualized experiences. In response to the white noise of his infinitely refracted subjectivity, he reflects this entropy by sliding back into perceptual incoherency (or potentially stumbling toward secular hypermaterialism). It's true that the real purpose of QS is not to provide all possible information at once, but to reduce systemic complexity with summary diagrammatic accounts of one's inputs, states, and performance metrics. But adding more and more data sources to the mix and providing greater multivariate fidelity also produces other pathways of dissolution. By tracking external forces (e.g., environmental, microbial, economic) and their role in the formation of the User-subject's state and performance, the boundaries between internal and external systems are perforated and blurred. Those external variables not only act on you; in effect they are you as well, and so the profile reflecting back at the User is both more and less than a single figure (and as we'll see, sometimes those extrinsic forces live inside one's own body).

As discussed in the Interfaces chapter, the images of systemic interrelationality found in GUI and in dynamic visualizations not only diagram how platforms operate; they are the very instruments with which a User interacts with those platforms and with other Users in the first place. At stake for the redesign of the User is not only the subjective (QS) and objective (Exit) reflections of her inputs, states, and performance metrics within local/global and intrinsic/extrinsic variations, but also that the profiles of these traces are the medium through which those interactions are realized. The recursion is not only between scales of action; it is also between event and its mediation. Put differently, the composition with which (and into which) the tangled positions of Users draw their own maps (the sum of the parts that busily sum themselves) is always both more and less whole than the whole that sums their sums!

 

pages: 327 words: 103,336

Everything Is Obvious: *Once You Know the Answer by Duncan J. Watts

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

affirmative action, Albert Einstein, Amazon Mechanical Turk, Black Swan, butterfly effect, Carmen Reinhart, Cass Sunstein, clockwork universe, cognitive dissonance, collapse of Lehman Brothers, complexity theory, correlation does not imply causation, crowdsourcing, death of newspapers, discovery of DNA, East Village, easy for humans, difficult for computers, edge city, en.wikipedia.org, Erik Brynjolfsson, framing effect, Geoffrey West, Santa Fe Institute, happiness index / gross national happiness, high batting average, hindsight bias, illegal immigration, interest rate swap, invention of the printing press, invention of the telescope, invisible hand, Isaac Newton, Jane Jacobs, Jeff Bezos, Joseph Schumpeter, Kenneth Rogoff, lake wobegon effect, Long Term Capital Management, loss aversion, medical malpractice, meta analysis, meta-analysis, Milgram experiment, natural language processing, Netflix Prize, Network effects, oil shock, packet switching, pattern recognition, performance metric, phenotype, planetary scale, prediction markets, pre–internet, RAND corporation, random walk, RFID, school choice, Silicon Valley, statistical model, Steve Ballmer, Steve Jobs, Steve Wozniak, supply-chain management, The Death and Life of Great American Cities, the scientific method, The Wisdom of Crowds, too big to fail, Toyota Production System, ultimatum game, urban planning, Vincenzo Peruggia: Mona Lisa, Watson beat the top human players on Jeopardy!, X Prize

The problem is therefore not that planning of any kind is impossible, any more than prediction of any kind is impossible, but rather that certain kinds of plans can be made reliably and others can’t be, and that planners need to be able to tell the difference. 3. See Helft (2008) for a story about the Yahoo! home page overhaul. 4. See Kohavi et al. (2010) and Tang et al. (2010). 5. See Clifford (2009) for a story about startup companies using quantitative performance metrics to substitute for design instinct. 6. See Alterman (2008) for Peretti’s original description of the Mullet Strategy. See Dholakia and Vianello (2009) for a discussion of how the same approach can work for communities built around brands, and the associated tradeoff between control and insight. 7. See Howe (2008, 2006) for a general discussion of crowdsourcing. See Rice (2010) for examples of recent trends in online journalism. 8.

 

pages: 292 words: 81,699

More Joel on Software by Joel Spolsky

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

barriers to entry, Black Swan, Build a better mousetrap, business process, call centre, Danny Hillis, failed state, Firefox, George Gilder, low cost carrier, Mars Rover, Network effects, Paul Graham, performance metric, place-making, price discrimination, prisoner's dilemma, Ray Oldenburg, Sand Hill Road, Silicon Valley, slashdot, social software, Steve Ballmer, Steve Jobs, Superbowl ad, The Great Good Place, type inference, unpaid internship, wage slave, web application, Y Combinator

Or the tester agrees to report the bug “informally” to the developer before writing it up in the bug tracking system. And now nobody uses the bug tracking system. The bug count goes way down, but the number of bugs stays the same. Developers are clever this way. Whatever you try to measure, they’ll find a way to maximize, and you’ll never quite get what you want. Robert D. Austin, in his book Measuring and Managing Performance in Organizations, says there are two phases when you introduce new performance metrics. At first, you actually get what you want, because nobody has figured out how to cheat. In the second phase, you actually get something worse, as everyone figures out the trick to maximizing the thing that you’re measuring, even at the cost of ruining the company. Worse, Econ 101 managers think that they can somehow avoid this situation just by tweaking the metrics. Dr. Austin’s conclusion is that you just can’t.

 

pages: 309 words: 91,581

The Great Divergence: America's Growing Inequality Crisis and What We Can Do About It by Timothy Noah

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

autonomous vehicles, blue-collar work, Bonfire of the Vanities, Branko Milanovic, call centre, collective bargaining, computer age, corporate governance, Credit Default Swap, David Ricardo: comparative advantage, Deng Xiaoping, Erik Brynjolfsson, feminist movement, Frank Levy and Richard Murnane: The New Division of Labor, Gini coefficient, income inequality, industrial robot, invisible hand, job automation, Joseph Schumpeter, low skilled workers, lump of labour, manufacturing employment, moral hazard, oil shock, pattern recognition, performance metric, positional goods, post-industrial society, postindustrial economy, purchasing power parity, refrigerator car, rent control, Richard Feynman, Ronald Reagan, shareholder value, Silicon Valley, Simon Kuznets, Stephen Hawking, Steve Jobs, The Spirit Level, too big to fail, trickle-down economics, Tyler Cowen: Great Stagnation, union organizing, upwardly mobile, very high income, War on Poverty, We are the 99%, women in the workforce, Works Progress Administration, Yom Kippur War

But the bill exempted performance-based bonuses and stock options, on the theory that these tied chief executives’ compensation to company profitability. Corporate compensation committees responded in three ways. First, “everybody got a raise to $1 million,” Nell Minow, a corporate governance critic, told me.16 Next, corporate compensation committees, which remained bent on showering chief executives indiscriminately with cash, started inventing make-believe performance metrics. For instance, AES Corp., a firm based in Arlington, Virginia, that operates power plants, made it one of chief executive Dennis Bakke’s performance goals to ensure that AES remained a “fun” place to work. (“To some, it’s soft,” the fun-loving Bakke told Businessweek. “To me, it’s a vision of the world.”) Third, and most important, corporations showered top executives with so many stock options that this form of compensation came to account, on average, for the majority of CEO pay.

 

pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World by Peter H. Diamandis, Steven Kotler

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, additive manufacturing, Airbnb, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, cloud computing, crowdsourcing, Daniel Kahneman / Amos Tversky, dematerialisation, deskilling, Elon Musk, en.wikipedia.org, Exxon Valdez, fear of failure, Firefox, Galaxy Zoo, Google Glasses, Google Hangouts, Google X / Alphabet X, gravity well, industrial robot, Internet of things, Jeff Bezos, John Harrison: Longitude, Jono Bacon, Just-in-time delivery, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, loss aversion, Louis Pasteur, Mahatma Gandhi, Mark Zuckerberg, Mars Rover, meta analysis, meta-analysis, microbiome, minimum viable product, move fast and break things, Narrative Science, Netflix Prize, Network effects, Oculus Rift, optical character recognition, packet switching, PageRank, pattern recognition, performance metric, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, Ray Kurzweil, recommendation engine, Richard Feynman, ride hailing / ride sharing, risk tolerance, rolodex, self-driving car, sentiment analysis, shareholder value, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart grid, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Stewart Brand, technoutopianism, telepresence, telepresence robot, Turing test, urban renewal, web application, X Prize, Y Combinator

You’ve probably heard about hackathons—those mysterious tournaments where coders compete to see who can hack together the best piece of software in a weekend. Well, with TopCoder, now you can have over 600,000 developers, designers, and data scientists hacking away to create solutions just for you. In fields like software and algorithm development, where there are many ways to solve a problem, having multiple submissions lets you compare performance metrics and choose the best one. Or take Gigwalk, a crowdsourced information-gathering platform that pays a small denomination to incentivize the crowd (i.e., anyone who has the Gigwalk app) to perform a simple task at a particular place and time. “Crowdsourced platforms are being quickly adopted in the retail and consumer products industry,” says Marcus Shingles, a principal with Deloitte Consulting.

 

pages: 317 words: 100,414

Superforecasting: The Art and Science of Prediction by Philip Tetlock, Dan Gardner

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Affordable Care Act / Obamacare, Any sufficiently advanced technology is indistinguishable from magic, availability heuristic, Black Swan, butterfly effect, cloud computing, cuban missile crisis, Daniel Kahneman / Amos Tversky, desegregation, Edward Lorenz: Chaos theory, forward guidance, Freestyle chess, fundamental attribution error, germ theory of disease, hindsight bias, index fund, Jane Jacobs, Jeff Bezos, Mikhail Gorbachev, Mohammed Bouazizi, Nash equilibrium, Nate Silver, obamacare, pattern recognition, performance metric, place-making, placebo effect, prediction markets, quantitative easing, random walk, randomized controlled trial, Richard Feynman, Richard Thaler, Robert Shiller, Ronald Reagan, Saturday Night Live, Silicon Valley, Skype, statistical model, stem cell, Steve Ballmer, Steve Jobs, Steven Pinker, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, Watson beat the top human players on Jeopardy!

Elisabeth Rosenthal, “The Hype over Hospital Rankings,” New York Times, July 27, 2013. Efforts to identify “supers”—superhospitals or superteachers or super–intelligence analysts—are easy to dismiss for two reasons: (1) excellence is multidimensional and we can only imperfectly capture some dimensions (patient longevity or test results or Brier scores); (2) as soon as we anoint an official performance metric, we create incentives to game the new system by rejecting very sick patients or ejecting troublesome students. But the solution is not to abandon metrics. It is to resist overinterpreting them. 16. Thomas Friedman, “Iraq Without Saddam,” New York Times, September 1, 2002. 17. Thomas Friedman, “Is Vacation Over?,” New York Times, December 23, 2014. 18. Caleb Melby, Laura Marcinek, and Danielle Burger, “Fed Critics Say ’10 Letter Warning Inflation Still Right,” Bloomberg, October 2, 2014, http://www.bloomberg.com/news/articles/2014-10-02/fed-critics-say-10-letter-warning-inflation-still-right. 19.
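For readers who haven't met it, the Brier score name-checked here is simply the mean squared error between probability forecasts and binary outcomes; a quick sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts (0..1) and
    realized binary outcomes (0 or 1). Lower is better; 0.0 is perfect."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, decisive forecaster beats the hedger who always says 50%.
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```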

 

pages: 445 words: 105,255

Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization by K. Eric Drexler

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, additive manufacturing, agricultural Revolution, Bill Joy: nanobots, Brownian motion, carbon footprint, Cass Sunstein, conceptual framework, crowdsourcing, dark matter, double helix, failed state, global supply chain, industrial robot, iterative process, Mars Rover, means of production, Menlo Park, mutually assured destruction, New Journalism, performance metric, reversible computing, Richard Feynman, Silicon Valley, South China Sea, Thomas Malthus, V2 rocket, Vannevar Bush

Participants in the ITRS can safely assume that silicon will rule for years to come, but the QISTR collaboration faced a range of fundamentally different competing approaches: quantum bits represented by the states of (pick one or more) trapped atoms in a vacuum, spin states of atoms embedded in silicon, nuclear spins in solution-phase molecules, or photons in purely photonic systems. These approaches differ radically in scalability and manufacturability as well as in the range of functions that each can implement. The QISTR document must rise to a higher level of abstraction than ITRS. Rather than focusing on performance metrics, it adopts the “DiVincenzo promise criteria” (including scalability, gate universality, decoherence times, and suitable means for input and output) and through these criteria for essential functional capabilities, QISTR then compares diverse approaches and their potential to serve as more than dead-end demos. QISTR shows how a community can explore fields that are rich in alternatives, identifying the technologies that have a genuine potential to serve a role in a functional system, setting others aside as unpromising.

 

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

AI winter, Andy Kessler, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, Baxter: Rethink Robotics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, conceptual framework, dark matter, David Brooks, deliberate practice, deskilling, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, Google Glasses, Hans Lippershey, haute cuisine, income inequality, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Khan Academy, knowledge worker, labor-force participation, loss aversion, Mark Zuckerberg, Narrative Science, natural language processing, Norbert Wiener, nuclear winter, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, transaction costs, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar

The important thing, for the individual learner, is to adopt some framework like this that can bring discipline to the task of focusing on a strength and building it.16 Part of any conscious attempt to build a strength should be a defensible way of measuring progress. We suspect that one reason why “left brain” skills so dominate discussions of human intelligence is simply that they are so easily assessed and compared. The yardsticks we use to measure human achievement—our “performance metrics,” to use business parlance—always push us back to believing that more hard skills training is the answer. Yet that belief constrains us to a narrow track, and the same track we have designed computers to dominate. We are limiting ourselves to running a race we have already determined we cannot win. It might even be that our attempts to have humans keep pace with machines militate against the development of other human strengths.

 

pages: 338 words: 92,465

Reskilling America: Learning to Labor in the Twenty-First Century by Katherine S. Newman, Hella Winston

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

blue-collar work, collective bargaining, deindustrialization, desegregation, factory automation, interchangeable parts, invisible hand, job-hopping, knowledge economy, low skilled workers, performance metric, reshoring, Ronald Reagan, Silicon Valley, two tier labour market, union organizing, upwardly mobile, War on Poverty, Wolfgang Streeck, working poor

“The crossover between the two sides has been excellent.”4 Even though some students need to take the MCAS multiple times before they pass—vocational schools are particularly committed to offering help and remediation for students who fail—only three seniors did not receive diplomas in 2002. Moreover, Massachusetts vocational schools do far better than comprehensive high schools on crucial performance metrics.5 The statewide dropout rate at regular/comprehensive high schools averaged 2.8 percent in 2011 but was only 1.6 percent among the thirty-nine vocational technical schools and averaged 0.9 percent among regional vocational technical schools. (Massachusetts requires every school district to offer students a career vocational technical education option, either by providing it themselves—common among the larger districts—or as part of a regional career vocational technical high school system.)

 

pages: 470 words: 109,589

Apache Solr 3 Enterprise Search Server by Unknown

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bioinformatics, continuous integration, database schema, en.wikipedia.org, fault tolerance, Firefox, full text search, information retrieval, Internet Archive, natural language processing, performance metric, platform as a service, web application

Summary We briefly covered a wide variety of the issues that surround taking a Solr configuration that works in a development environment and getting it ready for the rigors of a production environment. Solr's modular nature and stripped down focus on search allows it to be compatible with a broad variety of deployment platforms. Solr offers a wealth of monitoring options, from log files, to HTTP request logs, to JMX options. Nonetheless, for a really robust solution, you must define what the key performance metrics are that concern you, and then implement automated solutions for tracking them. Now that we have set up our Solr server, we need to take advantage of it to build better applications. In the next chapter, we'll look at how to easily integrate Solr search through various client libraries. Chapter 9. Integrating Solr As the saying goes, if a tree falls in the woods and no one hears it, did it make a sound?
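As a sketch of what such automated tracking might look like: the snippet below polls the statistics that Solr's MBean request handler exposes. The URL, handler name, and alert threshold are all placeholders to adapt to your deployment, and it assumes the stock /admin/mbeans registration.

```python
import json
import time
import urllib.request

# Placeholder values: adjust URL, handler name, and threshold to your setup.
STATS_URL = "http://localhost:8983/solr/admin/mbeans?stats=true&wt=json"

def fetch_mbeans():
    """Return {category: payload} from Solr's MBean stats handler."""
    with urllib.request.urlopen(STATS_URL) as resp:
        data = json.load(resp)
    # "solr-mbeans" arrives as a flat list alternating category name
    # and payload object, so pair the entries back up.
    items = data["solr-mbeans"]
    return dict(zip(items[0::2], items[1::2]))

while True:
    try:
        handlers = fetch_mbeans().get("QUERYHANDLER", {})
        # Key is whatever name the handler carries in solrconfig.xml;
        # "search" matches the Solr 3 example configuration.
        stats = handlers.get("search", {}).get("stats", {})
        avg = stats.get("avgTimePerRequest")
        print(time.strftime("%H:%M:%S"), "avgTimePerRequest:", avg)
        if avg is not None and float(avg) > 250.0:  # placeholder threshold (ms)
            print("ALERT: search handler slower than expected")
    except Exception as exc:  # keep the poller alive through transient errors
        print("stats fetch failed:", exc)
    time.sleep(60)
```

In practice you would push these numbers into whatever monitoring system you already run rather than printing them, but the shape of the solution — poll, extract the metrics you decided matter, alert on thresholds — stays the same.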

 

pages: 476 words: 132,042

What Technology Wants by Kevin Kelly

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, Alfred Russel Wallace, Buckminster Fuller, c2.com, carbon-based life, Cass Sunstein, charter city, Clayton Christensen, cloud computing, computer vision, Danny Hillis, dematerialisation, demographic transition, double entry bookkeeping, en.wikipedia.org, Exxon Valdez, George Gilder, gravity well, hive mind, Howard Rheingold, interchangeable parts, invention of air conditioning, invention of writing, Isaac Newton, Jaron Lanier, John Conway, John von Neumann, Kevin Kelly, knowledge economy, Lao Tzu, life extension, Louis Daguerre, Marshall McLuhan, megacity, meta analysis, meta-analysis, new economy, out of africa, performance metric, personalized medicine, phenotype, Picturephone, planetary scale, RAND corporation, random walk, Ray Kurzweil, recommendation engine, refrigerator car, Richard Florida, Silicon Valley, silicon-based life, Skype, speech recognition, Stephen Hawking, Steve Jobs, Stewart Brand, Ted Kaczynski, the built environment, the scientific method, Thomas Malthus, Vernor Vinge, Whole Earth Catalog, Y2K

As one exponential boom is subsumed into the next, an established technology relays its momentum to the next paradigm and carries forward an unrelenting growth. The exact unit of what is being measured can also morph from one subcurve to the next. We may start out counting pixel size, then shift to pixel density, then to pixel speed. The final performance trait may not be evident in the initial technologies and reveal itself only over the long term, perhaps as a macrotrend that continues indefinitely. In the case of computers, as the performance metric of chips is constantly recalibrated from one technological stage to the next, Moore’s Law—redefined—will never end. Compound S Curves. On this idealized chart, technological performance is measured on the vertical axis and time or engineering effort captured on the horizontal. A series of sub-S curves create an emergent larger-scale invariant slope. The slow demise of the more-transistors-per-chip trend is inevitable.
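Kelly's compound S-curve is easy to model numerically: stack hypothetical logistic subcurves, each saturating an order of magnitude above the last, and the frontier they trace approximates one long exponential. An illustrative toy model, not data:

```python
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """One technology generation: an S-curve saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Five hypothetical generations; each saturates ten times higher than the
# last (count transistors, then pixel density, then pixel speed...).
generations = [(10 * g + 5, 10 ** (g + 1)) for g in range(5)]

for t in range(0, 50, 5):
    frontier = max(logistic(t, mid, cap) for mid, cap in generations)
    print(f"t={t:2d}  best performance = {frontier:10.1f}")
# Each subcurve flattens, but the frontier (the max across generations)
# keeps climbing roughly a decade per generation: an emergent exponential.
```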

 

pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

Mergers Acquis 1979. 74–82. Wuthrich B, Permunetilleke D, Leung S, Cho V, Zhang J, Lam W. Daily prediction of major stock indices from textual www data. Proceedings of the Fourth International conference on knowledge discovery and data mining, New York, August 27–31, 1998. New York: AAAI Press; 1998. p 364–368. Youngblood A, Collins T. Addressing balanced scorecard trade-off issues between performance metrics using multi-attribute utility theory. Eng Manag J 2003;15:11–17. Zavgren C. The prediction of corporate failure: the state of the art. J Account Lit 1983;2:1–37. Chapter Four Impact of Correlation Fluctuations on Securitized Structures ERIC HILLEBRAND Department of Economics, Louisiana State University, Baton Rouge, LA AMBAR N. SENGUPTA Department of Mathematics, Louisiana State University, Baton Rouge, LA JUNYUE XU Department of Economics, Louisiana State University, Baton Rouge, LA 4.1 Introduction The financial crisis precipitated by the subprime mortgage fiasco has focused attention on the use of Gaussian copula methods in pricing and risk managing CDOs involving subprime mortgages.
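For context, the one-factor Gaussian copula at the center of that debate couples obligors' defaults through a single shared factor. A minimal simulation sketch (default probability, pool size, and correlations are all hypothetical) shows why the correlation input dominates tail losses:

```python
import random
from statistics import NormalDist, mean

random.seed(1)

# One-factor Gaussian copula: obligor i defaults when
#   X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i
# falls below the threshold implied by its marginal default probability.
# M is the common factor, Z_i idiosyncratic noise. All numbers hypothetical.
P_DEFAULT = 0.05
N_NAMES = 100
THRESHOLD = NormalDist().inv_cdf(P_DEFAULT)

def pool_loss_fractions(rho, n_sims=2000):
    losses = []
    for _ in range(n_sims):
        m = random.gauss(0.0, 1.0)
        defaults = sum(
            rho ** 0.5 * m + (1.0 - rho) ** 0.5 * random.gauss(0.0, 1.0) < THRESHOLD
            for _ in range(N_NAMES)
        )
        losses.append(defaults / N_NAMES)
    return losses

for rho in (0.0, 0.3, 0.9):
    losses = sorted(pool_loss_fractions(rho))
    print(f"rho={rho:.1f}  mean loss={mean(losses):.3f}  "
          f"99th pct loss={losses[int(0.99 * len(losses))]:.2f}")
# Correlation barely moves the mean loss but fattens the tail: exactly the
# sensitivity that mattered for rating and pricing senior CDO tranches.
```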

 

pages: 429 words: 114,726

The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise by Nathan L. Ensmenger

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

barriers to entry, business process, Claude Shannon: information theory, computer age, deskilling, Firefox, Frederick Winslow Taylor, future of work, Grace Hopper, informal economy, information retrieval, interchangeable parts, Isaac Newton, Jacquard loom, job satisfaction, John von Neumann, knowledge worker, loose coupling, new economy, Norbert Wiener, pattern recognition, performance metric, post-industrial society, Productivity paradox, RAND corporation, Robert Gordon, sorting algorithm, Steve Jobs, Steven Levy, the market place, Thomas Kuhn: the structure of scientific revolutions, Thorstein Veblen, Turing machine, Von Neumann architecture, Y2K

One guidebook from 1969 for managers captured the essence of this adversarial approach to programmer management by describing the successful computer manager as the “one whose grasp of the job is reflected in simple work units that are in the hand[s] of simple programmers; not one who, with control lost, is held in contempt by clever programmers dangerously maintaining control on his behalf.”32 An uncritical reading of this and other similar management perspectives on the process of software development, with their confident claims about the value and efficacy of various performance metrics, development methodologies, and programming languages, might suggest that Kraft and Greenbaum are correct in their assessments. In fact, many of these methodologies do indeed represent “elaborate efforts” that “are being made to develop ways of gradually eliminating programmers, or at least reduce their average skill levels, required training, experience, and so on.”33 Their authors would be the first to admit it.

 

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John von Neumann, knowledge worker, Menlo Park, meta analysis, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey

It is difficult even to make a rough estimate—for aught we know, the efficiency savings could be five orders of magnitude, or ten, or twenty-five.15 * * * Figure 3 Supercomputer performance. In a narrow sense, “Moore’s law” refers to the observation that the number of transistors on integrated circuits has for several decades doubled approximately every two years. However, the term is often used to refer to the more general observation that many performance metrics in computing technology have followed a similarly fast exponential trend. Here we plot peak speed of the world’s fastest supercomputer as a function of time (on a logarithmic vertical scale). In recent years, growth in the serial speed of processors has stagnated, but increased use of parallelization has enabled the total number of computations performed to remain on the trend line.16 There is a further complication with these kinds of evolutionary considerations, one that makes it hard to derive from them even a very loose upper bound on the difficulty of evolving intelligence.

 

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K

Evaluating intelligence: A computational semiotics perspective. In IEEE International conference on systems, man and cybernetics, pages 2080–2085, Nashville, Tennessee, USA, 2000. [30] J. Horst. A native intelligence metric for artificial systems. In Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD, USA, 2002. [31] D. Lenat and E. Feigenbaum. On the thresholds of knowledge. Artificial Intelligence, 47:185–250, 1991. [32] H. Masum, S. Christensen, and F. Oppacher. The Turing ratio: Metrics for open-ended tasks. In GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, pages 973–980, New York, 2002.

 

pages: 489 words: 148,885

Accelerando by Stross, Charles

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

call centre, carbon-based life, cellular automata, cognitive dissonance, Conway's Game of Life, dark matter, dumpster diving, Extropian, finite state, Flynn Effect, glass ceiling, gravity well, John von Neumann, knapsack problem, Kuiper Belt, Magellanic Cloud, mandelbrot fractal, market bubble, means of production, packet switching, performance metric, phenotype, planetary scale, Pluto: dwarf planet, reversible computing, Richard Stallman, SETI@home, Silicon Valley, Singularitarianism, slashdot, South China Sea, stem cell, technological singularity, telepresence, The Chicago School, theory of mind, Turing complete, Turing machine, Turing test, upwardly mobile, Vernor Vinge, Von Neumann architecture, web of trust, Y2K

He laughs, briefly. "I used to have an idea a second. Now it's maybe one a year. I'm just a melancholy old birdbrain, me." "Yes, but you know the old saying? The fox has many ideas – the hedgehog has only one, but it's a big idea." "So tell me, what is my big idea?" Manfred leans forward, one elbow on the table, one eye focused on inner space as a hot-burning thread of consciousness barks psephological performance metrics at him, analysing the game ahead. "Where do you think I'm going?" "I think –" Annette breaks off suddenly, staring past his shoulder. Privacy slips, and for a frozen moment Manfred glances round in mild horror and sees thirty or forty other guests in the crowded garden, elbows rubbing, voices raised above the background chatter: "Gianni!" She beams widely as she stands up. "What a surprise!

 

pages: 496 words: 154,363

I'm Feeling Lucky: The Confessions of Google Employee Number 59 by Douglas Edwards

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, AltaVista, Any sufficiently advanced technology is indistinguishable from magic, barriers to entry, book scanning, Build a better mousetrap, Burning Man, business intelligence, call centre, crowdsourcing, don't be evil, Elon Musk, fault tolerance, Googley, gravity well, invisible hand, Jeff Bezos, job-hopping, Menlo Park, microcredit, music of the spheres, Network effects, P = NP, PageRank, performance metric, pets.com, Ralph Nader, risk tolerance, second-price auction, side project, Silicon Valley, Silicon Valley startup, slashdot, stem cell, Superbowl ad, Y2K

It would be indiscreet for me to go into the details of people's private lives beyond what the participants have acknowledged publicly—and it would also be largely irrelevant, since office relationships had little effect on the course of the company. Usually, anyway. I did detect the tidal force of one pairing tugging at my ability to get my job done. Larry and Sergey's insistence on seeing performance metrics for marketing redoubled with the addition of our ad buy on Yahoo. They began a drumbeat of demands for better measurement of our customer-acquisition techniques. What about the promotional text on our homepage? Which messages converted the most newbies to regular users? Testimonials? Promises? Comparisons? How many ads did they click? How many searches did they do? The only way to answer these questions was to generate the homepage dynamically—essentially to implement code that would give us the ability to deliver variant versions of the homepage to users who came to our site.

 

pages: 478 words: 126,416

Other People's Money: Masters of the Universe or Servants of the People? by John Kay

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Affordable Care Act / Obamacare, asset-backed security, bank run, banking crisis, Basel III, Bernie Madoff, Big bang: deregulation of the City of London, bitcoin, Black Swan, Bonfire of the Vanities, bonus culture, Bretton Woods, call centre, capital asset pricing model, Capital in the Twenty-First Century by Thomas Piketty, cognitive dissonance, corporate governance, Credit Default Swap, cross-subsidies, dematerialisation, diversification, diversified portfolio, Edward Lloyd's coffeehouse, Elon Musk, Eugene Fama: efficient market hypothesis, eurozone crisis, financial innovation, financial intermediation, fixed income, Flash crash, forward guidance, Fractional reserve banking, full employment, George Akerlof, German hyperinflation, Goldman Sachs: Vampire Squid, Growth in a Time of Debt, income inequality, index fund, inflation targeting, interest rate derivative, interest rate swap, invention of the wheel, Irish property bubble, Isaac Newton, London Whale, Long Term Capital Management, loose coupling, low cost carrier, M-Pesa, market design, millennium bug, mittelstand, moral hazard, mortgage debt, new economy, Nick Leeson, Northern Rock, obamacare, Occupy movement, offshore financial centre, oil shock, passive investing, peer-to-peer lending, performance metric, Peter Thiel, Piper Alpha, Ponzi scheme, price mechanism, purchasing power parity, quantitative easing, quantitative trading / quantitative finance, railway mania, Ralph Waldo Emerson, random walk, regulatory arbitrage, Renaissance Technologies, rent control, Richard Feynman, risk tolerance, road to serfdom, Robert Shiller, Ronald Reagan, Schrödinger's Cat, shareholder value, Silicon Valley, Simon Kuznets, South Sea Bubble, sovereign wealth fund, Spread Networks laid a new fibre optics cable between New York and Chicago, Steve Jobs, Steve Wozniak, The Great Moderation, The Market for Lemons, the market place, The Myth of the Rational Market, the payments system, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, Tobin tax, too big to fail, transaction costs, tulip mania, Upton Sinclair, Vanguard fund, Washington Consensus, We are the 99%, Yom Kippur War

Even as the thinly capitalised Deutsche Bank was benefiting from state guarantees of its liabilities, it was buying back its own shares to reduce its capital base. And whatever return on equity was claimed by the financial officers of Deutsche Bank, the shareholder returns told a different, and more enlightening, story: the average annual total return on its shares (in US dollars with dividends re-invested) over the period May 2002 to May 2012 (Ackermann’s tenure as chief executive of the bank) was around minus 2 per cent. RoE is an inappropriate performance metric for any company, but especially for a bank, and it is bizarre that its use should have been championed by people who profess particular expertise in financial and risk management. Banks still proclaim return on equity targets: less ambitious, but nevertheless fanciful. In recent discussions of the implications of imposing more extensive capital requirements on banks, a figure of 15 per cent has been proposed and endorsed as a measure of the cost of equity capital to conglomerate banks.28 If these companies were really likely to earn 15 per cent rates of return for the benefit of their shareholders, there would be long queues of investors seeking these attractive returns.

 

pages: 497 words: 130,817

Pedigree: How Elite Students Get Elite Jobs by Lauren A. Rivera

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

affirmative action, availability heuristic, barriers to entry, Donald Trump, fundamental attribution error, glass ceiling, income inequality, job satisfaction, knowledge economy, meta analysis, meta-analysis, new economy, performance metric, profit maximization, profit motive, school choice, Silicon Valley, Silicon Valley startup, The Wisdom of Crowds, unpaid internship, women in the workforce, young professional

Although cultural similarity can facilitate trust and communication, it often does so at the expense of group effectiveness and high-quality team decision making.39 Furthermore, the emphasis on super-elite schools and the lack of systematic structures in place to reduce the use of gender and race stereotypes in candidate evaluation push qualified women and minorities out of the pool in favor of males and whites. Such patterns could adversely affect organizational performance not only because of the relationship between demographic diversity and higher-quality decision making but also because gender and racial diversity have become key performance metrics that clients and future job candidates use to evaluate firm quality and status. Likewise, the subjective nature of the hiring process can leave employers open to costly gender and racial discrimination lawsuits. EPS firms have faced such suits in the past and continue to face them in the present. Finally, although screening on socioeconomic status may enhance a firm’s status and facilitate client comfort, it excludes individuals who have critical skills relevant for successful job performance.

 

pages: 382 words: 120,064

Bank 3.0: Why Banking Is No Longer Somewhere You Go but Something You Do by Brett King

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, additive manufacturing, Albert Einstein, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, asset-backed security, augmented reality, barriers to entry, bitcoin, bounce rate, business intelligence, business process, business process outsourcing, call centre, capital controls, citizen journalism, Clayton Christensen, cloud computing, credit crunch, crowdsourcing, disintermediation, en.wikipedia.org, George Gilder, Google Glasses, high net worth, I think there is a world market for maybe five computers, Infrastructure as a Service, invention of the printing press, Jeff Bezos, jimmy wales, London Interbank Offered Rate, M-Pesa, Mark Zuckerberg, mass affluent, microcredit, mobile money, more computing power than Apollo, Northern Rock, Occupy movement, optical character recognition, performance metric, platform as a service, QWERTY keyboard, Ray Kurzweil, recommendation engine, RFID, risk tolerance, self-driving car, Skype, speech recognition, stem cell, telepresence, Tim Cook: Apple, transaction costs, underbanked, web application

There are, however, two sides of Big Data that are consistently discussed in the industry as having strong business benefit. The first is the ability to make better trading decisions, and the second, the ability to connect with customers in the retail environment. In a trading environment, the financial benefits of Big Data appear extremely compelling. The ability, for example, to understand trading cost analytics, capacity of a trade, performance metrics of traders, etc. could be massively profitable to a trading business. How do you create alpha opportunities to outperform, based on that data? The ability to create algorithms that forecast prices in the near term and then make trading decisions accordingly is what will likely drive the profits of banking and trading firms in the near term. Speed of execution is, of course, another key platform capability to leverage this learning and has spawned a raft of low-latency platform investments designed to capture the value of these so-called “alpha” data points.

 

pages: 413 words: 117,782

What Happened to Goldman Sachs: An Insider's Story of Organizational Drift and Its Unintended Consequences by Steven G. Mandis

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

algorithmic trading, Berlin Wall, bonus culture, BRICs, business process, collapse of Lehman Brothers, collateralized debt obligation, complexity theory, corporate governance, Credit Default Swap, credit default swaps / collateralized debt obligations, crony capitalism, disintermediation, diversification, Emanuel Derman, financial innovation, fixed income, friendly fire, Goldman Sachs: Vampire Squid, high net worth, housing crisis, London Whale, Long Term Capital Management, merger arbitrage, new economy, passive investing, performance metric, risk tolerance, Ronald Reagan, Saturday Night Live, shareholder value, short selling, sovereign wealth fund, The Nature of the Firm, too big to fail, value at risk

Although my new bosses were smart, sophisticated, and supportive, and as demanding as my investment banking bosses, there was an intense focus on measuring relatively short-term results because they were measurable. Our performance as investors was marked to market every day, meaning that the value of the trades we made was calculated every day, so there was total transparency about how much money we’d made or lost for the firm each and every day. This isn’t done in investment banking, although each year new performance metrics were being added by the time I left for FICC. Typically in banking, relationships take a long time to develop and pay off. A bad day in banking may mean that, after years of meetings and presentations performed for free, a client didn’t select you to execute a transaction. You could offer excuses: “The other bank offered to loan them money,” “They were willing to do it much cheaper,” and so on.

 

pages: 518 words: 147,036

The Fissured Workplace by David Weil

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

accounting loophole / creative accounting, affirmative action, Affordable Care Act / Obamacare, banking crisis, barriers to entry, business process, call centre, Carmen Reinhart, Cass Sunstein, Clayton Christensen, clean water, collective bargaining, corporate governance, Daniel Kahneman / Amos Tversky, David Ricardo: comparative advantage, declining real wages, employer provided health coverage, Frank Levy and Richard Murnane: The New Division of Labor, George Akerlof, global supply chain, global value chain, hiring and firing, income inequality, intermodal, inventory management, Jane Jacobs, Kenneth Rogoff, law of one price, loss aversion, low skilled workers, minimum wage unemployment, moral hazard, Network effects, new economy, occupational segregation, performance metric, pre–internet, price discrimination, principal–agent problem, Rana Plaza, Richard Florida, Richard Thaler, Ronald Coase, shareholder value, Silicon Valley, statistical model, Steve Jobs, supply-chain management, The Death and Life of Great American Cities, The Nature of the Firm, transaction costs, ultimatum game, union organizing, women in the workforce, Y2K, yield management

It also makes clear that the relationship between the two organizations is a principal/vendor one, where “PWV will, at all times, remain the sole and exclusive … employer of any personnel utilized in providing the Services and the Principal of any subcontractor it may elect to utilize.”10 This and other provisions regarding indemnification attempt to establish market-relation distance between the parties. However, other features of the agreement imply a fuzzier boundary between the responsibilities of the two companies. Section 2 describes in considerable detail the standards to which Schneider holds PWV and the mechanisms it will use to monitor compliance with them. Section 2.06, for example, describes a variety of audit-based performance metrics that PWV will periodically provide to Schneider (at no cost to the latter) regarding average number of cases loaded per hour; number of trailers loaded per week; trailer loading accuracy (a critical dimension for Walmart); and average cubic meters packed in trailers per week. These measures serve as the basis of compensation and for ongoing evaluation of PWV’s performance as a contractor.

 

pages: 515 words: 126,820

Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World by Don Tapscott, Alex Tapscott

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Airbnb, altcoin, asset-backed security, autonomous vehicles, barriers to entry, bitcoin, blockchain, Bretton Woods, business process, Capital in the Twenty-First Century by Thomas Piketty, carbon footprint, clean water, cloud computing, cognitive dissonance, corporate governance, corporate social responsibility, Credit Default Swap, crowdsourcing, cryptocurrency, disintermediation, distributed ledger, Donald Trump, double entry bookkeeping, Edward Snowden, Elon Musk, Erik Brynjolfsson, ethereum blockchain, failed state, fiat currency, financial innovation, Firefox, first square of the chessboard, first square of the chessboard / second half of the chessboard, future of work, Galaxy Zoo, George Gilder, glass ceiling, Google bus, Hernando de Soto, income inequality, informal economy, interest rate swap, Internet of things, Jeff Bezos, jimmy wales, Kickstarter, knowledge worker, Kodak vs Instagram, Lean Startup, litecoin, Lyft, M-Pesa, Mark Zuckerberg, Marshall McLuhan, means of production, microcredit, mobile money, Network effects, new economy, Oculus Rift, pattern recognition, peer-to-peer lending, performance metric, Peter Thiel, planetary scale, Ponzi scheme, prediction markets, price mechanism, Productivity paradox, quantitative easing, ransomware, Ray Kurzweil, renewable energy credits, rent-seeking, ride hailing / ride sharing, Ronald Coase, Ronald Reagan, Satoshi Nakamoto, Second Machine Age, seigniorage, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, smart grid, social graph, social software, Stephen Hawking, Steve Jobs, Steve Wozniak, Stewart Brand, supply-chain management, TaskRabbit, The Fortune at the Bottom of the Pyramid, The Nature of the Firm, The Wisdom of Crowds, transaction costs, Turing complete, Turing test, Uber and Lyft, unbanked and underbanked, underbanked, unorthodox policies, X Prize, Y2K, Zipcar

When they do the job as specified, they are instantly paid—perhaps not biweekly but daily, hourly, or in microseconds. As the entity wouldn’t necessarily have an anthropomorphic body, employees might not even know that algorithms are managing them. But they would know the rules and norms for good behavior. Given that the smart contract could encode the collective knowledge of management science and that their assignments and performance metrics would be transparent, people could love to work. Customers would provide feedback that the enterprise would apply dispassionately and instantly to correct course. Shareholders would receive dividends, perhaps frequently, as real-time accounting would obviate the need for year-end reports. The organization would perform all these activities under the guidance and incorruptible business rules that are as transparent as the open source software that its founders used to set it in motion.

 

pages: 461 words: 128,421

The Myth of the Rational Market: A History of Risk, Reward, and Delusion on Wall Street by Justin Fox

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, Andrei Shleifer, asset allocation, asset-backed security, bank run, Benoit Mandelbrot, Black-Scholes formula, Bretton Woods, Brownian motion, capital asset pricing model, card file, Cass Sunstein, collateralized debt obligation, complexity theory, corporate governance, Credit Default Swap, credit default swaps / collateralized debt obligations, Daniel Kahneman / Amos Tversky, David Ricardo: comparative advantage, discovery of the americas, diversification, diversified portfolio, Edward Glaeser, endowment effect, Eugene Fama: efficient market hypothesis, experimental economics, financial innovation, Financial Instability Hypothesis, floating exchange rates, George Akerlof, Henri Poincaré, Hyman Minsky, implied volatility, impulse control, index arbitrage, index card, index fund, invisible hand, Isaac Newton, John Nash: game theory, John von Neumann, joint-stock company, Joseph Schumpeter, libertarian paternalism, linear programming, Long Term Capital Management, Louis Bachelier, mandelbrot fractal, market bubble, market design, New Journalism, Nikolai Kondratiev, Paul Lévy, pension reform, performance metric, Ponzi scheme, prediction markets, pushing on a string, quantitative trading / quantitative finance, Ralph Nader, RAND corporation, random walk, Richard Thaler, risk/return, road to serfdom, Robert Shiller, rolodex, Ronald Reagan, shareholder value, Sharpe ratio, short selling, side project, Silicon Valley, South Sea Bubble, statistical model, The Chicago School, The Myth of the Rational Market, The Predators' Ball, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, Thomas Kuhn: the structure of scientific revolutions, Thomas L Friedman, Thorstein Veblen, Tobin tax, transaction costs, tulip mania, value at risk, Vanguard fund, volatility smile, Yogi Berra

Gerd Gigerenzer, Zeno Swijtink, Theodore Porter, Lorraine Daston, John Beatty, Lorenz Krüger, The Empire of Chance: How Probability Changed Science and Everyday Life (Cambridge: Cambridge University Press, 1989), 3–4. 23. A crucial intermediate step between Markowitz and Treynor was James Tobin, “Liquidity Preference as Behavior Towards Risk,” Review of Economic Studies 25, no. 1 (1958): 65–86. 24. Jack L. Treynor, “Towards a Theory of Market Value of Risky Assets,” in Asset Pricing and Portfolio Performance: Models, Strategy and Performance Metrics, Robert A. Korajczyk, ed. (London: Risk Books, 1999). 25. William F. Sharpe, “A Simplified Model for Portfolio Analysis,” Management Science (Jan. 1963): 281. 26. William F. Sharpe, “Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk,” Journal of Finance (Sept. 1964): 425–42. 27. John Lintner, “The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets,” Review of Economics and Statistics (Feb. 1965): 13–37.

 

pages: 496 words: 174,084

Masterminds of Programming: Conversations With the Creators of Major Programming Languages by Federico Biancuzzi, Shane Warden

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

business intelligence, business process, cellular automata, cloud computing, complexity theory, conceptual framework, continuous integration, data acquisition, domain-specific language, Douglas Hofstadter, Fellow of the Royal Society, finite state, Firefox, follow your passion, Frank Gehry, general-purpose programming language, HyperCard, information retrieval, iterative process, John von Neumann, linear programming, loose coupling, Mars Rover, millennium bug, NP-complete, Paul Graham, performance metric, QWERTY keyboard, RAND corporation, randomized controlled trial, Renaissance Technologies, Silicon Valley, slashdot, software as a service, software patent, sorting algorithm, Steve Jobs, traveling salesman, Turing complete, type inference, Valgrind, Von Neumann architecture, web application

Lots of discussions go on between individuals or between groups of the form “I couldn’t do this work because you didn’t give me the requirements yet,” or “We need to have a group of people that goes out and gathers the requirements for this new system.” The term is simply too imprecise. You need to have more precise terms as an alternative. On a big project that I’ve been involved in, we imposed a requirements tax. If anybody uses the word “requirements” standalone, they have to add $2.00 to the entertainment fund. If they want to talk about use cases, story cards, performance metrics, business cases, or business process models, those are all acceptable terms. They don’t incur a tax, because now if you say, “I need to have the use cases, or the functional specification, or a mockup of the application that needs to be developed,” that’s a precise request. I see projects getting into trouble when they don’t get that part right. Writing the code doesn’t seem like the hard part anymore.

 

pages: 1,085 words: 219,144

Solr in Action by Trey Grainger, Timothy Potter

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

business intelligence, cloud computing, conceptual framework, crowdsourcing, data acquisition, en.wikipedia.org, failed state, fault tolerance, finite state, full text search, glass ceiling, information retrieval, natural language processing, performance metric, premature optimization, recommendation engine, web application

If you’re not using the default Jetty configuration (java -jar start.jar), you’ll need to configure your Java servlet container or bootstrap settings separately to ensure that these extra JVM parameters are enabled. Most modern application performance monitoring tools can read JMX beans and provide long-term collection and graphing of metrics, often along with monitoring and alerting when the numbers deviate significantly from performance thresholds you set. In addition, several application performance monitoring tools—including cloud-based ones—now exist with direct support for and understanding of Solr’s internals. A simple web search for “Solr application performance monitoring” will turn up a long list of companies interested in helping you further monitor the performance of your Solr cluster. 12.9.5. Solr logs As with most applications, logs provide the richest source of information about the state of your cluster at any time.
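To make the JMX discussion concrete, below is a minimal Java sketch that connects to a Solr JVM over remote JMX and lists its registered beans. It assumes, purely for illustration, that Solr was started with the standard remote-JMX system properties on a hypothetical port 9999 with authentication and SSL disabled; the exact MBean names Solr registers vary by version, so the sketch just enumerates whatever appears under a solr domain.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SolrJmxProbe {
    public static void main(String[] args) throws Exception {
        // Assumes the Solr JVM was launched with remote JMX enabled, e.g.:
        //   java -Dcom.sun.management.jmxremote \
        //        -Dcom.sun.management.jmxremote.port=9999 \
        //        -Dcom.sun.management.jmxremote.authenticate=false \
        //        -Dcom.sun.management.jmxremote.ssl=false \
        //        -jar start.jar
        // Host and port here are hypothetical choices for this sketch.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            // List every MBean whose domain starts with "solr"; the object
            // names actually registered differ across Solr versions.
            for (ObjectName name : mbeans.queryNames(new ObjectName("solr*:*"), null)) {
                System.out.println(name);
            }
        }
    }
}
```

Pointing JConsole or VisualVM at the same service URL shows the identical beans interactively, which is usually the quicker way to explore before wiring metrics into a monitoring tool.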

 

Data Mining: Concepts and Techniques by Jiawei Han, Micheline Kamber, Jian Pei

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bioinformatics, business intelligence, business process, Claude Shannon: information theory, cloud computing, computer vision, correlation coefficient, cyber-physical system, database schema, discrete time, distributed generation, finite state, information retrieval, iterative process, knowledge worker, linked data, natural language processing, Netflix Prize, Occam's razor, pattern recognition, performance metric, phenotype, random walk, recommendation engine, RFID, semantic web, sentiment analysis, speech recognition, statistical model, stochastic process, supply-chain management, text mining, thinkpad, web application

■ Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions. Such a system requires concurrency control and recovery mechanisms. However, accesses to OLAP systems are mostly read-only operations (because most data warehouses store historic rather than up-to-date information), although many could be complex queries. Other features that distinguish between OLTP and OLAP systems include database size, frequency of operations, and performance metrics. These are summarized in Table 4.1.

Table 4.1 Comparison of OLTP and OLAP Systems
Note: Table is partially based on Chaudhuri and Dayal [CD97].

| Feature | OLTP | OLAP |
| --- | --- | --- |
| Characteristic | operational processing | informational processing |
| Orientation | transaction | analysis |
| User | clerk, DBA, database professional | knowledge worker (e.g., manager, executive, analyst) |
| Function | day-to-day operations | long-term informational requirements, decision support |
| DB design | ER-based, application-oriented | star/snowflake, subject-oriented |
| Data | current, guaranteed up-to-date | historic, accuracy maintained over time |
| Summarization | primitive, highly detailed | summarized, consolidated |
| View | detailed, flat relational | summarized, multidimensional |
| Unit of work | short, simple transaction | complex query |
| Access | read/write | mostly read |
| Focus | data in | information out |
| Operations | index/hash on primary key | lots of scans |
| Number of records accessed | tens | millions |
| Number of users | thousands | hundreds |
| DB size | GB to high-order GB | ≥ TB |
| Priority | high performance, high availability | high flexibility, end-user autonomy |
| Metric | transaction throughput | query throughput, response time |
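The difference in access patterns is easiest to see at the query level. The following sketch contrasts a typical OLTP unit of work with a typical OLAP query over the same hypothetical orders table; the JDBC URL, credentials, schema, and PostgreSQL-style SQL are all invented for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AccessPatternDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and schema, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/shop", "user", "password")) {

            // OLTP pattern: a short, atomic read/write transaction touching a
            // handful of rows through the primary-key index.
            conn.setAutoCommit(false);
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE orders SET status = 'SHIPPED' WHERE order_id = ?")) {
                update.setLong(1, 42L);
                update.executeUpdate();
                conn.commit();
            }

            // OLAP pattern: a read-only aggregation that scans a large volume
            // of historic rows and summarizes them for analysis.
            try (PreparedStatement query = conn.prepareStatement(
                    "SELECT region, date_trunc('month', order_date) AS month, "
                  + "SUM(total) AS revenue FROM orders GROUP BY region, month");
                 ResultSet rs = query.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s  %s  %s%n",
                            rs.getString("region"), rs.getString("month"),
                            rs.getBigDecimal("revenue"));
                }
            }
        }
    }
}
```

In practice the two workloads would also run against differently designed schemas (ER-based versus star/snowflake), which is one reason they are usually separated into different systems.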