17 results

**The Golden Ticket: P, NP, and the Search for the Impossible** by Lance Fortnow

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Andrew Wiles, Claude Shannon: information theory, cloud computing, complexity theory, Donald Knuth, Erdős number, four colour theorem, Gerolamo Cardano, Isaac Newton, John von Neumann, linear programming, new economy, NP-complete, Occam's razor, P = NP, Paul Erdős, Richard Feynman, Rubik’s Cube, smart grid, Stephen Hawking, traveling salesman, Turing machine, Turing test, Watson beat the top human players on Jeopardy!, William of Occam

Beyond Karp

After Karp’s paper, an industry in computer science arose showing the NP-completeness of a large variety of search problems. Over the next several years, many professors and grad students took a number of known search problems (as well as some new ones) and showed them to be NP-complete. A classic 1979 book* lists over three hundred major NP-complete problems. NP-complete problems continually arise in computer science, physics, biology, economics, and many other areas that reach that pinnacle of hardness. A Google Scholar search on “NP-Complete” returns over 138,000 scientific articles from 1972 to 2011, nearly 10,000 in 2011 alone. We can’t list all of them here, but let me give you a flavor of some of them.

**Dominating Set** Are there fifty people in Frenemy such that everyone else is friends with at least one of them? NP-complete.

**Partition into Triangles** The dorm rooms of the Frenemy Institute can each hold three students.
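Fortnow's Dominating Set question is easy to state in code. A minimal brute-force sketch (my own illustration, not from the book; the tiny `friends` graph is hypothetical) shows both how easy it is to check a candidate set and how the exhaustive search blows up with the number of people:

```python
from itertools import combinations

def is_dominating_set(friends, candidate):
    """True if every person is in `candidate` or has a friend in it."""
    candidate = set(candidate)
    return all(p in candidate or friends[p] & candidate for p in friends)

def smallest_dominating_set(friends):
    """Brute force: try every subset size in turn. Exponential in the
    number of people, which is what NP-completeness leads us to expect."""
    people = list(friends)
    for k in range(1, len(people) + 1):
        for subset in combinations(people, k):
            if is_dominating_set(friends, subset):
                return set(subset)

# A tiny made-up friendship graph (adjacency sets, symmetric).
friends = {
    'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a', 'd'}, 'd': {'c', 'e'}, 'e': {'d'},
}
print(smallest_dominating_set(friends))  # {'a', 'd'}
```

Checking a proposed set is fast; it is only finding the smallest one that forces the combinatorial search.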

…

Now a computer search will start to take significant time, and 100 × 100 games can defeat today’s fastest machines. Large Sudoku games are NP-complete. Think you are good at Sudoku? If you can reliably solve large puzzles, then you can solve satisfiability, traveling salesman problems, and thousands of other NP-complete problems. Sudoku is hardly the only NP-complete solitaire game. Consider the Minesweeper game built into Microsoft Windows. Figure 4-5. Minesweeper. Each number in a square tells how many mines are within one square (horizontally, vertically, or diagonally) from that square. Click on a square and you will reveal that number. Set a flag if you think there are bombs there. Click on a bomb and you lose the game. Determining whether one can win a large Minesweeper game is also NP-complete. Here are the remaining bombs in the above game. Figure 4-6.
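The Minesweeper decision problem the passage describes (is there any placement of mines consistent with the revealed numbers?) can be brute-forced on small boards. A hedged sketch with a simplified, hypothetical clue encoding of my own, not from the book:

```python
from itertools import product

def consistent(clues, unknown):
    """clues maps a revealed cell to (its number, the set of unrevealed
    neighbouring cells). Try every mine/no-mine assignment to the unknown
    cells: exponential search, which is why large boards defeat it."""
    unknown = list(unknown)
    for bits in product([False, True], repeat=len(unknown)):
        mines = {c for c, b in zip(unknown, bits) if b}
        if all(len(nbrs & mines) == n for n, nbrs in clues.values()):
            return True
    return False

# Two unrevealed cells 'x' and 'y'; clue 'a' touches both, clue 'b' only 'y'.
print(consistent({'a': (1, {'x', 'y'}), 'b': (1, {'y'})}, {'x', 'y'}))  # True
```

Here the only consistent assignment puts the mine on 'y'; a clue demanding two mines in a single cell would make the position unwinnable.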

…

Knuth got many write-in suggestions, some straightforward, like “intractable” and “obstinate,” and others a bit more tongue-in-cheek, like “hard-boiled” (in honor of Cook) and “hard-ass” (“hard as satisfiability”). The winning write-in vote was for “NP-complete,” which was proposed by several people at Bell Labs in New Jersey after considerable discussion among the researchers there. The word “complete” comes from mathematical logic, where a set of facts is complete if it can explain all the true statements in some logical system. Analogously, “NP-complete” means those problems in NP powerful enough that they can be used to solve any other problem in NP. Knuth was not entirely happy with this choice but was willing to live with it. He truly wanted a single English word that captured the intuitive meaning of hard search problems, a term for the masses. In a 1974 wrap-up of his survey, Knuth wrote “NP-complete actually smacks of being a little too technical for a mass audience, but it’s not so bad as to be unusable.”

**Algorithms Unlocked** by Thomas H. Cormen


bioinformatics, Donald Knuth, knapsack problem, NP-complete, optical character recognition, Silicon Valley, sorting algorithm, traveling salesman

—is the question that has perplexed computer scientists for all these years. We often call it the “P = NP problem.” The NP-complete problems are the “hardest” in NP. Informally, a problem is NP-complete if it satisfies two conditions: (1) it’s in NP and (2) if a polynomial-time algorithm exists for the problem, then there is a way to convert every problem in NP into this problem in such a way as to solve them all in polynomial time. If a polynomial-time algorithm exists for any NP-complete problem—that is, if any NP-complete problem is in P—then P = NP. Because NP-complete problems are the hardest in NP, if it turns out that any problem in NP is not polynomial-time solvable, then none of the NP-complete problems are. A problem is NP-hard if it satisfies the second condition for NP-completeness but may or may not be in NP. Here’s a handy list of the pertinent definitions: P: problems solvable in polynomial time, i.e., we can solve the problem in time polynomial in the size of the input to the problem.
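Cormen's definition of NP hinges on solutions being checkable in polynomial time. As an illustration (my own sketch, not the book's), verifying a claimed solution to subset sum takes only a linear pass, even though no polynomial-time algorithm is known for finding one:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time check of a claimed solution: this is exactly what
    membership in NP means. `certificate` lists indices into `numbers`."""
    return (len(set(certificate)) == len(certificate)        # no index reused
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # 4 + 5 == 9 -> True
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # 3 + 34 != 9 -> False
```

The asymmetry between checking (cheap) and searching (apparently expensive) is the whole P versus NP question in miniature.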

…

To take the frustration to a whole new level, here’s the most amazing fact: if there were an algorithm that ran in O(n^c) time for any of these problems, where c is a constant, then there would be an algorithm that ran in O(n^c) time for all of these problems. We call these problems NP-complete. An algorithm that runs in time O(n^c) on an input of size n, where c is a constant, is a polynomial-time algorithm, so called because n^c with some coefficient would be the most significant term in the running time. We know of no polynomial-time algorithm for any NP-complete problem, but nobody has proven that it’s impossible to solve some NP-complete problem in polynomial time. And there’s even more frustration: many NP-complete problems are almost the same as problems that we know how to solve in polynomial time. Just a small tweak separates them. For example, recall from Chapter 6 that the Bellman-Ford algorithm finds shortest paths from a single source in a directed graph, even if the graph has negative-weight edges, in Θ(nm) time, where the graph has n vertices and m edges.
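The polynomial-time side of that contrast is easy to write down. A compact sketch of Bellman-Ford (the standard algorithm; the code is mine, not Cormen's):

```python
def bellman_ford(n, edges, source):
    """Shortest paths from `source`, negative weights allowed: relax every
    edge n-1 times, for Theta(nm) work -- comfortably polynomial."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Edges as (from, to, weight); note the negative-weight edge (2, 1, -3).
print(bellman_ford(4, [(0, 1, 4), (0, 2, 2), (2, 1, -3), (1, 3, 1)], 0))
# [0, -1, 2, 0]
```

The "small tweak" the text mentions is real: asking for the *longest* simple path instead of the shortest turns this polynomial problem into an NP-complete one.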

…

But if we ask whether a connected, undirected graph has a hamiltonian cycle, that’s NP-complete. Notice that the question is not “what is the order of vertices on a hamiltonian cycle in this graph?” but just the more basic “yes or no: is it possible to construct a hamiltonian cycle on this graph?” NP-complete problems come up surprisingly often, which is why I’m including material on them in this book. If you are trying to find a polynomial-time algorithm for a problem that turns out to be NP-complete, you are likely to be in for a big helping of disappointment. (But see the section on perspective, pages 208–211.) The concept of NP-complete problems has been around since the early 1970s, and people were trying to solve problems that turned out to be NP-complete (such as the traveling-salesman problem) well before then.
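The yes/no question can always be answered by exhaustive search, which is exactly what nobody knows how to avoid in general. A sketch of mine (not the book's code) that tries every vertex ordering:

```python
from itertools import permutations

def has_hamiltonian_cycle(n, edges):
    """Decision version: try every ordering of the vertices.
    Factorial-time search; no known algorithm is polynomial for all graphs."""
    adj = {frozenset(e) for e in edges}
    for perm in permutations(range(1, n)):      # pin vertex 0 to skip rotations
        tour = (0,) + perm
        if all(frozenset((tour[i], tour[(i + 1) % n])) in adj for i in range(n)):
            return True
    return False

print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3)]))          # False
```

Contrast this with checking a *given* tour, which takes only one pass around the cycle.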

**The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World** by Pedro Domingos


3D printing, Albert Einstein, Amazon Mechanical Turk, Arthur Eddington, basic income, Bayesian statistics, Benoit Mandelbrot, bioinformatics, Black Swan, Brownian motion, cellular automata, Claude Shannon: information theory, combinatorial explosion, computer vision, constrained optimization, correlation does not imply causation, creative destruction, crowdsourcing, Danny Hillis, data is the new oil, double helix, Douglas Hofstadter, Erik Brynjolfsson, experimental subject, Filter Bubble, future of work, global village, Google Glasses, Gödel, Escher, Bach, information retrieval, job automation, John Markoff, John Snow's cholera map, John von Neumann, Joseph Schumpeter, Kevin Kelly, lone genius, mandelbrot fractal, Mark Zuckerberg, Moneyball by Michael Lewis explains big data, Narrative Science, Nate Silver, natural language processing, Netflix Prize, Network effects, NP-complete, off grid, P = NP, PageRank, pattern recognition, phenotype, planetary scale, pre–internet, random walk, Ray Kurzweil, recommendation engine, Richard Feynman, Second Machine Age, self-driving car, Silicon Valley, speech recognition, statistical model, Stephen Hawking, Steven Levy, Steven Pinker, superintelligent machines, the scientific method, The Signal and the Noise by Nate Silver, theory of mind, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, white flight, zero-sum game

Because of NP-completeness, all it takes to answer it is to prove that one NP-complete problem is efficiently solvable (or not). NP is not the hardest class of problems in computer science, but it’s arguably the hardest “realistic” class: if you can’t even check a problem’s solution before the universe ends, what’s the point of trying to solve it? Humans are good at solving NP problems approximately, and conversely, problems that we find interesting (like Tetris) often have an “NP-ness” about them. One definition of artificial intelligence is that it consists of finding heuristic solutions to NP-complete problems. Often, we do this by reducing them to satisfiability, the canonical NP-complete problem: Can a given logical formula ever be true, or is it self-contradictory? If we invent a learner that can learn to solve satisfiability, it has a good claim to being the Master Algorithm.
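Satisfiability, the canonical problem Domingos names, is easy to state in code; only the exponential search space hurts. A truth-table sketch of my own, with a made-up clause encoding (a literal `+i` means variable x_i, `-i` means its negation):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force CNF satisfiability: try all 2^n truth assignments."""
    for assignment in product([False, True], repeat=n_vars):
        def lit(l):
            value = assignment[abs(l) - 1]
            return value if l > 0 else not value
        if all(any(lit(l) for l in clause) for clause in clauses):
            return True
    return False

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(satisfiable([[1, -2], [2, 3], [-1, -3]], 3))  # True
print(satisfiable([[1], [-1]], 1))                  # False
```

Reducing another NP problem to satisfiability means translating its instances into clause lists like these, so a fast SAT solver would make every problem in NP fast.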

…

Figuring out how proteins fold into their characteristic shapes; reconstructing the evolutionary history of a set of species from their DNA; proving theorems in propositional logic; detecting arbitrage opportunities in markets with transaction costs; inferring a three-dimensional shape from two-dimensional views; compressing data on a disk; forming a stable coalition in politics; modeling turbulence in sheared flows; finding the safest portfolio of investments with a given return, the shortest route to visit a set of cities, the best layout of components on a microchip, the best placement of sensors in an ecosystem, or the lowest energy state of a spin glass; scheduling flights, classes, and factory jobs; optimizing resource allocation, urban traffic flow, social welfare, and (most important) your Tetris score: these are all NP-complete problems, meaning that if you can efficiently solve one of them you can efficiently solve all problems in the class NP, including each other. Who would have guessed that all these problems, superficially so different, are really the same? But if they are, it makes sense that one algorithm could learn to solve all of them (or, more precisely, all efficiently solvable instances). P and NP are the two most important classes of problems in computer science. (The names are not very mnemonic, unfortunately.) A problem is in P if we can solve it efficiently, and it’s in NP if we can efficiently check its solution. The famous P = NP question is whether every efficiently checkable problem is also efficiently solvable. Because of NP-completeness, all it takes to answer it is to prove that one NP-complete problem is efficiently solvable (or not).

…

If there’s a limit to what Bayes can learn, we haven’t found it yet. The argument from computer science When I was a senior in college, I wasted a summer playing Tetris, a highly addictive video game where variously shaped pieces fall from above and which you try to pack as closely together as you can; the game is over when the pile of pieces reaches the top of the screen. Little did I know that this was my introduction to NP-completeness, the most important problem in theoretical computer science. Turns out that, far from an idle pursuit, mastering Tetris—really mastering it—is one of the most useful things you could ever do. If you can solve Tetris, you can solve thousands of the hardest and most important problems in science, technology, and management—all in one fell swoop. That’s because at heart they are all the same problem.

**Algorithms to Live By: The Computer Science of Human Decisions** by Brian Christian, Tom Griffiths


4chan, Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, algorithmic trading, anthropic principle, asset allocation, autonomous vehicles, Bayesian statistics, Berlin Wall, Bill Duvall, bitcoin, Community Supported Agriculture, complexity theory, constrained optimization, cosmological principle, cryptocurrency, Danny Hillis, David Heinemeier Hansson, delayed gratification, dematerialisation, diversification, Donald Knuth, double helix, Elon Musk, fault tolerance, Fellow of the Royal Society, Firefox, first-price auction, Flash crash, Frederick Winslow Taylor, George Akerlof, global supply chain, Google Chrome, Henri Poincaré, information retrieval, Internet Archive, Jeff Bezos, John Nash: game theory, John von Neumann, knapsack problem, Lao Tzu, Leonard Kleinrock, linear programming, martingale, Nash equilibrium, natural language processing, NP-complete, P = NP, packet switching, Pierre-Simon Laplace, prediction markets, race to the bottom, RAND corporation, RFC: Request For Comment, Robert X Cringely, sealed-bid auction, second-price auction, self-driving car, Silicon Valley, Skype, sorting algorithm, spectrum auction, Steve Jobs, stochastic process, Thomas Bayes, Thomas Malthus, traveling salesman, Turing machine, urban planning, Vickrey auction, Vilfredo Pareto, Walter Mischel, Y Combinator, zero-sum game

The intractable scheduling problems that Eugene Lawler encountered in chapter 5 fall into this category. An NP-hard problem that is itself in NP is known as “NP-complete.” See Karp, “Reducibility Among Combinatorial Problems,” for the classic result showing that a version of the traveling salesman problem is NP-complete, and Fortnow, The Golden Ticket: P, NP, and the Search for the Impossible, for an accessible introduction to P and NP. most computer scientists believe that there aren’t any: In a 2002 survey of one hundred leading theoretical computer scientists, sixty-one thought P ≠ NP and only nine thought P = NP (Gasarch, “The P =? NP Poll”). While proving P = NP could be done by exhibiting a polynomial-time algorithm for an NP-complete problem, proving P ≠ NP requires making complex arguments about the limits of polynomial-time algorithms, and there wasn’t much agreement among the people surveyed about exactly what kind of mathematics will be needed to solve this problem.

…

What’s more, many other optimization problems: This includes versions of vertex cover and set cover—two problems identified as belonging to NP in Karp, “Reducibility Among Combinatorial Problems,” where twenty-one problems were famously shown to be in this set. By the end of the 1970s, computer scientists had identified some three hundred NP-complete problems (Garey and Johnson, Computers and Intractability), and the list has grown significantly since then. These include some problems that are very familiar to humans. In 2003, Sudoku was shown to be NP-complete (Yato and Seta, “Complexity and Completeness”), as was maximizing the number of cleared rows in Tetris, even with perfect knowledge of future pieces (Demaine, Hohenberger, and Liben-Nowell, “Tetris Is Hard, Even to Approximate”). In 2012, determining whether there exists a path to the end of the level in platformer games like Super Mario Brothers was officially added to the list (Aloupis, Demaine, and Guo, “Classic Nintendo Games are (NP-) Hard”).

…

(With n = 3, this involves figuring out the path a knife would have to travel to cut three sets of points in half; if those sets of points correspond to two pieces of bread and a piece of ham, the result is a perfectly bisected sandwich.) Finding Nash equilibria is actually PPAD-complete, meaning that if there were an efficient algorithm for solving it then all other problems in the class could also be solved efficiently (including making the world’s neatest sandwiches). But being PPAD-complete is not quite so bad as being NP-complete. P, the class of efficiently solvable problems, could be equal to PPAD without being equal to NP. As of this writing the jury is still out: it’s theoretically possible that somebody could devise an efficient algorithm for finding Nash equilibria, but most experts aren’t holding their breath. “much of its credibility as a prediction”: Christos Papadimitriou, “The Complexity of Finding Nash Equilibria,” in Nisan et al., Algorithmic Game Theory.
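For intuition about what "finding Nash equilibria" asks, here is a brute-force check over pure strategies in a two-player bimatrix game (my own sketch; the PPAD-hard case involves mixed strategies, which this does not attempt):

```python
def pure_nash_equilibria(A, B):
    """Enumerate cells where neither player gains by deviating unilaterally.
    A[i][j] is the row player's payoff, B[i][j] the column player's."""
    rows, cols = len(A), len(A[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            row_best = all(A[i][j] >= A[k][j] for k in range(rows))
            col_best = all(B[i][j] >= B[i][k] for k in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma payoffs; strategy 1 = defect for both players.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(pure_nash_equilibria(A, B))  # [(1, 1)] -- mutual defection
```

Enumerating pure-strategy cells is cheap; it is guaranteeing an equilibrium in mixed strategies (which always exists, by Nash's theorem) that is PPAD-complete to compute.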

**21 Recipes for Mining Twitter** by Matthew A. Russell


en.wikipedia.org, Google Earth, natural language processing, NP-complete, social web, web application

Example 1-29 illustrates an approach for taking a NetworkX graph and discovering the cliques in it.

Example 1-29. Analyzing friendship cliques (see http://github.com/ptwobrussell/Recipes-for-Mining-Twitter/blob/master/recipe__clique_analysis.py)

```python
# -*- coding: utf-8 -*-

import sys
import json
import networkx as nx

G = sys.argv[1]

g = nx.read_gpickle(G)

# Finding cliques is a hard problem, so this could take a while for large graphs.
# See http://en.wikipedia.org/wiki/NP-complete and
# http://en.wikipedia.org/wiki/Clique_problem

cliques = [c for c in nx.find_cliques(g)]

num_cliques = len(cliques)

clique_sizes = [len(c) for c in cliques]
max_clique_size = max(clique_sizes)
avg_clique_size = sum(clique_sizes) / num_cliques

max_cliques = [c for c in cliques if len(c) == max_clique_size]

num_max_cliques = len(max_cliques)

max_clique_sets = [set(c) for c in max_cliques]
people_in_every_max_clique = list(reduce(lambda x, y: x.intersection(y),
                                         max_clique_sets))

print 'Num cliques:', num_cliques
print 'Avg clique size:', avg_clique_size
print 'Max clique size:', max_clique_size
print 'Num max cliques:', num_max_cliques
print
print 'People in all max cliques:'
print json.dumps(people_in_every_max_clique, indent=4)
print
print 'Max cliques:'
print json.dumps(max_cliques, indent=4)
```

For purposes of illustration, Mining the Social Web (O’Reilly) included an analysis conducted in mid-2010 that determined the following statistics for Tim O’Reilly’s ~700 friendships:

```
Num cliques: 762573
Avg clique size: 14
Max clique size: 26
Num max cliques: 6
Num people in every max clique: 20
```

Some of the more interesting insight from the analysis was that there are six different cliques of size 26 in Tim O’Reilly’s friendships, which means that those six variations of 26 people all “know” one another to the point that they were at least interested in receiving each other’s status updates in their tweet stream.

…

As an example of practical application, if you wanted to get Tim’s attention at a conference or Maker event, but can’t seem to track him down, you might start looking for one of these other people, since there’s more than a reasonable likelihood that they’re closely connected with him. As a more technical aside, be advised that finding cliques is a hard problem, not just in a figurative sense but also in the computational sense: it belongs to a class of problems known as NP-Complete. In layman’s terms, being NP-Complete means that the combinatorics of the problem, whether it be finding cliques or otherwise, grow explosively as the size of the input for the problem grows. Practically speaking, this means that for a very large graph, you either need to wait a very long time to get an exact answer to the problem of finding cliques, or you need to be willing to settle for an approximate solution.
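One way to "settle for an approximate solution," as the passage suggests, is a greedy heuristic. A hypothetical sketch (not from the book) that grows a clique from a seed node over a plain adjacency-dict graph:

```python
def greedy_clique(adj, seed):
    """Grow a clique from `seed` by repeatedly adding the best-connected
    node still compatible with everything chosen so far.
    Fast, but only a heuristic: it can miss the true maximum clique."""
    clique = {seed}
    candidates = set(adj[seed])
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))
        clique.add(v)
        candidates &= adj[v]    # keep only nodes adjacent to the whole clique
    return clique

# Hypothetical friendship graph: a 4-clique plus one extra acquaintance.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(greedy_clique(adj, 0))  # {0, 1, 2, 3}
```

Each step shrinks the candidate set by intersection, so the loop is fast even on large graphs; the price is that a bad early choice can lock the heuristic out of the largest clique.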

**Data Structures & Algorithms Interview Questions You'll Most Likely Be Asked** by Vibrant Publishers


NP-complete, sorting algorithm, traveling salesman

Give examples of problems solved using greedy algorithms. Answer: A greedy algorithm is any algorithm that makes the locally optimal choice at each stage with the hope of finding the global optimum. A classical problem that can be approached with a greedy strategy is the traveling salesman problem, where the nearest-neighbor heuristic produces a quick, though not necessarily optimal, tour. Greedy heuristics are also used for the graph coloring problem and many other NP-complete problems, for which they yield approximate rather than guaranteed-optimal solutions.

164: Which are the pillars of a greedy algorithm? Answer: In general, greedy algorithms have five pillars:
a) a candidate set, from which a solution is created
b) a selection function, which chooses the best candidate to be added to the solution
c) a feasibility function, which is used to determine whether a candidate can contribute to a solution
d) an objective function, which assigns a value to a solution or a partial solution
e) a solution function, which indicates when we have discovered a complete solution

165: What is a backtracking algorithm?
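The five pillars are easiest to see in a concrete greedy algorithm. A sketch of my own using coin change, with each pillar labeled in the comments:

```python
def greedy_coin_change(amount, denominations):
    """Candidate set = denominations; objective = fewest coins."""
    solution = []
    for coin in sorted(denominations, reverse=True):  # selection: largest first
        while coin <= amount:                         # feasibility: coin fits
            solution.append(coin)
            amount -= coin
    return solution if amount == 0 else None          # solution test: nothing left

print(greedy_coin_change(63, [1, 5, 10, 25]))  # [25, 25, 10, 1, 1, 1]
```

The "hope" in the definition is real: greedy change-making is optimal for denominations like {1, 5, 10, 25}, but for {1, 3, 4} and amount 6 it returns three coins (4+1+1) when two (3+3) suffice, which is why greedy methods give only heuristic answers for hard problems.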

**Data Mining in Time Series Databases** by Mark Last, Abraham Kandel, Horst Bunke


4chan, call centre, computer vision, discrete time, information retrieval, iterative process, NP-complete, p-value, pattern recognition, random walk, sensor fusion, speech recognition, web application

Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology, Cambridge University Press.

Median Strings: A Review

14. Guyon, Schomaker, L., Plamondon, R., Liberman, M. and Janet, S. (1994). UNIPEN Project on On-Line Exchange and Recognizer Benchmarks. Proc. of 12th Int. Conf. on Pattern Recognition, pp. 29–33.
15. de la Higuera, C. and Casacuberta, F. (2000). Topology of Strings: Median String is NP-Complete. Theoretical Computer Science, 230(1/2), 39–48.
16. Jiang, X., Schiffmann, L., and Bunke, H. (2000). Computation of Median Shapes. Proc. of 4th Asian Conf. on Computer Vision, pp. 300–305, Taipei.
17. Jiang, X., Abegglen, K., and Bunke, H. (2001). Genetic Computation of Generalized Median Strings. (Submitted for publication.)
18. Jiang, X., Abegglen, K., Bunke, H., and Csirik, J. (2002). Dynamic Computation of Generalized Median Strings. Pattern Analysis and Applications.

…

Sanchez, G., Llados, J., and Tombre, K. (2002). A Mean String Algorithm to Compute the Average Among a Set of 2D Shapes. Pattern Recognition Letters, 23(1–3), 203–213.
34. Sankoff, D. and Kruskal, J.B. (1983). (eds.). Time Warps, String Edits, and Macromolecules: The Theory and Practice of String Comparison, Addison-Wesley.
35. Sim, J.S. and Park, K. (2001). The Consensus String Problem for a Metric is NP-Complete. Journal of Discrete Algorithms, 2(1).
36. Stephen, G.A. (1994). String Searching Algorithms, World Scientific.
37. Subramanyan, K. and Dean, D. (2000). A Procedure to Average 3D Anatomical Structures. Medical Image Analysis, 4(4), 317–334.
38. Wagner, R.A. and Fischer, M.J. (1974). The String-to-String Correction Problem. Journal of the ACM, 21(1), 168–173.
39. Wang, L. and Jiang, Y. (1994). On the Complexity of Multiple Sequence Alignment.

**Toast** by Charles Stross


anthropic principle, Buckminster Fuller, cosmological principle, dark matter, double helix, Ernest Rutherford, Extropian, Francis Fukuyama: the end of history, glass ceiling, gravity well, Khyber Pass, Mars Rover, Mikhail Gorbachev, NP-complete, oil shale / tar sands, peak oil, performance metric, phenotype, plutocrats, Ronald Reagan, Silicon Valley, slashdot, speech recognition, strong AI, traveling salesman, Turing test, urban renewal, Vernor Vinge, Whole Earth Review, Y2K

I was elbow-deep in an eviscerated PC, performing open heart surgery on a diseased network card, when the news about the traveling salesman theorem came in. Over on the other side of the office John’s terminal beeped, notification of incoming mail. A moment later my own workstation bonged. “Hey, Geoff! Get a load of this!” I carried on screwing the card back into its chassis. John is not a priority interrupt. “Someone’s come up with a proof that NP-complete problems lie in P! There’s a posting in comp.risks saying they’ve used it to find an O*(n²) solution to the traveling salesman problem, and it scales! Looks like April First has come early this year, doesn’t it?” I dropped the PC’s lid on the floor hastily and sat down at my workstation. Another cubed-sphere hypothesis, another flame war in the math newsgroups—or something more serious? “When did it arrive?”

…

Burgundy has gained a small reputation as an information buffer—a side-effect of its monarch’s paranoid attitude towards encroachment by other deities. It exports confidentiality, delay-line buffers, fine wines, and dissidents. In other words, it’s the last place a futures trader specializing in spread option trades that leverage developments in the artificial intelligence market—someone like me—would normally be seen dead in. On the other hand . . . As an accountant specializing in Monte Carlo solutions to NP-complete low-knowledge systems I am naturally of interest to the Queen, who made me an offer I would have considered long and hard at the best of times; in my circumstances back on Dordogne—exposed to risk, to say the least—I could hardly consider refusing her invitation. Many technologies are banned from the system, in an attempt to reduce the vulnerability of the monarchy to algorithmic complexity attacks.

**Write Great Code, Volume 2** by Randall Hyde


complexity theory, Donald Knuth, locality of reference, NP-complete, premature optimization

For this reason, the optimization process is usually a case of compromise management, where you make tradeoffs and sacrifice certain subgoals (for example, running certain sections of the code a little slower) in order to create a reasonable result (for example, creating a program that doesn’t consume too much memory). 5.4.4.2 Optimization’s Effect on Compile Time You might think that it’s possible to choose a single goal (for example, highest possible performance) and optimize strictly for that. However, the compiler must also be capable of producing an executable result in a reasonable amount of time. The optimization process is an example of what complexity theory calls an NP-complete problem. These are problems that are, as far as we know, intractable. That is, a guaranteed correct result cannot be produced (for example, an optimal version of a program) without computing all possibilities and choosing the best result from those possibilities. Unfortunately, the time generally required to solve an NP-complete problem increases exponentially with the size of the input, which in the case of compiler optimization means roughly the number of lines of source code. This means that in the worst case, producing a truly optimal program would take longer than it was worth.

…

From the book’s index: NP-complete problems, 5.4.4 Optimization.

…

From the book’s index: Time required to optimize a program (NP-Completeness), 5.4.4 Optimization.

**
Mining the Social Web: Finding Needles in the Social Haystack
** by
Matthew A. Russell

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

Climategate, cloud computing, crowdsourcing, en.wikipedia.org, fault tolerance, Firefox, full text search, Georg Cantor, Google Earth, information retrieval, Mark Zuckerberg, natural language processing, NP-complete, profit motive, Saturday Night Live, semantic web, Silicon Valley, slashdot, social graph, social web, statistical model, Steve Jobs, supply-chain management, text mining, traveling salesman, Turing test, web application

A maximal clique, on the other hand, is one that is not a subgraph of another clique. The maximum clique is also a maximal clique in that it isn’t a subgraph of any other clique. However, various other maximal cliques often exist in graphs and need not necessarily share any nodes with the maximum clique. Figure 4-2, for example, illustrates a maximum clique of size 4, but there are several other maximal cliques of size 3 in the graph as well. Finding cliques is an NP-complete problem (implying an exponential runtime), but NetworkX does provide the find_cliques method, which delivers a solid implementation that takes care of the heavy lifting. Just be advised that it might take a long time to run as graphs get beyond a reasonably small size. Figure 4-2. An example graph containing a maximum clique of size 4 Example 4-11 could be modified in any number of ways to compute interesting things, so long as you’ve first harvested the necessary friendship information.
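The difference between maximal and maximum cliques is easy to see with a brute-force sketch (pure Python, no NetworkX needed; the toy six-node graph below is invented for illustration and is not the book’s Figure 4-2):

```python
from itertools import combinations

# Toy undirected graph: nodes 0-3 form a clique of size 4 (the maximum
# clique); nodes 3-4-5 form a separate maximal clique of size 3.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)}
nodes = sorted({n for e in edges for n in e})

def is_clique(subset):
    # Every pair of nodes in the subset must be joined by an edge.
    return all((a, b) in edges or (b, a) in edges for a, b in combinations(subset, 2))

# Exponential scan over all 2^n subsets -- fine for 6 nodes, hopeless at scale,
# which is exactly why clique finding is the NP-complete problem mentioned above.
cliques = [frozenset(s)
           for r in range(1, len(nodes) + 1)
           for s in combinations(nodes, r)
           if is_clique(s)]

# A clique is maximal if no other clique strictly contains it.
maximal = [c for c in cliques if not any(c < d for d in cliques)]
maximum = max(cliques, key=len)

print(sorted(map(sorted, maximal)))  # [[0, 1, 2, 3], [3, 4, 5]]
print(sorted(maximum))               # [0, 1, 2, 3]
```

Note that the size-3 maximal clique {3, 4, 5} shares node 3 with the maximum clique here, but in general maximal cliques need not overlap the maximum one at all.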

…

Assuming you’ve first constructed a graph as demonstrated in the previous section, Example 4-11 shows you how to use some of the NetworkX APIs for finding and analyzing cliques. Example 4-11. Using NetworkX to find cliques in graphs (friends_followers__clique_analysis.py)

# -*- coding: utf-8 -*-

import sys
import json
import networkx as nx

G = sys.argv[1]
g = nx.read_gpickle(G)

# Finding cliques is a hard problem, so this could
# take awhile for large graphs.
# See http://en.wikipedia.org/wiki/NP-complete and
# http://en.wikipedia.org/wiki/Clique_problem

cliques = [c for c in nx.find_cliques(g)]
num_cliques = len(cliques)

clique_sizes = [len(c) for c in cliques]
max_clique_size = max(clique_sizes)
avg_clique_size = sum(clique_sizes) / num_cliques

max_cliques = [c for c in cliques if len(c) == max_clique_size]
num_max_cliques = len(max_cliques)

max_clique_sets = [set(c) for c in max_cliques]
people_in_every_max_clique = list(reduce(lambda x, y: x.intersection(y), max_clique_sets))

print 'Num cliques:', num_cliques
print 'Avg clique size:', avg_clique_size
print 'Max clique size:', max_clique_size
print 'Num max cliques:', num_max_cliques
print
print 'People in all max cliques:'
print json.dumps(people_in_every_max_clique, indent=4)
print
print 'Max cliques:'
print json.dumps(max_cliques, indent=4)

Sample output from the script follows for Tim O’Reilly’s 600+ friendships and reveals some interesting insights.

**
In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation
** by
William J. Cook

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

complexity theory, computer age, Computer Numeric Control, four colour theorem, index card, John von Neumann, linear programming, NP-complete, p-value, RAND corporation, Richard Feynman, Richard Feynman, traveling salesman, Turing machine

Lenstra et al., eds. History of Mathematical Programming—A Collection of Personal Reminiscences. North-Holland. 32–54. [11] Flood, M. 1954. Operations research and logistics. Proceedings of the First Ordnance Conference on Operations Research. Office of Ordnance Research, Durham, North Carolina. 3–32. [12] Garey, M. R., D. S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, California. [13] Gomory, R. E. 1966. The traveling salesman problem. Proceedings of the IBM Scientific Computing Symposium on Combinatorial Problems. IBM, White Plains, New York. 93–121. [14] Grötschel, M., O. Holland. 1991. Solution of large-scale symmetric travelling salesman problems. Mathematical Programming 51, 141–202. [15] Held, M., R. M. Karp. 1962.

**
The Art of Computer Programming
** by
Donald Ervin Knuth

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

Brownian motion, complexity theory, correlation coefficient, Donald Knuth, Eratosthenes, Georg Cantor, information retrieval, Isaac Newton, iterative process, John von Neumann, Louis Pasteur, mandelbrot fractal, Menlo Park, NP-complete, P = NP, Paul Erdős, probability theory / Blaise Pascal / Pierre de Fermat, RAND corporation, random walk, sorting algorithm, Turing machine, Y2K

…, b_s are any addition chains, then l(∑_{i,j} d_{ij} a_i b_j) ≤ r + s + ∑_{i,j} d_{ij} − 1 for any (r + 1) × (s + 1) matrix of nonnegative integers d_{ij}. 41. [SICOMP 10 (1981), 638–646.] The stated formula can be proved whenever A > 9m². Since this is a polynomial in m, and since the problem of finding a minimum vertex cover is NP-hard (see Section 7.9), the problem of computing l(n_1, …, n_m) is NP-complete. [It is unknown whether or not the problem of computing l(n) is NP-complete. But it seems plausible that an optimum chain for, say, ∑_k n_k 2^{Ak²} would entail an optimum chain for {n_1, …, n_m}, when A is sufficiently large.] SECTION 4.6.4 1. Set y ← x², then compute ((…(u_{2n+1}y + u_{2n−1})y + ⋯)y + u_1)x. 2. Replacing x in (2) by the polynomial x + x_0 leads to the following procedure: G1. Do step G2 for k = n, n − 1, …, 0 (in this order), and stop.
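The answer to exercise 1 just above is ordinary Horner’s rule run in y = x²; a small sketch of the idea (the function name and sample coefficients are ours, for illustration, not Knuth’s):

```python
def eval_odd_poly(u, x):
    # Evaluate u[0]*x + u[1]*x**3 + ... + u[n-1]*x**(2n-1):
    # set y = x*x, apply Horner's rule in y, then multiply by x once at the end.
    # This halves the number of multiplications versus naive evaluation in x.
    y = x * x
    acc = 0
    for coeff in reversed(u):  # highest-order coefficient first
        acc = acc * y + coeff
    return acc * x

# p(x) = x**5 + 2*x, i.e. u_1 = 2, u_3 = 0, u_5 = 1
print(eval_odd_poly([2, 0, 1], 3))  # 249  (= 243 + 6)
```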

…

He has extended the results to multivariate polynomials in SICOMP 9 (1980), 225-229. The tensor rank of an arbitrary m × n × 2 tensor in a suitably large field has been determined by Joseph Ja'Ja', SICOMP 8 (1979), 443-462; JACM 27 (1980), 822-830. See also his interesting discussion of commutative bilinear forms in SICOMP 9 (1980), 713-728. However, the problem of computing the tensor rank of an arbitrary n × n × n tensor over any finite field is NP-complete [J. Håstad, Journal of Algorithms 11 (1990), 644-654]. For further reading. In this section we have barely scratched the surface of a very large subject in which many beautiful theories are emerging. Considerably more comprehensive treatments can be found in the books Computational Complexity of Algebraic and Numeric Problems by A. Borodin and I. Munro (New York: American Elsevier, 1975); Polynomial and Matrix Computations 1 by D.

…

Normal distribution, 56, 122, 384, 565. tail of, 139. variations, 132, 139. Normal evaluation schemes, 506, 709-710. Normal numbers, 177. Normalization of divisors, 272-273, 282-283. Normalization of floating point numbers, 215-217, 227-228, 238, 248-249, 254, 616. Normand, Jean-Marie, 29. Norton, Graham Hilton, 372, 673. Norton, Karl Kenneth, 383. Norton, Victor Thane, Jr., 607. Notations, index to, 730-734. Nozaki, Akihiro, 524. NP-complete problems, 499, 514, 585, 698. Null space of a matrix, 443-444, 456, 659-660, 681. Number field sieve, 403. Number fields, 331, 333, 345, 403, 674. Number sentences, 605. Number system: A language for representing numbers. balanced binary, 213. balanced decimal, 211. balanced mixed-radix, 103, 293, 631. balanced ternary, 207-208, 209, 227, 283, 353. binary (radix 2), 195, 198-206, 209-213, 419, 461, 483. combinatorial, 209. complex, 205-206, 209-210, 292. decimal (= denary, radix ten), 197-199, 210, 320-326, 374. duodecimal (radix twelve), 199-200. factorial, 66, 209.

**
The Debian Administrator's Handbook, Debian Wheezy From Discovery to Mastery
** by
Raphaël Hertzog,
Roland Mas

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

bash_history, Debian, distributed generation, en.wikipedia.org, failed state, Firefox, GnuPG, Google Chrome, Jono Bacon, NP-complete, QWERTY keyboard, RFC: Request For Comment, Richard Stallman, Skype, SpamAssassin, Valgrind, web application, x509 certificate, zero day, Zimmermann PGP

With packages all depending upon each other, it is sometimes necessary to migrate a large number of packages simultaneously, which is impossible when some of them are updated regularly. On the other hand, the script identifying the families of related packages works hard to create them (this would be an NP-complete problem, for which, fortunately, we know some good heuristics). This is why we can manually interact with and guide this script by suggesting groups of packages, or imposing the inclusion of certain packages in a group, even if this temporarily breaks some dependencies. This functionality is accessible to the Release Managers and their assistants. Recall that an NP-complete problem is one for which no known algorithm does substantially better than exponential time in the size of the input (here, the number of packages and the dependencies between them). Frequently, the only way to solve such a problem is to examine all possible configurations, which can require enormous computing resources.
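To make the "examine all possible configurations" point concrete, here is a toy sketch (hypothetical package names and a deliberately simplified dependency model, not the actual migration script's logic) that brute-forces which groups of packages could migrate together:

```python
from itertools import combinations

# Hypothetical, highly simplified model: each candidate package lists the
# other candidates it depends on (dependencies already satisfied elsewhere
# are omitted).
deps = {
    "libfoo": [],
    "foo-utils": ["libfoo"],
    "bar": ["libfoo"],
    "baz": ["foo-utils", "qux"],
    "qux": [],
}

def migratable(group):
    # In this model a group can migrate together only if it is
    # dependency-closed: every dependency of a member is also a member.
    return all(d in group for p in group for d in deps[p])

packages = sorted(deps)

# Exhaustive search over all 2**n subsets -- the brute-force approach the
# text alludes to; n packages means 2**n candidate configurations, which
# is why heuristics and manual hints are needed at Debian's scale.
groups = [set(c)
          for r in range(1, len(packages) + 1)
          for c in combinations(packages, r)
          if migratable(set(c))]

largest = max(groups, key=len)
print(len(groups), sorted(largest))  # 11 ['bar', 'baz', 'foo-utils', 'libfoo', 'qux']
```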

**
Atrocity Archives
** by
Stross, Charles

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

airport security, anthropic principle, Berlin Wall, brain emulation, British Empire, Buckminster Fuller, defense in depth, disintermediation, experimental subject, glass ceiling, haute cuisine, hypertext link, Khyber Pass, mandelbrot fractal, Menlo Park, NP-complete, the medium is the message, Y2K, yield curve

Anyway, the theorem has been rediscovered periodically ever since; it has also been suppressed efficiently, if a little bit less violently, because nobody wants it out in the open where Joe Random Cypherpunk can smear it across the Internet. The theorem is a hack on discrete number theory that simultaneously disproves the Church-Turing hypothesis (wave if you understood that) and worse, permits NP-complete problems to be converted into P-complete ones. This has several consequences, starting with screwing over most cryptography algorithms--translation: all your bank account are belong to us--and ending with the ability to computationally generate a Dho-Nha geometry curve in real time. This latter item is just slightly less dangerous than allowing nerds with laptops to wave a magic wand and turn them into hydrogen bombs at will.

**
Superintelligence: Paths, Dangers, Strategies
** by
Nick Bostrom

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, Donald Knuth, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, Menlo Park, meta analysis, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

is a televised game show with trivia questions about history, literature, sports, geography, pop culture, science, and other topics. Questions are presented in the form of clues, and often involve wordplay.

Poker: Varied. Computer poker players remain slightly below the best humans for full-ring Texas hold ’em but perform at a superhuman level in some poker variants.52

FreeCell: Superhuman. Heuristics evolved using genetic algorithms produce a solver for the solitaire game FreeCell (which in its generalized form is NP-complete) that is able to beat high-ranking human players.53

Go: Very strong amateur level. As of 2012, the Zen series of go-playing programs has reached rank 6 dan in fast games (the level of a very strong amateur player), using Monte Carlo tree search and machine learning techniques.54 Go-playing programs have been improving at a rate of about 1 dan/year in recent years. If this rate of improvement continues, they might beat the human world champion in about a decade.

pages: 1,737 words: 491,616

**
Rationality: From AI to Zombies
** by
Eliezer Yudkowsky

Albert Einstein, Alfred Russel Wallace, anthropic principle, anti-pattern, anti-work, Arthur Eddington, artificial general intelligence, availability heuristic, Bayesian statistics, Berlin Wall, Build a better mousetrap, Cass Sunstein, cellular automata, cognitive bias, cognitive dissonance, correlation does not imply causation, cosmological constant, creative destruction, Daniel Kahneman / Amos Tversky, dematerialisation, discovery of DNA, Douglas Hofstadter, Drosophila, effective altruism, experimental subject, Extropian, friendly AI, fundamental attribution error, Gödel, Escher, Bach, hindsight bias, index card, index fund, Isaac Newton, John Conway, John von Neumann, Long Term Capital Management, Louis Pasteur, mental accounting, meta analysis, meta-analysis, money market fund, Nash equilibrium, Necker cube, NP-complete, P = NP, pattern recognition, Paul Graham, Peter Thiel, Pierre-Simon Laplace, placebo effect, planetary scale, prediction markets, random walk, Ray Kurzweil, reversible computing, Richard Feynman, Richard Feynman, risk tolerance, Rubik’s Cube, Saturday Night Live, Schrödinger's Cat, scientific mainstream, sensible shoes, Silicon Valley, Silicon Valley startup, Singularitarianism, Solar eclipse in 1919, speech recognition, statistical model, Steven Pinker, strong AI, technological singularity, The Bell Curve by Richard Herrnstein and Charles Murray, the map is not the territory, the scientific method, Turing complete, Turing machine, ultimatum game, X Prize, Y Combinator, zero-sum game

If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ¬C or D) and (D or ¬A or ¬C)…), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete. So just because you don’t see a contradiction in the Zombie World at first glance, it doesn’t mean that no contradiction is there. It’s like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility (“I don’t see a problem”) to logical possibility, in the full technical sense, is a very great leap. It’s easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it’s logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.
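The quoted clause form can be tested by brute force in a few lines; a minimal sketch (the encoding of literals as (variable, polarity) pairs is ours, for illustration):

```python
from itertools import product

# The 3-SAT instance from the text:
# (A or B or C) and (B or not-C or D) and (D or not-A or not-C),
# each clause a disjunction of three literals.
clauses = [
    [("A", True), ("B", True), ("C", True)],
    [("B", True), ("C", False), ("D", True)],
    [("D", True), ("A", False), ("C", False)],
]
variables = sorted({v for clause in clauses for v, _ in clause})

def satisfies(assignment):
    # Every clause needs at least one literal whose polarity matches.
    return all(any(assignment[v] == want for v, want in clause) for clause in clauses)

# Brute force tries all 2**n truth assignments; this exponential blowup is
# precisely what makes 3-SAT the canonical NP-complete problem.
solutions = [dict(zip(variables, bits))
             for bits in product([False, True], repeat=len(variables))
             if satisfies(dict(zip(variables, bits)))]

print(len(solutions), solutions[0])
# 11 {'A': False, 'B': False, 'C': True, 'D': True}
```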

…

Chalmers, The Conscious Mind. Zombie Responses I’m a bit tired today, having stayed up until 3 a.m. writing yesterday’s >6,000-word essay on zombies, so today I’ll just reply to Richard, and tie up a loose end I spotted the next day. (A) Richard Chappell writes: A terminological note (to avoid unnecessary confusion): what you call “conceivable,” others of us would merely call “apparently conceivable.” The gap between “I don’t see a contradiction yet” and “this is logically possible” is so huge (it’s NP-complete even in some simple-seeming cases) that you really should have two different words. As the zombie argument is boosted to the extent that this huge gap can be swept under the rug of minor terminological differences, I really think it would be a good idea to say “conceivable” versus “logically possible” or maybe even have a still more visible distinction. I can’t choose professional terminology that has already been established, but in a case like this, I might seriously refuse to use it.

**
Masterminds of Programming: Conversations With the Creators of Major Programming Languages
** by
Federico Biancuzzi,
Shane Warden

Amazon: amazon.com — amazon.co.uk — amazon.de — amazon.fr

Benevolent Dictator For Life (BDFL), business intelligence, business process, cellular automata, cloud computing, commoditize, complexity theory, conceptual framework, continuous integration, data acquisition, domain-specific language, Douglas Hofstadter, Fellow of the Royal Society, finite state, Firefox, follow your passion, Frank Gehry, general-purpose programming language, Guido van Rossum, HyperCard, information retrieval, iterative process, John von Neumann, Larry Wall, linear programming, loose coupling, Mars Rover, millennium bug, NP-complete, Paul Graham, performance metric, Perl 6, QWERTY keyboard, RAND corporation, randomized controlled trial, Renaissance Technologies, Ruby on Rails, Sapir-Whorf hypothesis, Silicon Valley, slashdot, software as a service, software patent, sorting algorithm, Steve Jobs, traveling salesman, Turing complete, type inference, Valgrind, Von Neumann architecture, web application

Brian maintained a log of major decisions we made as we designed the language. I found his log invaluable. Computer Science What constitutes research in computer science? Al: This is a wonderful question, and one that does not have a well-defined answer. I think computer science research has broadened enormously in its scope. We still have the deep, unsolved, quintessential questions of computer science: how do we prove that a problem like factoring or an NP-complete problem is actually hard; how do we model complex systems like a human cell or the human brain; how can we construct scalable, trustworthy systems; how can programmers build arbitrarily reliable software; how can we make software with human-friendly characteristics like emotion or intelligence; how far can we extend Moore’s Law? Today, the scale and scope of computing has exploded. We are trying to organize and access all of the world’s information, and computers and computation are affecting all areas of daily life.